[P4Runtime/BMv2] Packets dropped despite inserting TableEntry via Python controller

Hello everyone,

I am working on a project using Mininet, BMv2 (simple_switch_grpc), and P4Runtime. My goal is to migrate from static table configuration (using s1-runtime.json files loaded at startup) to a dynamic Python controller.

The Problem: When I use the static JSON files, connectivity works fine (pingall succeeds). However, when I clear the static configuration and try to insert the exact same table entries using my Python P4Runtime script, the packets are dropped by the switch.

My Setup:

  • Topology: Simple linear or pod topology (h1 connected to s1).

  • P4 Program: Basic L3 forwarding. It has an ipv4_lpm table for routing and an ipv4_src table with a direct_counter for stats.

  • Controller: Python script using p4runtime_lib.

P4 Snippet (Ingress):

```p4
table ipv4_lpm {
    key = { hdr.ipv4.dstAddr: lpm; }
    actions = { ipv4_forward; drop; NoAction; }
    size = 1024;
    default_action = NoAction();
}
```

Python Controller Logic: I am connecting to the switch and inserting a forwarding rule for h2 (10.0.2.2):

```python
# Build the entry for dst=10.0.2.2 -> output_port=2
table_entry = p4info_helper.buildTableEntry(
    table_name="MyIngress.ipv4_lpm",
    match_fields={
        "hdr.ipv4.dstAddr": ["10.0.2.2", 32]
    },
    action_name="MyIngress.ipv4_forward",
    action_params={
        "dstAddr": "00:00:00:00:02:00",
        "port": 2
    }
)
s1.WriteTableEntry(table_entry)
```
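For anyone curious what the helper puts on the wire: a P4Runtime LPM match is a `FieldMatch.LPM` carrying a big-endian byte-string value plus an integer `prefix_len`. A minimal sketch of that encoding (pure Python; `encode_lpm` is my own illustrative helper, not part of `p4runtime_lib`):

```python
import socket

def encode_lpm(ip_str, prefix_len):
    """Encode an IPv4 LPM match the way P4Runtime expects it:
    a big-endian byte string for the value plus an integer prefix length."""
    value = socket.inet_aton(ip_str)  # 4 bytes, network (big-endian) order
    return value, prefix_len

value, plen = encode_lpm("10.0.2.2", 32)
print(value.hex(), plen)  # 0a000202 32
```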

Behavior:

  1. I run h1 ping h2.

  2. The entry is successfully written to the switch (no errors in Python output).

  3. However, ping fails.

My Question: Is it possible that I am missing the reverse path rule (for 10.0.1.1), causing the Echo Reply to be dropped? Or does P4Runtime require explicit handling of ARP packets if the static ARP entries from Mininet are not sufficient?

Any guidance on debugging P4Runtime table misses would be appreciated.

Thank you!

It is definitely true that if you want a host to get an echo reply packet in response to sending out an echo request packet, the switches must be able to forward packets between the two hosts in both directions. Whether your current code is neglecting to install the necessary entries, I cannot tell from what you have posted (although the fact that you thought to ask suggests you already suspect it might be).

I would need to review the example to be more certain (sorry, I haven’t done that yet), but I believe the same goes for static ARP entries in the switches. That is, the P4 program you are using cannot forward a packet to a host unless it has a table entry containing the destination MAC address to use in Ethernet headers of packets sent to that host.

Thanks, Andy; you were right.

I was only installing the forward path at first. After I:

  • inserted both LPM entries via the controller:
      dst=10.0.2.2 → port 2, dst MAC 08:00:00:00:02:22
      dst=10.0.1.1 → port 1, dst MAC 08:00:00:00:01:11
  • kept static ARP on the hosts (the P4 program doesn’t do ARP; it relies on the dst MAC provided in the L2 rewrite), and
  • removed the preloaded runtime_json so the controller is the single source of truth,

ICMP echo works in both directions.
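For readability I factored the two symmetric rules into a small loop. A sketch under my own conventions (`HOSTS` and `lpm_entry_kwargs` are hypothetical names; the dicts are the kwargs that would be passed to `p4info_helper.buildTableEntry`):

```python
# Hypothetical host table: dst IP -> (egress port, dst MAC for the L2 rewrite)
HOSTS = {
    "10.0.1.1": (1, "08:00:00:00:01:11"),
    "10.0.2.2": (2, "08:00:00:00:02:22"),
}

def lpm_entry_kwargs(dst_ip, port, dst_mac):
    """Return the buildTableEntry kwargs for one /32 forwarding rule."""
    return {
        "table_name": "MyIngress.ipv4_lpm",
        "match_fields": {"hdr.ipv4.dstAddr": [dst_ip, 32]},
        "action_name": "MyIngress.ipv4_forward",
        "action_params": {"dstAddr": dst_mac, "port": port},
    }

entries = [lpm_entry_kwargs(ip, port, mac) for ip, (port, mac) in HOSTS.items()]
# Then: for kw in entries: s1.WriteTableEntry(p4info_helper.buildTableEntry(**kw))
```

Installing both directions from one table makes it harder to forget the reverse path again.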

I’m now moving on to reading per-src traffic with a direct counter on ipv4_src.

Update: Forwarding via the Python controller now works (ping succeeds both ways). The remaining issue is reading the per-src direct counter bound to MyIngress.ipv4_src: my first approach (looping over ReadTableEntries(...) and checking entry.table_entry.counter_data) always printed zeros.

From what I understand, with direct counters you must read DirectCounterEntry objects, not table entries. Two concrete questions + a working snippet:

1) Correct way to read a direct counter

Is this the recommended pattern on this toolchain (BMv2 + P4Runtime 1.0)?

```python
from p4.v1 import p4runtime_pb2

ipv4_src_id = p4info_helper.get_tables_id("MyIngress.ipv4_src")

# First enumerate the exact table entries (to get their keys)
for resp in s1.ReadTableEntries(table_id=ipv4_src_id):
    for ent in resp.entities:
        te = ent.table_entry  # this has the match {hdr.ipv4.srcAddr=...}

        # Build a DirectCounterEntry tied to THIS table entry.
        # Note: DirectCounterEntry has no counter-id field in the P4Runtime
        # proto; it is identified solely by the table entry it is bound to.
        dce_msg = p4runtime_pb2.Entity()
        dce = dce_msg.direct_counter_entry
        dce.table_entry.CopyFrom(te)

        # Issue the read for this direct counter
        read = p4runtime_pb2.ReadRequest(device_id=s1.device_id)
        read.entities.add().direct_counter_entry.CopyFrom(dce)

        for r in s1.client_stub.Read(read):
            for e in r.entities:
                data = e.direct_counter_entry.data
                # TODO: parse te.match to print srcAddr alongside the numbers
                print("pkts=", data.packet_count, "bytes=", data.byte_count)
```

This works for me; please confirm this is the canonical way (vs any shorthand “read all” for direct counters).
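On the "parse te.match" TODO above: the exact-match value comes back as raw bytes, and my understanding is that P4Runtime's canonical byte-string representation may strip leading zero bytes, so it is safest to left-pad before decoding. A small pure-Python sketch (`exact_ipv4_from_match` is my own helper name):

```python
import socket

def exact_ipv4_from_match(value_bytes):
    """Decode the exact-match value bytes of hdr.ipv4.srcAddr back to
    dotted notation. Left-pad to 4 bytes in case the switch returned
    the canonical (leading-zero-stripped) byte string."""
    padded = value_bytes.rjust(4, b"\x00")
    return socket.inet_ntoa(padded)

print(exact_ipv4_from_match(b"\x0a\x00\x01\x01"))  # 10.0.1.1
```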

2) Do I need a no-op action to “register a hit”?

I currently use a benign action:

```p4
action count_packet() { }
table ipv4_src {
    key = { hdr.ipv4.srcAddr: exact; }
    actions = { count_packet; NoAction; }
    counters = cnt_src;  // direct_counter
}
```

My understanding:

  • Direct counters increment on table hit of a real entry (not the default action).
  • The specific action chosen doesn’t matter for the counter; but if I rely on the default action only, there’s no bound entry → no counter increment.

Is that accurate for BMv2/v1model?
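To make my mental model concrete, here is a toy simulation of those semantics (plain Python, not BMv2 code; `DirectCounterTable` is purely illustrative). The counter cell belongs to an installed entry, so only hits on real entries increment it, while a miss falls through to the default action, which has no bound cell:

```python
class DirectCounterTable:
    """Toy model: an exact-match table whose entries each own a counter cell."""
    def __init__(self):
        self.entries = {}  # key -> [packet_count, byte_count]

    def insert(self, key):
        self.entries[key] = [0, 0]

    def apply(self, key, pkt_len):
        if key in self.entries:          # table hit on a real entry
            cell = self.entries[key]
            cell[0] += 1
            cell[1] += pkt_len
            return "hit"
        return "miss"                    # default action: nothing counted

t = DirectCounterTable()
t.insert("10.0.1.1")
t.apply("10.0.1.1", 100)   # counted
t.apply("10.0.9.9", 100)   # miss -> not counted
print(t.entries["10.0.1.1"])  # [1, 100]
```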

What should I try first?

Since ping works but my reads show zeros, before I dive deeper I’d appreciate your recommended minimal debugging steps. For example:

  • Is the DirectCounterEntry read above the canonical approach?

  • Any shorthand to “read all” direct counters for a table?

  • Any common gotchas around re-loading pipeline config (clearing tables/counters), or exact-match keys for hdr.ipv4.srcAddr?

Thanks again for the pointers!