Had a quick question regarding the speed of digest messages: on BMv2 switches, what's the typical latency between when a digest message is generated by the data plane and when it arrives at the P4Runtime API client?
I can add some code here later, but it seems to take as long as 1 second for a digest message to reach my P4Runtime API client from the time the switch generates it. Is this expected?
I am not sure where in the implementation to look to confirm this, but here is one possible explanation for what is happening:
One of the reasons the Digest extern exists is to batch small digest messages together into groups and deliver them to the control-plane software all at once, so that the control plane's message-processing rate can be lower than once per packet that generates a digest message.
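To make the batching tradeoff concrete, here is a toy Python model of the idea (not BMv2's actual code): messages are buffered and flushed either when the batch is full or when the oldest buffered message has waited longer than a timeout. The class and parameter names are my own invention for illustration.

```python
class DigestBatcher:
    """Toy model of digest batching: buffer messages and flush either
    when the batch is full or when the oldest buffered message has
    waited longer than max_timeout seconds."""

    def __init__(self, max_list_size, max_timeout):
        self.max_list_size = max_list_size
        self.max_timeout = max_timeout
        self.buffer = []
        self.first_ts = None   # arrival time of oldest buffered message
        self.flushed = []      # batches delivered to the "control plane"

    def add(self, msg, now):
        if not self.buffer:
            self.first_ts = now
        self.buffer.append(msg)
        if len(self.buffer) >= self.max_list_size:
            self.flush()

    def tick(self, now):
        # Called periodically; flushes if the oldest message timed out.
        if self.buffer and now - self.first_ts >= self.max_timeout:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flushed.append(self.buffer)
            self.buffer = []
            self.first_ts = None

batcher = DigestBatcher(max_list_size=3, max_timeout=1.0)
batcher.add("mac-A", now=0.0)
batcher.add("mac-B", now=0.1)
batcher.tick(now=0.5)    # nothing flushed: batch not full, not timed out
batcher.tick(now=1.2)    # oldest message has waited >= 1.0 s -> flush
print(batcher.flushed)   # [['mac-A', 'mac-B']]
```

Note how a partially filled batch sits in the buffer until the timeout fires; that timeout would show up to the client as exactly the kind of fixed delivery delay you are measuring.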
It is possible that the BMv2 implementation sets a timer when it receives the first digest message and does not send anything to the P4Runtime API client until some delay has passed (perhaps something like the 1 second you are measuring), in case more digest messages arrive soon afterwards and can be batched together with the first one.
If this is the case, and one could find the lines of code that cause this to happen, it might be very straightforward to either disable this timer or reduce it to a smaller value that you find acceptable (e.g. 100 ms, perhaps).
Alternatively, if you are OK with the P4Runtime API client processing a separate message for every packet that currently causes a digest message, you could change your code to use a packet clone/mirror operation instead of a Digest operation. I would bet that this has lower latency, unless the implementation has a similar waiting period to batch clone/mirror packets together (but I am not aware of anything like that in the implementation). If you do not want the entire contents of large packets to be cloned/mirrored, most clone/mirror operations in P4 support truncating the packet to a fairly small size, e.g. 32 or 64 bytes starting at the beginning of the Ethernet frame.
I can peek into the bmv2 repo and look into that “timer” theory you’ve got – thanks for the pointer!
Quick question on the mirroring though – in order to relay a packet to the P4Runtime API, even if I mirror a packet, won't I still have to use a Digest operation? I guess I got a little confused when you mentioned a clone/mirror option "instead of" a Digest operation.
To be more complete, any packet sent to the CPU port in BMv2 becomes a PacketIn P4Runtime API message to the client. If you clone/mirror a packet, send the clone/mirror to the CPU port, and prepend a to-CPU header carrying whatever metadata fields you want, your client software will receive it. You can see an example of such a program for the BMv2 software switch here:
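On the client side, handling such a PacketIn payload is just a matter of stripping your to-CPU header before looking at the Ethernet frame. A minimal sketch, assuming a hypothetical 4-byte header (2 bytes ingress port, 2 bytes reason code, both big-endian) – the actual layout is whatever your P4 program defines:

```python
import struct

# Hypothetical to-CPU header the P4 program prepends to the cloned
# packet: 2 bytes ingress port, 2 bytes reason code, big-endian.
CPU_HEADER = struct.Struct("!HH")

def parse_packet_in(payload: bytes):
    """Split a PacketIn payload into (ingress_port, reason, inner_frame)."""
    ingress_port, reason = CPU_HEADER.unpack_from(payload)
    return ingress_port, reason, payload[CPU_HEADER.size:]

# Example: port 7, reason 1, followed by the start of an Ethernet frame.
payload = CPU_HEADER.pack(7, 1) + b"\xff\xff\xff\xff\xff\xff"
port, reason, frame = parse_packet_in(payload)
print(port, reason, frame)   # 7 1 b'\xff\xff\xff\xff\xff\xff'
```

This is the sense in which clone/mirror replaces the Digest operation: the "digest" fields simply travel in your own header on the cloned packet.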
In addition, any PacketOut message from the P4Runtime API client goes to the P4Runtime API server and is then sent into the device as a packet input on the CPU port.
As a likely place in the behavioral-model source code where digests pick up a 1 second delay, see the 1000 ms timeout value in src/bm_sim/learning.cpp on the main branch of the p4lang/behavioral-model repository on GitHub.
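If the switch honors the P4Runtime DigestEntry configuration (as simple_switch_grpc should), you may not even need to patch that source file: the P4Runtime spec lets the client set the batching parameters per digest, including max_timeout_ns, which bounds how long the server may buffer a digest before sending it. A hedged config-fragment sketch, assuming the usual p4runtime protobuf bindings and that digest_id comes from your P4Info (sending the WriteRequest over an established stream/stub is omitted):

```python
# Sketch only: field names are from the p4.v1 P4Runtime protos;
# the gRPC connection and election-id handling are assumed to exist.
from p4.v1 import p4runtime_pb2

def digest_config_update(digest_id, timeout_ms=100, list_size=64):
    """Build an Update that configures digest batching for one digest."""
    update = p4runtime_pb2.Update()
    update.type = p4runtime_pb2.Update.INSERT
    entry = update.entity.digest_entry
    entry.digest_id = digest_id                      # from P4Info
    entry.config.max_timeout_ns = timeout_ms * 1_000_000
    entry.config.max_list_size = list_size
    entry.config.ack_timeout_ns = 1_000_000_000      # ack wait, 1 s
    return update
```

With max_timeout_ns set to 100 ms, the switch should flush a pending digest list after at most roughly that long, instead of whatever default it uses.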
Note that the digest message capability is often called "learn" or "learning" in the behavioral-model source code, because one of the most common applications of such a capability in fixed-function L2 switches is implementing an L2 learning bridge. However, the digest extern in P4-programmable devices is NOT limited to this application – you can use it to send whatever values you like from the data plane to the control plane, not only those that are useful for a learning bridge implementation.
There may be better references for a learning bridge, but this is one I found quickly: 123f11_Lec7 lecture slides (ucsd.edu)
Forgot to follow up on this. This helped immensely! Thank you Andy