Added a custom header for INT after the Ethernet header: works in the behavioral model, but not on the target

#include <core.p4>
#include <xsa.p4>

...

const bit<16> HNEW_TYPE  = 0x1996;                                                       //CUSTOM ETHERTYPE FOR THE NEW HEADER

...

// ****************************************************************************** //
// *************************** H E A D E R S  *********************************** //
// ****************************************************************************** //

...

//========================== ADDING THE NEW HEADER FOR THE TIMER ===============================================================

header timer_t {
    bit<64> ingress_timing;                                                                //TO CAPTURE THE INGRESS TIMESTAMP
    bit<16> intermediate_type;                                                             //NEXT PROTOCOL IDENTIFIER
}

//====================================================================================================================================

...


// ****************************************************************************** //
// ************************* S T R U C T U R E S  ******************************* //
// ****************************************************************************** //

// header structure
struct headers {
    eth_mac_t    eth;
    timer_t      timer;                                       //NEW HEADER TO BE ADDED
    vlan_t[2]    vlan;
    ipv4_t       ipv4;
    ipv4_opt_t   ipv4opt;
    ipv6_t       ipv6;
    tcp_t        tcp;
    tcp_opt_t    tcpopt;
    udp_t        udp;
}

...

// ****************************************************************************** //
// *************************** P A R S E R  ************************************* //
// ****************************************************************************** //

...

// ****************************************************************************** //
// **************************  P R O C E S S I N G   **************************** //
// ****************************************************************************** //
...

    apply {
        
        ...

        //========================== ADDING THE NEW HEADER FOR THE TIMER AND SETTING IT VALID ===============================================================

        if (hdr.eth.isValid()){
            hdr.timer.setValid();                                                                                             //TO SET THE NEW HEADER TO BE VALID
            hdr.timer.ingress_timing = smeta.ingress_timestamp;                                                             //ASSIGNING THE INGRESS TIMESTAMP VALUE TO THE TIMER HEADER

            hdr.timer.intermediate_type = hdr.eth.type;                                                                       //ASSIGNING THE PROTOCOL ID OF NEXT HEADER AFTER THE ETHERNET HEADER (VLAN, IPv4, IPv6, etc) TO THE PROTOCOL TYPE OF NEW HEADER FOR PARSING IN DESTINATION
            hdr.eth.type = HNEW_TYPE;                                                                                         //ASSIGNING THE CUSTOM PROTOCOL ID OF NEW HEADER TO THE PROTOCOL TYPE OF ETHERNET HEADER FOR PARSING IN DESTINATION 
        }
        
        //===================================================================================================================================================    

...
 

// ****************************************************************************** //
// ***************************  D E P A R S E R  ******************************** //
// ****************************************************************************** //


control MyDeparser(packet_out packet, 
                   in headers hdr,
                   inout metadata meta, 
                   inout standard_metadata_t smeta) {
    apply {
        packet.emit(hdr.eth);
        packet.emit(hdr.timer);                                                             //TO EMIT THE ADDED HEADER
        packet.emit(hdr.vlan);
        packet.emit(hdr.ipv4);
        packet.emit(hdr.ipv4opt);
        packet.emit(hdr.ipv6);
        packet.emit(hdr.tcp);
        packet.emit(hdr.tcpopt);
        packet.emit(hdr.udp);
    }
}

// ****************************************************************************** //
// *******************************  M A I N  ************************************ //
// ****************************************************************************** //

XilinxPipeline(
    MyParser(), 
    MyProcessing(), 
    MyDeparser()
) main;

Hello everyone,

I have been trying to calculate one-way delay by adding the ingress timestamp of one hop to a custom INT header inserted between the existing Ethernet and VLAN/IPv4/IPv6 headers. The P4 code to do that is shown above. I am not using the v1model architecture; I am using xsa.p4 instead, which is provided by Xilinx. The issue is that the above code runs perfectly and does what it should in the behavioral model, but when I load the bitstream onto the actual FPGAs, it either doesn't work as it should or crashes the driver.

Looking at the code, you might already have noticed that I didn't follow any conventional INT specification. Does anybody know what is wrong with my code? Any other insight would be very useful to me as well. I tried to keep it as simple as possible: I added a field to carry the timestamp and a new protocol identifier for the header that originally followed the Ethernet header. At the next hop, for the header removal part, I just parse the custom header, calculate whatever I need to, and finally set the custom header to invalid; that code is not shown here because this listing only covers adding the header.
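To make the removal step concrete, here is a rough Python sketch of the byte-level equivalent of what my removal node does (this is just an illustration, not my actual P4 code; the field sizes match the timer_t definition above, and the egress timestamp is supplied by the caller):

```python
import struct

HNEW_TYPE = 0x1996  # custom EtherType I chose for the timer header

def strip_timer_header(frame: bytes, egress_ts: int):
    """Remove the 10-byte timer header (if present) right after the
    Ethernet header and return (restored_frame, one_way_delay)."""
    eth_type = struct.unpack_from("!H", frame, 12)[0]
    if eth_type != HNEW_TYPE:
        return frame, None                      # no timer header present
    # timer_t: bit<64> ingress_timing, bit<16> intermediate_type
    ingress_ts, orig_type = struct.unpack_from("!QH", frame, 14)
    # Restore the original EtherType and drop the 10 inserted bytes
    restored = frame[:12] + struct.pack("!H", orig_type) + frame[24:]
    return restored, egress_ts - ingress_ts
```

In P4 terms, the last two lines correspond to writing hdr.timer.intermediate_type back into hdr.eth.type and calling hdr.timer.setInvalid() before deparsing.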

Also, I observed that when I remove the .isValid() check from the code, the NIC driver no longer fails. What could be the issue?

INPUT:

000000 11 11 11 11 11 12 aa aa aa aa aa ab 08 00 45 03 
000010 00 4b 43 57 00 00 39 11 1d 1f 7c f6 cd 9c cc 93 
000020 0a 03 21 1e 5b 98 00 37 aa 9a 6b fa a4 95 d2 54 
000030 47 71 92 29 8b 0f 8d e7 e2 99 08 f0 13 0b ef 64 
000040 07 3b fe e0 d4 6a ad 3f 5b 3e fd 58 33 49 fc 8f 
000050 86 00 1c 4f 00 a0 d0 6d 70

OUTPUT:

000000 11 11 11 11 11 12 aa aa aa aa aa ab 19 96 01 23 
000010 45 67 89 ab cd ef 08 00 45 03 00 4b 43 57 00 00 
000020 39 11 1d 1f 7c f6 cd 9c cc 93 0a 03 21 1e 5b 98 
000030 00 37 aa 9a 6b fa a4 95 d2 54 47 71 92 29 8b 0f 
000040 8d e7 e2 99 08 f0 13 0b ef 64 07 3b fe e0 d4 6a 
000050 ad 3f 5b 3e fd 58 33 49 fc 8f 86 00 1c 4f 00 a0 
000060 d0 6d 70

Also, I have attached the behavioral model's output for one of the test packets above. It shows that the program works as it should: the protocol identifier of the Ethernet header is changed to 0x1996, and the protocol identifier of the custom header is set to 0x0800, which corresponds to the IPv4 header originally present after the Ethernet header. The timestamp data is also inserted in between.
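For anyone who wants to double-check, here is a quick Python comparison of the two dumps (my own sanity check, not part of the P4 program; offsets stripped and bytes joined). It confirms the output is exactly the input with the EtherType rewritten and 10 bytes inserted:

```python
# INPUT dump from above, as one byte string
inp = bytes.fromhex(
    "111111111112aaaaaaaaaaab08004503"
    "004b4357000039111d1f7cf6cd9ccc93"
    "0a03211e5b980037aa9a6bfaa495d254"
    "477192298b0f8de7e29908f0130bef64"
    "073bfee0d46aad3f5b3efd583349fc8f"
    "86001c4f00a0d06d70"
)
# OUTPUT dump from above
out = bytes.fromhex(
    "111111111112aaaaaaaaaaab19960123"
    "456789abcdef08004503004b43570000"
    "39111d1f7cf6cd9ccc930a03211e5b98"
    "0037aa9a6bfaa495d254477192298b0f"
    "8de7e29908f0130bef64073bfee0d46a"
    "ad3f5b3efd583349fc8f86001c4f00a0"
    "d06d70"
)
assert out[:12] == inp[:12]                             # MAC addresses unchanged
assert out[12:14] == b"\x19\x96"                        # eth.type rewritten to HNEW_TYPE
assert out[14:22] == bytes.fromhex("0123456789abcdef")  # ingress_timing (8 bytes)
assert out[22:24] == inp[12:14] == b"\x08\x00"          # original EtherType preserved
assert out[24:] == inp[14:]                             # rest of the packet untouched
assert len(out) - len(inp) == 10                        # exactly sizeof(timer_t) added
```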

It doesn’t work on the actual target though. I don’t understand why.

Could somebody please help me out? Hoping to hear from you soon.

Thanks and regards,
Sandeep Bal

Hello Sandeep,

First of all, please keep in mind that the majority of the people on this forum do not have access to your target (and even if they did, the information you have provided so far would not be enough to reproduce your issue, since you didn't attach the full source code of your program, the table programming, etc.).

Therefore, it is essential that you describe the problem in terms as specific as possible, and the terms you used ("either doesn't work as it should or crashes the driver in itself") are less than precise. How should it work? Which driver crashed? Nothing is clear here (you do mention some NIC driver later in the text, but it is still not clear what its function is in your test).

At the same time, you correctly pointed out that your packets with the INT header attached will look somewhat unusual. To most standard software they will look like regular L2 (Ethernet) packets with a custom EtherType (0x1996), nothing more. Certainly, no standard software will be able to parse the packet past the Ethernet header. Generally speaking, no reasonably written driver should crash upon receiving such a packet, but many stacks will simply drop it. BTW, how do you know that the driver crashed? On many systems such a crash would be fatal or nearly so.

It is also not clear how the v1model program that you ran on the behavioral model is related to the XSA architecture one. Or did you use the model that supports XSA (provided by the vendor)? If there is a discrepancy, it might make sense for you to reach out to them.

Going forward, it would be useful if you debugged further so that you can at least ascertain whether the packets egress the system in the first place. If they do, try to capture them, or use counters (hopefully provided by the target) to double-check. To capture such a packet you will need a tool such as Wireshark, and you must make sure it puts the corresponding interface into promiscuous mode (which it typically does by default).

Happy hacking,
Vladimir

Hello,

I understand what you're saying. Sorry for the missing source code and the vague explanation. I also understand that people on this forum might not have an extensive background in the target I am using. What I was actually looking for were insights into the P4 code itself, which, I understand, might differ a bit depending on my target, but would still be helpful nonetheless. I was also wondering whether my code is adding the headers incorrectly, or doing something that isn't allowed in INT and might be causing network issues.

Is it okay if I reformulate the problem in this thread along with the source code, or would you suggest I make a separate post? I have two long source files, though: one adds the custom header at one node and the other removes it at the next node. Posting them both together might make for a rather long post.

Hoping to hear from you soon.

Thanks and regards,
Sandeep Bal