Dear Community,
I am running an experiment on the Fabric-Platform using bmv2 switches. For every packet I am calculating the min, max, and average enq_qdepth, patching the accumulated values into a custom header every 0.001 seconds, and receiving them on the receiver side. The code is shown below:
control MyEgress(inout headers hdr,
                 inout metadata meta,
                 inout standard_metadata_t standard_metadata) {
    // Index 0: running sum of enq_qdepth, 1: packet count,
    // 2: minimum enq_qdepth, 3: maximum enq_qdepth.
    register<bit<32>>(512) input_port_pkt_count;

    apply {
        // Accumulate the sum of enq_qdepth.
        bit<32> jemi;
        input_port_pkt_count.read(jemi, (bit<32>) 0);
        jemi = jemi + (bit<32>) standard_metadata.enq_qdepth;
        input_port_pkt_count.write((bit<32>) 0, jemi);

        // Count the packets seen since the last report.
        bit<32> counter1;
        input_port_pkt_count.read(counter1, (bit<32>) 1);
        counter1 = counter1 + 1;
        input_port_pkt_count.write((bit<32>) 1, counter1);

        // Track the minimum enq_qdepth. Note: bmv2 registers are
        // zero-initialized, so the reported minimum stays 0 until the
        // first report resets index 2 to 100.
        bit<32> jmin;
        input_port_pkt_count.read(jmin, (bit<32>) 2);
        if ((bit<32>) standard_metadata.enq_qdepth < jmin) {
            jmin = (bit<32>) standard_metadata.enq_qdepth;
            input_port_pkt_count.write((bit<32>) 2, jmin);
        }

        // Track the maximum enq_qdepth.
        bit<32> jmax;
        input_port_pkt_count.read(jmax, (bit<32>) 3);
        if ((bit<32>) standard_metadata.enq_qdepth > jmax) {
            jmax = (bit<32>) standard_metadata.enq_qdepth;
        }
        input_port_pkt_count.write((bit<32>) 3, jmax);

        // On every probe packet (anything that is not ICMP, TCP, or UDP),
        // export the accumulated statistics into the custom header and
        // reset the registers for the next interval.
        if ((hdr.ipv4.protocol != 0x01) && (hdr.ipv4.protocol != 0x06) && (hdr.ipv4.protocol != 0x11)) {
            hdr.my_meta.setValid();

            input_port_pkt_count.read(jemi, (bit<32>) 0);
            hdr.my_meta.enq_timestamp = jemi;       // carries the sum
            input_port_pkt_count.write((bit<32>) 0, 0);

            input_port_pkt_count.read(counter1, (bit<32>) 1);
            hdr.my_meta.deq_timedelta = counter1;   // carries the count
            input_port_pkt_count.write((bit<32>) 1, 0);

            input_port_pkt_count.read(jmin, (bit<32>) 2);
            hdr.my_meta.enq_qdepth = jmin;          // carries the minimum
            input_port_pkt_count.write((bit<32>) 2, 100);

            input_port_pkt_count.read(jmax, (bit<32>) 3);
            hdr.my_meta.deq_qdepth = jmax;          // carries the maximum
            input_port_pkt_count.write((bit<32>) 3, 0);
        }
    }
}
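For context, the my_meta header that the egress fills in is assumed to look roughly like this (field names are taken from the code above; the 32-bit widths are my assumption, and the exact definition is in the GitHub link below):

header my_meta_t {
    bit<32> enq_timestamp;   // repurposed here to carry the sum of enq_qdepth
    bit<32> deq_timedelta;   // repurposed here to carry the packet count
    bit<32> enq_qdepth;      // carries the minimum enq_qdepth
    bit<32> deq_qdepth;      // carries the maximum enq_qdepth
}

The receiver then recovers the average as enq_timestamp / deq_timedelta for each reporting interval.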
The experiment results are as follows:
I am generating congestion with iperf3:
iperf3 -c 192.168.2.10 -u -b -1 -t 20
After pausing for 10 seconds, I run the same command again.
On the graph, up to the 40-second mark the bmv2 switch behaves as expected, but during the second congestion the max enq_qdepth stays below 20, while I expected it to be around 60, as in the first congestion. What may be the issue?
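To narrow this down, I am thinking of also tracking a running max that is never reset, in a spare register index. A sketch, assuming index 4 is otherwise unused:

// Hypothetical diagnostic: a running max over the whole run, never reset.
bit<32> jmax_all;
input_port_pkt_count.read(jmax_all, (bit<32>) 4);
if ((bit<32>) standard_metadata.enq_qdepth > jmax_all) {
    jmax_all = (bit<32>) standard_metadata.enq_qdepth;
    input_port_pkt_count.write((bit<32>) 4, jmax_all);
}

If index 4 (readable at runtime with simple_switch_CLI's register_read) climbs back towards 60 during the second congestion while the reported max stays below 20, the report/reset path is suspect; if it also stays below 20, the queue genuinely is not building up the second time.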
I checked the deq_timedelta values with only the average calculation, and then with the min, max, and average calculations together. With only the average calculation, the maximum deq_timedelta was 60; with min, max, and average all together, the maximum deq_timedelta was 1400, about 23 times higher. Could the calculation time be affecting the queue buildup?
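To test that hypothesis directly, one option is to time the egress pipeline itself. A minimal sketch, assuming a new, otherwise unused register and using standard_metadata.egress_global_timestamp from v1model (microseconds since switch start):

// Hypothetical instrumentation; egress_ts is a new register declared in MyEgress.
register<bit<48>>(2) egress_ts;  // index 0: last egress timestamp, index 1: largest gap seen

// Inside apply:
bit<48> last_ts;
bit<48> max_gap;
egress_ts.read(last_ts, (bit<32>) 0);
egress_ts.read(max_gap, (bit<32>) 1);
bit<48> gap = standard_metadata.egress_global_timestamp - last_ts;
if (last_ts != 0 && gap > max_gap) {
    egress_ts.write((bit<32>) 1, gap);
}
egress_ts.write((bit<32>) 0, standard_metadata.egress_global_timestamp);

Comparing the largest gap with and without the min/max updates would show whether the extra register accesses are slowing down egress enough to matter.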
I am attaching the deq_timedelta graph below as well, in case it helps to resolve the issue.
The P4 program is on GitHub: https://github.com/nagmat1/Routing_enq_deq_depth/blob/main/switch/min_max.p4
I am limiting the bandwidth on the switch with:
sudo tc qdisc add dev enp7s0 root handle 1:0 netem delay 1ms
sudo tc qdisc add dev enp7s0 parent 1:1 handle 10: tbf rate 1gbit buffer 160000 limit 300000
UDP packets are sent at more than 10 Gbit/s, and on the receiver side they arrive at 995 Mbit/s.
Kind regards,
Nagmat