Queue depth of different priority queues

Hello everyone,

I’m working on a project that implements priority queues on top of the MRI exercise. I use simple_switch.
This is the topology:
[topology diagram]

First, I did some experiments on the original MRI exercise (i.e. a single queue). I tried to fill up the queue without using an iperf flow to create congestion; instead, I wanted the hosts themselves to create the congestion. Using the send.py program, I send a flow of 100 packets from h1 and h5 at the same time to h3, with no interval between packets (i.e. sleep(0)).
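
For concreteness, the sending loop is essentially the following. This is a minimal sketch, assuming a scapy-based send.py like the one in the MRI exercise; the destination address and ports are placeholders, and the MRI IP-option handling is omitted:

#!/usr/bin/env python3
# Minimal sketch of the burst sender (assumes scapy, as in the
# tutorials' send.py; the address and ports below are illustrative).
import time
from scapy.all import IP, UDP, Raw, send

DST = "10.0.3.3"   # h3 (placeholder address)
NUM_PKTS = 100     # the 100-packet flow

for i in range(NUM_PKTS):
    pkt = IP(dst=DST) / UDP(sport=1234, dport=4321) / Raw(load="pkt %d" % i)
    send(pkt, verbose=False)
    time.sleep(0)  # no inter-packet interval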

Without modifying the queue depth and queue rate → the queue is not filled up.
Configure queue depth and queue rate at s1:
[screenshot: queue depth and queue rate configuration at s1]
The queue is filled up:
[screenshot: queue depth results]
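
(These are the simple_switch_CLI set_queue_depth / set_queue_rate commands; for reference, their general form is

set_queue_depth <nb_pkts> <egress_port>
set_queue_rate <rate_pps> <egress_port>

with the actual values in the screenshot above.)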


Next, I implemented priority queues on top of the original MRI code to see if I can gather information about the different queues. First, I use a single swtrace table. This is the .p4 code:
/////////My Ingress///////////
(...)
  apply {
        if (hdr.ipv4.isValid()) {
            ipv4_lpm.apply();

            // 10.0.1.1 (h1): intended high-priority flow
            if (hdr.ipv4.srcAddr == 0x0a000101){
                standard_metadata.priority = (bit<3>)7;
            }
            // 10.0.1.5 (h5): intended low-priority flow
            if (hdr.ipv4.srcAddr == 0x0a000105){
                standard_metadata.priority = (bit<3>)1;
            }
(...)

control MyEgress(inout headers hdr,
                 inout metadata meta,
                 inout standard_metadata_t standard_metadata) {

     action add_swtrace(switchID_t swid) {
         hdr.mri.count = hdr.mri.count + 1;
         hdr.swtraces.push_front(1);
         //hs.push_front(int count): shifts hs “right” by count.
         // According to the P4_16 spec, pushed elements are invalid, so we need
         // to call setValid(). Older bmv2 versions would mark the new header(s)
         // valid automatically (P4_14 behavior), but starting with version 1.11,
         // bmv2 conforms with the P4_16 spec.
         hdr.swtraces[0].setValid();
         hdr.swtraces[0].swid = swid;
         hdr.swtraces[0].qdepth = (qdepth_t)standard_metadata.deq_qdepth;
         hdr.swtraces[0].timedelta = (timedelta_t)standard_metadata.deq_timedelta;

         hdr.ipv4.ihl = hdr.ipv4.ihl + 3; //swid + qdepth + timedelta
         hdr.ipv4_option.optionLength = hdr.ipv4_option.optionLength + 12;
         hdr.ipv4.totalLen = hdr.ipv4.totalLen + 12;
     }

table swtrace {
        actions = {
            add_swtrace;
            NoAction;
        }
        default_action = NoAction();
    }

    apply {
        if (hdr.mri.isValid()) {
            hdr.ipv4.tos = (bit<8>)standard_metadata.qid;
            swtrace.apply();
        }
    }
}

I tried to create congestion from the hosts like I did earlier (with similarly configured queue depths and queue rates), but it failed: the queues for both h1's and h5's traffic never fill up, and the reported depth is always 0. As a result, the priority feature is not triggered, i.e. h3 does not receive h1's packets first and h5's packets afterwards.
Result from h3:
[screenshot: receive output at h3]

I then modified the queue parameters:

set_queue_depth [number] [egress port] [priority]
set_queue_depth 64 4 7
set_queue_rate 30 4 7
set_queue_depth 64 4 1
set_queue_rate 30 4 1

But the result is still the same.
So the first question I have is:

1. Is this (the queues staying empty without external congestion) an effect of implementing priority queues?

Therefore, I had to create congestion by using iperf to send a UDP flow (3 Mbit/s for 7 seconds) from h2 to h4:

iperf -c 10.0.2.4 -u -b 3M -t 7 -i 1

and then sent the flows from h1 and h5 at the same time, as before.
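
The whole sequence from the Mininet CLI looks something like this (the send.py arguments here are placeholders, not the exact ones I used):

mininet> h2 iperf -c 10.0.2.4 -u -b 3M -t 7 -i 1 &
mininet> h1 python3 send.py <h3_addr> "from h1" &
mininet> h5 python3 send.py <h3_addr> "from h5" &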
The queue depths now do fill up, but they are quite high, and the queuing delay does not increase when the queue is filled up. Again, I can't see the effect of the priority feature.
Result from h3:
[screenshot: receive output at h3]

This time, I use multiple swtrace tables, one per qid; they are identical to each other apart from their names.
action add_swtrace_7(switchID_t swid) {
        hdr.mri.count = hdr.mri.count + 1;
        hdr.swtraces.push_front(1);
        //hs.push_front(int count): shifts hs “right” by count.
        // According to the P4_16 spec, pushed elements are invalid, so we need
        // to call setValid(). Older bmv2 versions would mark the new header(s)
        // valid automatically (P4_14 behavior), but starting with version 1.11,
        // bmv2 conforms with the P4_16 spec.
        hdr.swtraces[0].setValid();
        hdr.swtraces[0].swid = swid;
        hdr.swtraces[0].qdepth = (qdepth_t)standard_metadata.deq_qdepth;
        hdr.swtraces[0].timedelta = (timedelta_t)standard_metadata.deq_timedelta;

        hdr.ipv4.ihl = hdr.ipv4.ihl + 3; //swid + qdepth + timedelta
        hdr.ipv4_option.optionLength = hdr.ipv4_option.optionLength + 12;
        hdr.ipv4.totalLen = hdr.ipv4.totalLen + 12;
    }
action add_swtrace_1(switchID_t swid) {
 (...)
}
table swtrace_7 {
        actions = {
            add_swtrace_7;
            NoAction;
        }
        default_action = NoAction();
    }

    table swtrace_1 {
        actions = {
            add_swtrace_1;
            NoAction;
        }
        default_action = NoAction();
    }
apply {

        if (hdr.mri.isValid()) {
            hdr.ipv4.tos = (bit<8>)standard_metadata.qid;
            // swtrace.apply();
            if (hdr.ipv4.tos == 7) {
              swtrace_7.apply();
            }
            if (hdr.ipv4.tos == 1) {
              swtrace_1.apply();
            }
        }
    }
}

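As an aside, the same branching could be written directly on standard_metadata.qid instead of going through tos; a minimal sketch of the same apply block under that assumption:

    apply {
        if (hdr.mri.isValid()) {
            hdr.ipv4.tos = (bit<8>)standard_metadata.qid; // still exported so the receiver can see the qid
            if (standard_metadata.qid == 7) {
                swtrace_7.apply();
            } else if (standard_metadata.qid == 1) {
                swtrace_1.apply();
            }
        }
    }
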
The result is still the same: the queue is almost always filled up to its maximum configured depth, and there is no effect of the priority feature; h3 still receives h5's packets first. So several more questions arise from this:

2. Is it possible to collect the queue depth of different priority queues?

3. If the priority feature is not in effect, why doesn't the queuing delay increase even though the queue is filled up most of the time?

4. Is the logic I provided above to get the queue depth of each priority queue separately correct, or is there a misconception in my approach?

Full version of the code: Online Text Editor - Create, Edit, Share and Save Text Files


A very long post! Thank you for reading this far. If you need any clarification about what I'm doing, feel free to ask. I would greatly appreciate any suggestions or related information. Have a good day!