Running P4 on Docker

Hi everybody.
I'm new to P4 and I'm wondering if it's possible to run a P4 program inside a Docker container. I tried GitHub - cslev/p4-bmv2-docker: P4 BMV2 docker container and the Docker bmv2 containers, but they don't include instructions about how the architecture should be deployed.
I was trying a simple example with 3 hosts running in containers and a BMv2 container acting as the main gateway, with a simple P4 program that drops all the traffic to one specific container while allowing communication between the other two.
Is it possible to build such a scenario? Does anyone have a reference that could help me understand how to implement this kind of solution? Thank you so much!

Hi @dnredson

You can do that; here is the official Docker Hub repo of the BMv2 switch: Docker.
Here is a command example:
sudo docker run --privileged --net=host --rm -it --name bmv2 -d docker_image_name simple_switch_grpc --device-id 1 -i 1@v1 -i 2@v2 -i 4@v3 -i 3@v4 -i 5@v5 --thrift-port 9090 -Ldebug --no-p4 -- --cpu-port 255 --grpc-server-addr 0.0.0.0:50001

These flags belong to the docker run command (sudo docker run):

  • --privileged : gives the container access to all devices located under the /dev directory on the host machine
  • --net=host : a networking mode in which the Docker container shares its network namespace with the host machine
  • --rm : automatically removes the container when it is stopped
  • -it : creates an interactive shell in the container
  • --name pick_a_docker_name : lets you provide a meaningful identifier for your container
  • -d : runs the container detached (in the background)

Those are the classic flags that you use for running a bmv2 switch (in this case I'm running a gRPC simple switch: simple_switch_grpc --device-id 1 -i 1@v1 -i 2@v2 -i 4@v3 -i 3@v4 -i 5@v5 --thrift-port 9090 -Ldebug --no-p4 -- --cpu-port 255 --grpc-server-addr 0.0.0.0:50001), like the one that you can see at GitHub - p4lang/behavioral-model: The reference P4 software switch.
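
Note that the v1..v5 interfaces passed with -i must already exist on the host before the switch starts. A minimal sketch for creating them as veth pairs (the *_peer names here are only illustrative, not part of the original command):

# create five veth pairs and bring both ends up
for i in 1 2 3 4 5; do
  sudo ip link add "v$i" type veth peer name "v${i}_peer"
  sudo ip link set "v$i" up
  sudo ip link set "v${i}_peer" up
done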

Thank you so much for the response!
I used the suggested Docker image and created four veth interfaces (veth0, veth1, veth2, veth3): one connected to the switch and the other three, one for each container.
After that, I started the switch with the interfaces as suggested: -i 1@veth1 -i 2@veth2 -i 3@veth3 -i 4@veth0
Then, for each entry, I issued a command like this example:
table_add MyIngress.ipv4_lpm ipv4_forward 10.0.0.3 => fa:f9:6b:05:4c:08 3
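
For reference, the installed entries and the port-to-interface mapping can be checked from the runtime CLI (a sketch, assuming the default Thrift port 9090):

echo "table_dump MyIngress.ipv4_lpm" | simple_switch_CLI
echo "show_ports" | simple_switch_CLI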

Also, on each Docker container, I created a route pointing to the bmv2 as the default gw, but it does not seem to work yet. Is there any other documentation that covers deployment using Docker and details the network configuration? I created the network as host, but I'm still not able to make the bmv2 Docker container run properly.
Has anyone ever tried something like that?

Hi, can you provide an image of the topology?

Hi Davide,
I'm trying to create something like this:


I want to use the bmv2 switch to control the Docker network and then run two experiments: the first is a simple firewall that blocks any traffic to or from 10.0.0.3, and the second is to clone packets to two different hosts. But the illustrated topology is not working yet.
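
For the firewall experiment, I expect to install a rule along these lines (just a sketch: it assumes the program exposes the ipv4_lpm table and a parameterless drop action, and it only covers the direction towards 10.0.0.3; the reverse direction would need a match on the source address):

echo "table_add MyIngress.ipv4_lpm drop 10.0.0.3/32 =>" | simple_switch_CLI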

Hi @dnredson

  1. Did you try to sniff packets on the bmv2 interfaces?
  2. What do you get if you run simple_switch with the flag --log-console? This option lets you see what is going on in the pipeline (see the sketch below).
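
For example (a sketch; the interface names are taken from your earlier message, and program.json is just a placeholder for your compiled P4 program):

# sniff on one of the switch interfaces from the host
sudo tcpdump -i veth1 -e -n
# run the switch in the foreground with console logging to follow the pipeline
sudo simple_switch --log-console -i 1@veth1 -i 2@veth2 -i 3@veth3 -i 4@veth0 program.json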

Hi @DavideS ,
Thank you again for your availability and sorry for the late response.
I'm trying to create a sequence of P4 switches running on Docker and wrote a script to set up the environment:
#!/bin/bash

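# Start host containers h1/h2 and switch containers sw1/sw2 (the switches use --network host)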
docker run -itd --name h1 --privileged -v shared:/codes --workdir /codes dnredson/hostup
docker run -itd --name h2 --privileged -v shared:/codes --workdir /codes dnredson/hostup
docker run -itd --name sw1 --network host --privileged -v shared:/codes --workdir /codes dnredson/p4d
docker run -itd --name sw2 --network host --privileged -v shared:/codes --workdir /codes dnredson/p4d
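# Create three veth pairs: h1<->sw1 (veth1/veth2), sw1<->sw2 (veth3/veth4), sw2<->h2 (veth5/veth6)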
sudo ip link add veth1 type veth peer name veth2
sudo ip link add veth3 type veth peer name veth4
sudo ip link add veth5 type veth peer name veth6
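# Look up each container's PID so its network namespace can be targeted with nsenter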
PID1=$(docker inspect -f '{{.State.Pid}}' h1)
PID2=$(docker inspect -f '{{.State.Pid}}' sw1)
PID3=$(docker inspect -f '{{.State.Pid}}' sw2)
PID4=$(docker inspect -f '{{.State.Pid}}' h2)
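# Move each veth endpoint into the namespace of the corresponding container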
sudo ip link set veth1 netns $PID1
sudo ip link set veth2 netns $PID2
sudo ip link set veth3 netns $PID2
sudo ip link set veth4 netns $PID3
sudo ip link set veth5 netns $PID3
sudo ip link set veth6 netns $PID4
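# Assign IP and MAC addresses and bring the interfaces up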
sudo nsenter -t $PID1 -n ip addr add 10.0.1.2/24 dev veth1
sudo nsenter -t $PID1 -n ip link set dev veth1 address 00:00:00:00:01:02
sudo nsenter -t $PID1 -n ip link set veth1 up
sudo nsenter -t $PID2 -n ip addr add 10.0.1.1/24 dev veth2
sudo nsenter -t $PID2 -n ip link set dev veth2 address 00:00:00:00:01:01
sudo nsenter -t $PID2 -n ip link set veth2 up
sudo nsenter -t $PID2 -n ip addr add 10.0.2.1/24 dev veth3
sudo nsenter -t $PID2 -n ip link set dev veth3 address 00:00:00:00:02:01
sudo nsenter -t $PID2 -n ip link set veth3 up
sudo nsenter -t $PID3 -n ip addr add 10.0.2.2/24 dev veth4
sudo nsenter -t $PID3 -n ip link set dev veth4 address 00:00:00:00:02:02
sudo nsenter -t $PID3 -n ip link set veth4 up
sudo nsenter -t $PID3 -n ip addr add 10.0.3.1/24 dev veth5
sudo nsenter -t $PID3 -n ip link set dev veth5 address 00:00:00:00:03:01
sudo nsenter -t $PID3 -n ip link set veth5 up
sudo nsenter -t $PID4 -n ip addr add 10.0.3.2/24 dev veth6
sudo nsenter -t $PID4 -n ip link set dev veth6 address 00:00:00:00:03:02
sudo nsenter -t $PID4 -n ip link set veth6 up
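# Enable promiscuous mode on all veth endpoints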
sudo ip link set veth2 promisc on
sudo ip link set veth3 promisc on
sudo ip link set veth4 promisc on
sudo ip link set veth5 promisc on
docker exec h1 ip link set veth1 promisc on
docker exec h2 ip link set veth6 promisc on
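# Add host routes so each host reaches the other one via its neighbouring switch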
docker exec h1 route add -net 10.0.3.2 netmask 255.255.255.255 gw 10.0.1.1
docker exec h2 route add -net 10.0.1.2 netmask 255.255.255.255 gw 10.0.3.1

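# Start one simple_switch per switch container and install the IPv4 forwarding entries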
docker exec sw1 sh -c 'nohup simple_switch --thrift-port 50001 -i 1@veth2 -i 2@veth3 standard.json &'
docker exec sw2 sh -c 'nohup simple_switch --thrift-port 50002 -i 1@veth4 -i 2@veth5 standard.json &'
docker exec sw1 sh -c 'echo "table_add MyIngress.ipv4_lpm ipv4_forward 10.0.1.2 => 00:00:00:00:01:02 1" | simple_switch_CLI --thrift-port 50001'
docker exec sw1 sh -c 'echo "table_add MyIngress.ipv4_lpm ipv4_forward 10.0.3.2 => 00:00:00:00:03:02 2" | simple_switch_CLI --thrift-port 50001'
docker exec sw2 sh -c 'echo "table_add MyIngress.ipv4_lpm ipv4_forward 10.0.1.2 => 00:00:00:00:01:02 1" | simple_switch_CLI --thrift-port 50002'
docker exec sw2 sh -c 'echo "table_add MyIngress.ipv4_lpm ipv4_forward 10.0.3.2 => 00:00:00:00:03:02 2" | simple_switch_CLI --thrift-port 50002'

The idea is to create something like this image:
[image: h1 - sw1 - sw2 - h2 chain topology]

I'm running a sniffer on every veth, and the traffic is being forwarded, but the packets seem to be duplicated: it looks like the OS is forwarding one copy and the P4 switch is forwarding another. I have tried everything to stop this behavior so that only the traffic passing through the P4 switches reaches host2. Furthermore, I want to measure the delay a packet needs to pass through the P4 switches using a custom protocol. Does anyone have a suggestion, or has anyone tried this configuration before?

Hi @dnredson,

  1. In order to measure the delay a packet needs to pass through the P4 switches, you can draw inspiration from the In-Band Telemetry 'standard'; for more details, take a look at postcard telemetry.

  2. Sorry, but I don’t know why this behavior is occurring. In my opinion, the best approach is to simplify the testbed and do a deeper investigation.