Implementing neural networks into P4 switches

Hello, I am facing problems while trying to implement a Binarized Neural Network (BNN) in the P4 data plane. Initially I implemented the model architecture and training in Python, then extracted the model weights in order to replicate the BNN's mathematical operations in P4. However, the extracted weights are floating-point values, which cannot be represented in P4. I have taken reference from these papers: [1][GitHub - nec-research/n3ic-nsdi22] and [2][Line-Speed and Scalable Intrusion Detection at the Network Edge via Federated Learning | IEEE Conference Publication | IEEE Xplore], but I have not clearly understood how they achieved this. They mention that they integrated the BNN into the data plane simply by using operations like popcount and bitwise logic, but to implement those operations the underlying weights need to be integers, not floating-point values.
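For context, here is my understanding of how a binarized dot product is supposed to work with only XNOR and popcount. This is a hypothetical Python sketch (not code from the cited papers); it assumes weights and activations are binarized to {-1, +1} via the sign function and packed into an integer bit vector, with bit 1 standing for +1 and bit 0 for -1:

```python
# Hypothetical sketch: binarized dot product via XNOR + popcount.
# Assumption: float weights are binarized with sign() at export time,
# so only integer bit vectors ever reach the data plane.

def binarize(values):
    """Pack real-valued weights into bits: v >= 0 -> 1, v < 0 -> 0.
    The first list element becomes the most significant bit."""
    bits = 0
    for v in values:
        bits = (bits << 1) | (1 if v >= 0 else 0)
    return bits

def popcount(x):
    """Count set bits (what a hardware popcount would do)."""
    return bin(x).count("1")

def bnn_dot(w_bits, x_bits, n):
    """Dot product over {-1, +1}^n using only bitwise ops:
    matching bits contribute +1, mismatching bits -1, so
    dot = 2 * popcount(XNOR(w, x)) - n."""
    mask = (1 << n) - 1              # keep only the n valid bits
    matches = popcount(~(w_bits ^ x_bits) & mask)
    return 2 * matches - n
```

For example, weights [0.5, -1.2, 0.3] binarize to 0b101, and against an input 0b110 (i.e. [+1, +1, -1]) the dot product is 2*1 - 3 = -1, which matches the ±1 arithmetic done directly. If this is indeed what the papers mean, the floating-point weights never need to exist in the switch at all, only their signs do.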

Thank you in advance for helping me out.

I have not read the sources you list, but your question reminds me of a tutorial-style article I wrote on using fixed-point operations in P4 to approximate operations that are more typically done with floating-point arithmetic on general-purpose CPUs. It should not take long to read and understand, as its techniques are fairly basic, but it might give you some clues: p4-guide/floating-point-operations.md at master · jafingerhut/p4-guide · GitHub
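To illustrate the general fixed-point idea (this is a hypothetical sketch, not code from the linked guide): a fractional weight is scaled by a power of two and stored as a plain integer, so a switch ALU that only does integer arithmetic and shifts can still work with it. The choice of `FRAC_BITS` here is an arbitrary assumption; the right precision depends on your model:

```python
# Hypothetical sketch of fixed-point encoding with FRAC_BITS fraction bits.
FRAC_BITS = 8  # assumed precision, not a recommendation

def to_fixed(x):
    """Encode a float as a fixed-point integer (round to nearest)."""
    return round(x * (1 << FRAC_BITS))

def fixed_mul(a, b):
    """Multiply two fixed-point values; the double-width product
    carries 2*FRAC_BITS fraction bits, so shift one scale back out."""
    return (a * b) >> FRAC_BITS

def to_float(a):
    """Decode back to a float (for checking on the host, not in P4)."""
    return a / (1 << FRAC_BITS)
```

For instance, 0.5 encodes as 128, and `fixed_mul(128, 128)` gives 64, which decodes to 0.25. The shifts and integer multiplies map onto operations a P4 target can express, which is the spirit of the article.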