Hi everyone, good morning and welcome to our lightning talk session on Optimizing Cluster Workloads: Cilium and Envoy on DPU. My name is Atakshi Mishra, I'm currently part of Marvell's Accelerator Solutions team, and I'm very interested in things like Kubernetes solutions, P4-programmable data planes, and areas like CNI and load balancers.

So we know that Cilium has brought a significant change by leveraging eBPF technology. This proposal takes that a step further by moving all the features and functionality provided by Cilium onto data processing units, or DPUs as we'll call them for the rest of the slides. These are specialized hardware devices that sit outside your servers. Modern DPUs are capable of efficiently handling Layer 4 and Layer 7 processing such as mTLS, transparent encryption and decryption, and Layer 4 load balancing, and most of that directly overlaps with the features provided by Cilium.

That's why the first functionality we targeted was Layer 7 processing, which is done by Envoy today. Cilium already moved Envoy from a per-pod sidecar model to a per-node model, so why not move it onto the DPUs, that is, an out-of-server model? We profiled a sample cluster to get an idea of the resource utilization of the various components: under experimental stress tests we saw Cilium's resource utilization going up to 35% and Envoy's up to 42%. These are just synthetic tests, so we would be happy to discuss any profiling data from your production environments that shows similarly high utilization.

The next diagram shows the first model we tried initially: deploying Cilium and Envoy as DaemonSets on the DPUs, using the Gateway API use case. All your traffic comes to the DPUs, all the gateway-related processing happens on the DPU with the help of the Envoy instance deployed there, and from there traffic goes directly to the backend pods, the application containers. After that, however, we were able to come up with a complete offload architecture where the Cilium agent, the eBPF data path, and the Envoy proxy can all be deployed, or rather offloaded, to the data processing units.

This is the detailed diagram of the full primary network offload to the DPU. We have introduced some plugins to transparently offload all the components; no changes to Kubernetes or the pod spec are required. Let's walk through the diagram. As soon as your pod gets deployed, the CRI calls the CNI in the normal fashion, but this call is intercepted by our CNI offload layer. The CNI offload layer takes all the data from the CRI call and allocates one interface to the user application pod. It then sends that data, together with the interface it just allocated, to a plugin running on the DPU. That plugin passes the data to the Cilium CNI in the same fashion the CRI does. After receiving the data, the CNI behaves in the usual way: it attaches the other end of that connection to the eBPF data path. So the connection you see between the application pod and the eBPF data path is actually a virtual function (VF) pair. We have a PoC-ready model for this architecture and we would love to discuss it with the community.
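To make the interception step concrete, here is a minimal sketch of what a CNI offload shim along these lines could look like. This is not our actual PoC code: the agent socket path /run/dpu-agent.sock, the offloadRequest wire format, and all field names are illustrative assumptions. The only fixed parts come from the CNI specification itself, which invokes plugins as executables with CNI_* environment variables set and the network configuration on stdin, and expects the result on stdout.

```go
// Sketch of a CNI offload shim: capture the runtime's CNI call and
// forward it to a hypothetical agent on the DPU, which replays it
// against the Cilium CNI unchanged.
package main

import (
	"encoding/json"
	"io"
	"net"
	"os"
	"strings"
)

// offloadRequest is an assumed wire format, bundling the CNI
// environment and network config so the DPU-side plugin can replay
// the call exactly as the CRI would have made it.
type offloadRequest struct {
	// CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME, ...
	Env map[string]string `json:"env"`
	// The network configuration the runtime passed on stdin.
	NetConf json.RawMessage `json:"netconf"`
}

func main() {
	// Capture the CNI_* variables set by the container runtime.
	env := map[string]string{}
	for _, kv := range os.Environ() {
		if strings.HasPrefix(kv, "CNI_") {
			parts := strings.SplitN(kv, "=", 2)
			env[parts[0]] = parts[1]
		}
	}

	conf, err := io.ReadAll(os.Stdin)
	if err != nil {
		os.Exit(1) // a real plugin would emit a CNI error result here
	}

	// "/run/dpu-agent.sock" is an assumed rendezvous point; the actual
	// host-to-DPU transport is implementation-specific.
	conn, err := net.Dial("unix", "/run/dpu-agent.sock")
	if err != nil {
		os.Exit(1)
	}
	defer conn.Close()

	if err := json.NewEncoder(conn).Encode(offloadRequest{Env: env, NetConf: conf}); err != nil {
		os.Exit(1)
	}

	// Relay the agent's CNI result (or error) back to the runtime verbatim.
	io.Copy(os.Stdout, conn)
}
```

Because the shim forwards the environment and config verbatim, the DPU-side plugin can present them to the Cilium CNI exactly as the CRI would, which is what lets the rest of the stack run unmodified.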
So our final idea is to transition all the common infrastructure workloads onto the DPUs. This gives you two direct benefits: first, the compute power of the DPU hardware actually gets utilized; second, the compute power of your servers is freed up to handle additional application workloads. In the future we would like to use the acceleration capabilities offered by modern DPUs, so that the compute-intensive portions of these workloads are offloaded directly to specialized hardware accelerators, thereby increasing overall cluster performance. And since DPUs are power-efficient, the solution has the potential to reduce the overall power consumption of your Kubernetes cluster. Thank you.