Thank you. Hello everyone. My name is Chandan. I lead the team in Azure responsible for both the Kubernetes and serverless products. And with me, I have my colleague.

Thanks, Chandan. I'm an engineering manager at Microsoft working on the Azure CNI data plane, a maintainer of Azure CNI, and also a member of the Cilium community. I'm really excited to be here, and I'll let Chandan start the presentation.

So today we want to talk about how we are extending Cilium beyond Linux, give a breakdown of what it means to bring the power of Cilium and eBPF to Windows, and hopefully get you excited to contribute with us to move it forward.

Before I start, let's look at what the Cilium journey in Azure has been. Microsoft's goal is to make sure Cilium runs best in Azure. The way we have integrated Cilium with the Azure networking fabric is to bring the power of eBPF and Cilium into the guest, with seamless integration with the Azure platform. We announced in November 2022 that the next generation of Azure CNI would be powered by Cilium, and it went into public preview at the same time. Last year it reached general availability, and we have been building tons of features on top of it that bring the scale, reliability, performance, security, and observability you need from Cilium, best in class, because it integrates tightly and natively with Azure. Now we are taking one more step: this talk gives a breakdown of what we are working on to bring this whole suite of capabilities to Windows. We are extending Cilium to Windows so you can have much richer environments to deploy your applications, do the same kinds of things you do on Linux on Windows as well, and have a much more seamless experience.

Before we dive in, I want to give a brief overview of how the Windows container networking stack has been built. On the left side we have the traditional Windows container networking stack. It uses the Hyper-V virtual switch and an extension of it called VFP, the Virtual Filtering Platform. VFP is a kernel-level driver that lets you deploy ACLs and security policies, observe traffic, and transform the network. The Windows container networking stack was built on the idea that containers should be as rich as virtual machines, so whatever investment we made in the Hyper-V stack for VMs should have the same parity for containers. That gives very rich functionality, but it is heavyweight, because it was built for VMs. With the new eBPF-based stack and Cilium, we are making a more modern transformation: we want a lightweight way to extend the kernel, use the power of the eBPF we have on Linux, bring it to Windows, and still get the flexibility and agility we need without having to develop custom drivers or maintain the whole virtual-machine-oriented stack. With that, I'll hand it over to my colleague to talk a little more about what this is.

Thank you, Chandan. So eBPF for Windows is an open source project, MIT licensed. We wanted to embrace open source and extend the existing open source projects to Windows. eBPF for Windows further leverages other open source projects, such as the PREVAIL verifier.
PREVAIL is an eBPF verifier which makes sure eBPF programs are safe and secure to run in the kernel. uBPF is another open source project, a user-space eBPF runtime, which Windows uses for JIT compilation and as an interpreter for eBPF bytecode. And XDP for Windows is a separate open source project; when we wanted to port the Cilium programs, we leveraged XDP for Windows to support the XDP hook on Windows. We want eBPF for Windows to be as compatible with Linux as possible, so a user can take the same eBPF program that runs on Linux and run it on Windows, with only a few minor exceptions that we are able to solve in eBPF for Windows.

Now let's compare the Linux eBPF architecture with the Windows eBPF architecture. First, the Linux architecture. The user develops an eBPF program and uses the Clang toolchain to compile it to eBPF bytecode. The user then loads the bytecode into the kernel with an existing loader library, such as libbpf or the cilium/ebpf library. During that load, the eBPF verifier runs in the kernel and makes sure the program is secure and safe to run: that it doesn't access memory it shouldn't, and that it completes in bounded time. Then JIT compilation happens, which converts the bytecode to native machine code. When a packet hits the TCP/IP layer, the eBPF program is executed, and the user can consume the resulting events.

Now let's look at the Windows eBPF architecture. Again, the user has the eBPF program source, and there is an eBPF driver toolchain that does more than Clang: it first uses Clang to compile the eBPF program to bytecode, and it then converts that into native Windows driver code. In Windows, all kernel drivers need to be signed, so whereas on Linux the verifier and the JIT compilation happen in the kernel, on Windows they move to user space; the executable image that Linux creates in the kernel is created in user space on Windows. I will cover the eBPF driver toolchain in more detail on an upcoming slide, but in short it converts the eBPF program to native machine code. The user can then use existing BPF tooling, such as netsh or bpftool, which use the BPF library to load the native machine code into the Windows kernel.
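To make the loading step concrete, here is a minimal user-space loader sketch using the libbpf-style API; eBPF for Windows exposes a libbpf-compatible surface, so roughly the same code can load a program on either platform. This is a sketch under those assumptions, not code from the project: the object and program names (sample_xdp.o, sample_xdp) are hypothetical, and error handling is kept minimal.

```c
// Minimal libbpf-style loader sketch; works against libbpf on Linux and the
// libbpf-compatible API of eBPF for Windows. File/program names are made up.
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
    struct bpf_object *obj;
    struct bpf_program *prog;
    struct bpf_link *link;

    // Open and load the compiled eBPF object produced by the toolchain.
    obj = bpf_object__open("sample_xdp.o");
    if (!obj) {
        fprintf(stderr, "failed to open object\n");
        return 1;
    }
    if (bpf_object__load(obj)) {
        fprintf(stderr, "failed to load object\n");
        return 1;
    }

    // Find the program by name and attach it to its hook.
    prog = bpf_object__find_program_by_name(obj, "sample_xdp");
    if (!prog) {
        fprintf(stderr, "program not found\n");
        return 1;
    }
    link = bpf_program__attach(prog);
    if (!link) {
        fprintf(stderr, "failed to attach\n");
        return 1;
    }

    // ... the program is now attached; detach and clean up on exit.
    bpf_link__destroy(link);
    bpf_object__close(obj);
    return 0;
}
```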
One other difference is the eBPF shim layer sitting in the kernel. Windows doesn't natively understand the TC or XDP hooks that the Linux kernel supports, so we need an abstraction layer to convert a TC or XDP eBPF program's context to Windows kernel native types. For example, TC programs use the sk_buff structure; Windows doesn't know about sk_buff, so the shim layer converts it to the Windows-native buffer type, the NET_BUFFER_LIST. Continuing that example: the sk_buff is mapped to the NET_BUFFER_LIST by the shim layer. When a packet hits layer two, the driver layer calls this eBPF shim, which serves as the extension for the XDP hook and invokes the actual XDP program. And when a packet hits the TCP/IP layer, the TC extension hook is called to invoke the actual TC eBPF program.

Next, I want to cover how Windows eBPF compilation happens. As I said on the previous slide, Windows kernel drivers need to be signed. With HVCI, hypervisor-protected code integrity, Windows wants to make sure that all kernel drivers are secure and signed, which is not a requirement on Linux. On Linux the bytecode is simply JIT-compiled, and the resulting native code is not required to be signed; Windows, however, expects drivers to be signed. This means Windows cannot use the existing JIT compilation, which produces the native code in the kernel. Instead it uses ahead-of-time compilation, and everything happens in user space.

This is the expansion of the eBPF driver toolchain you saw on the previous slide. The user has the eBPF source code and uses Clang to produce the ELF object file. This is where Linux stops: on Linux you take the ELF file, use the load system call, and the in-kernel verifier and JIT compiler take over. On Windows the pipeline continues, entirely in user space. The PREVAIL verifier checks that the eBPF program is safe. Then a tool called bpf2c converts the ELF object file into driver source code with the eBPF registers and maps, the Microsoft Visual C++ compiler converts that driver code into an actual executable image, and the Microsoft signing tool signs the image. The developer doesn't need to run all these steps individually; there is an open source Windows tool that converts directly from the bytecode to a signed native image.

As an example of how the Windows eBPF compilation process goes: you have a sample eBPF program, you use the existing Clang toolchain to compile the eBPF code to bytecode, you use the bpf2c tool to convert the bytecode to driver code, you use the Visual C++ compiler to convert the driver code to an executable image, you get the image signed, and then you load the program into the kernel.
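As a rough illustration (not an excerpt from Cilium), a minimal program of the kind that flows through this pipeline could look like the sketch below; the include paths and SEC() conventions are assumptions based on common libbpf and eBPF-for-Windows sample conventions.

```c
// sample_xdp.c: a minimal example of the kind of program fed through the
// pipeline above (clang -> ELF -> PREVAIL verifier -> bpf2c -> MSVC -> signing).
// Includes shown are the Linux ones; on Windows they are replaced by the
// headers shipped with the eBPF-for-Windows / XDP-for-Windows SDK.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int sample_xdp(struct xdp_md *ctx)
{
    // Pass every packet up the stack unchanged; a real datapath program would
    // parse ctx->data .. ctx->data_end and drop, redirect, or rewrite packets.
    return XDP_PASS;
}

char _license[] SEC("license") = "Dual MIT/GPL";
```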
Now let's talk about how we ported the existing Cilium eBPF programs to Windows. There are some differences between Linux and Windows; let's look at the major ones, how we solved them, and how we extended support to Windows for the Cilium eBPF programs.

This is one major difference I want to talk about. Cilium uses #defines to pass values dynamically to its BPF programs, for example the node config: the device index and the node IP address are passed as #defines. Cilium can do this because it compiles the eBPF programs dynamically at runtime and patches these values in. As I said earlier, Windows eBPF programs are pre-compiled in user space, so Windows cannot rely on this; passing the values dynamically this way won't work. We had to find a solution to pass dynamic values from user space to kernel space. Instead of #defines, Windows uses BPF maps to pass these values from user space to kernel space dynamically (a minimal sketch of this map-based approach is shown below, after these differences). This is abstracted from the user: on Linux the value comes from a #define, and on Windows it is loaded from a BPF map. There is a slight performance cost, because a map lookup is now needed to get the value, but it can be offset by compiler optimizations, since we compile the bytecode to native code in user space.

Another challenge: Windows didn't have support for a TC hook, and Cilium uses TC for load balancing and datapath routing, so supporting the TC hook on Windows was another big difference. The structures the TC hook uses are defined in Linux kernel headers, which are GPL licensed, and Windows cannot include those headers or use GPL-licensed code, even though eBPF for Windows is MIT licensed. So we had to work from the compiled artifact instead: we take the compiled Cilium object code, the compiled Cilium ELF program, which carries BPF Type Format (BTF) information, and we use existing open source BTF tooling to generate a header file with the structure types used in that compiled ELF. Using those generated structure types and the generated header file, we are able to support the TC hook extension on Windows; eBPF for Windows relies on those generated structure types to provide the TC hook support.

There are also differences in BPF types and maps. The XDP hook context has a different type on Linux than on Windows. Some of the fields used on Linux are not used on Windows, so we need changes to make the programs cross-platform compatible; those fields may not be available on Windows, but the major XDP functionality is covered, except for a few minor functions.

Another difference: on Linux we use network namespaces for network isolation, while on Windows the equivalent is a network compartment, and each network compartment is isolated. What is a namespace ID on Linux is referred to as a compartment ID on Windows. The BPF socket structure on Windows has the compartment ID as a field, whereas on Linux you need a helper API to get the network namespace; on Windows it can be read directly, so no helper API is needed.

There is a difference in map types as well. The perf event array used on Linux is not supported on Windows; Windows instead uses BPF_MAP_TYPE_RINGBUF, so wherever a perf event array is defined, on Windows a ring buffer map is defined.
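As a rough sketch of the map-based configuration described above, the following replaces compile-time #defines with a single-entry array map that the agent populates from user space. The struct fields and map name are made up for illustration, and the exact map declaration syntax can differ slightly between the Linux headers and the eBPF-for-Windows SDK headers.

```c
// Sketch only: struct fields and map name are illustrative, not Cilium's.
// Includes shown are the Linux ones; the eBPF-for-Windows SDK headers are
// used instead on Windows.
#include <stdint.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

// Per-node configuration that the agent writes from user space at start-up,
// replacing compile-time #defines such as the device index or node IP.
struct node_config {
    uint32_t ifindex;   // device index used by the datapath
    uint32_t node_ip;   // node IPv4 address (network byte order)
};

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, uint32_t);
    __type(value, struct node_config);
} node_config_map SEC(".maps");

SEC("xdp")
int use_node_config(struct xdp_md *ctx)
{
    uint32_t key = 0;
    // One extra map lookup compared with the #define approach; ahead-of-time
    // native compilation helps offset the cost.
    struct node_config *cfg = bpf_map_lookup_elem(&node_config_map, &key);
    if (!cfg)
        return XDP_PASS;

    // ... use cfg->ifindex / cfg->node_ip in the datapath logic ...
    return XDP_PASS;
}
```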
Now, on bringing the Cilium agent to Windows: we are extending Go support on Windows. Windows has a lot of debugging tool support, and we want to integrate that debugging support with Go as well, so we are actively working on extending Go support on Windows to cover these debugging APIs and the libraries that are supported on Linux today.

Another major difference is that the Cilium agent is tightly coupled to Linux APIs today, with a lot of Linux libraries imported directly. For example, netlink is used only on Linux, and the Windows equivalent of netlink is the IP Helper API. Because of this coupling, the plan is to build an abstraction layer so that the Cilium agent can be platform independent and run on both Linux and Windows. As a start, we are in the early stages and have raised a CFP for cross-platform support on Linux and Windows. With that, I will let Chandan talk about the Cilium on Windows timeline.

Thanks. As a product person, you will see that I love timelines. We started the Windows journey with Cilium in 2022, looking at how we could transform Windows networking with it. We then demoed the L4 load-balancing piece, and we have continued on this journey to make not just the load balancer but pod-to-pod connectivity and the basic scenarios work seamlessly on Windows; we are targeting that for July. We also want to bring in network policy support by the end of this year, so that we have the whole package of basic functionality working end to end, with observability, security, east-west traffic, and load balancer traffic coming together in Cilium on Windows.

As my colleague was saying, the goal for us is to evolve this stack with the community and with the other open source projects, and we embody that principle: we are building on all of the pieces, the PREVAIL verifier, uBPF, XDP for Windows, and of course Cilium itself, and all of these projects are equally important as we leverage them. I want to say a big thank you to all of my friends and colleagues in the Windows OS team, who couldn't join us here today but have been the main pillar driving this forward, and of course to the wonderful Cilium community and the wonderful team we have, for making sure we can take this forward together and bring the power of Cilium beyond Linux to Windows, so people can have a seamless experience running the same thing on Linux and in Windows containers. With that, thank you. Any questions?

Thanks for the talk. Did you have any opportunity to compare performance between the Linux and Windows implementations?

We do, actually; that is part of the goal. If you look at the slide from the beginning, we have a vSwitch and VFP, which is very heavyweight for Windows containers, or containers in general. One of the motivations to move to eBPF is that we can beat the path length we have in those kernel drivers, with the agility we get from this lightweight approach. Our goal is to have the same parity in datapath performance as we have on Linux. So yes, I can share offline the tools and the comparisons we are doing.

I had a question about what kind of eBPF programs we can write in the future with eBPF on Windows, beyond Cilium. For example, I'm thinking, can we
do a kprobe, and how would we find the symbols for the kprobe, what the function is and what its arguments are, given that the Windows kernel is not open source?

Kprobes are currently not supported on Windows; we are actively engaging there to bring kprobe support to Windows. Right now we support all the network packet hooks, like TC, XDP, and sock_addr, and a couple of other eBPF program types, but kprobes are still under discussion, and you will see that in the future. Feel free to open GitHub issues on eBPF for Windows; it is open source, so if you have any questions or any feature needs to be addressed on Windows, we can take them up there. We are also looking for more open source contributions to eBPF for Windows.

Great, thank you for your talk. Thank you, everyone.