Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Paulo Simões, CNCF ambassador and developer evangelist from Oracle. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things (I hope so), and they will answer your questions. We are live every Wednesday at 11 a.m. ET.

This week we have the pleasure of receiving our friend Chris Tomkins from Tigera. We will talk about Calico, talk about eBPF, and how to supercharge AKS networking with these two amazing products. I should also remind you that this is an official live stream from CNCF and subject to the CNCF code of conduct. Please do not add anything to the chat, or ask any questions, that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow participants and presenters. That's all, so I'll hand over to my friend Chris to show us how to supercharge AKS networking with eBPF and Calico. Hi, Chris. How are you doing?

Hi, I'm very good. I'm near London and it's unbelievably hot here today, so I have a fan on behind me; I hope you can't hear it too loudly. But yeah, I'm good. My name is Chris Tomkins. I work for Tigera and Project Calico. I'm a developer advocate, so it's my job to get out into the community, understand the community's requirements, and help the community understand our products and our open source tools. Should I start by telling you a bit about Project Calico, just to make sure people know about that?

Yeah. Chris, we hear a lot about Calico and a lot about eBPF, so it's great to learn more about these projects. Please show us a little bit about this.

Great, I will, thank you. First of all, don't worry, there won't be too many slides. I don't like sitting through too many slides, so we only have, I think, six slides in total, including the one you're looking at. Project Calico is an open source networking and network security solution. It's a way to connect together your containers, your virtual machines, and your host-based workloads, and it implements best practices for Kubernetes security with excellent performance. It is running on over a million nodes in the cloud today, so it's battle-tested, production-hardened code with full support for Kubernetes network policy, interoperability with non-Kubernetes workloads, and a really large, active contributor community. It's a really successful project and product.

Oh, why is it showing that? Wrong screen. Okay, I'll do it this way. We have a Slack channel, which I think will be mentioned in the chat later on, but if you want to get involved with us, come and talk to us there: we have over 6,000 members and over 150 contributors. But we're not just talking about Calico today; we're talking specifically about eBPF and how eBPF relates to Calico. Do you want me to talk a little bit about eBPF first?

Oh yes, please, a little bit.

Great, cool. I wonder if we should jump back down to our video feeds and take away the slides for a bit. I made some notes to share about what eBPF actually is, first of all. So, forgetting about Calico for a moment: if we want to get exceptional networking performance on a Linux node, one way to do that is to implement the code inside the Linux kernel, because obviously, if you put the code in the kernel, you can get really great performance.
But that brings challenges with it. If you want to put code into the kernel, maybe you have to write a kernel module, and you have to get that approved and get your PRs accepted, and that can be quite challenging. So the Linux kernel, back in the 90s, added the Berkeley Packet Filter, BPF. That's not eBPF, but the original Berkeley Packet Filter. Really, it's a way to implement a safe, secure, lightweight virtual machine inside the Linux kernel, and it runs bytecode that can take advantage of a subset of kernel features. More recently than the original BPF, we got eBPF, the extended Berkeley Packet Filter. That's much more recent: it depends on a Linux 4.x kernel, and it's entirely restricted to Linux for the time being; we'll talk more about that later, I think. So you need a fairly recent kernel, but once you have that, you can run a safe, secure, lightweight virtual machine inside the kernel without changing any kernel source code. You can have an event-driven program that is compiled to bytecode and then attached to hooks inside the kernel, and when those hooks are called, the attached eBPF programs are executed. Does that make sense?

Yeah, sure.
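As a concrete illustration (not something shown in the stream itself): on a host with a reasonably recent kernel and the bpftool utility installed, you can inspect which eBPF programs are loaded and where they hook in — a minimal sketch:

```bash
# List all eBPF programs currently loaded in the kernel,
# with their type, name, and attach information.
sudo bpftool prog show

# Show eBPF programs attached at the networking (XDP/TC) hooks
# of a given interface; "eth0" is just an example name.
sudo bpftool net show dev eth0
```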
Let me ask you: today we will talk about AKS and how you leverage eBPF on AKS. Is this capability exclusive to AKS, or is it something I can adopt anywhere?

No, and I'm really glad you asked that question, because it's not just for AKS. Later on in this session we'll do a demo, and we wanted to focus that demo on a particular platform so the demo is clear. There are subtle implementation differences between using Calico eBPF on the different platforms, but you can use it on Azure, on AKS, on EKS, and in many other places. So no, we can use our eBPF data plane in many places.

Once you have this code that you can run in the kernel, it obviously makes sense for certain use cases, because there's a limit to what you can achieve: you are only given certain helper calls by the kernel. In order to keep you secure and make sure you don't do anything malicious, the kernel will only allow you to call certain helpers, and those helpers are aimed towards doing networking, logging, firewalling, debugging, those kinds of things. This is a perfect fit for Calico because, if we pivot over to talking about Calico rather than eBPF for a second, we're essentially a networking component for your Kubernetes clusters, and like most networking implementations, we're implemented as a control plane and a data plane, in the same way that a router or a switch has a control plane and a data plane. Calico is built with the same model. In fact, I should mention I did another talk: we ran a Kubernetes Security and Observability Summit, and there I gave a talk on the importance of modularity in data planes. If you want to find out more about the data plane support in Calico, that talk would be useful. But for today, what you really need to know is that Calico uses a control plane and data plane architecture, and we have several options for the data plane. Separating out the control plane and the data plane allows us to separate out the functionality of those two roles.

So the control plane's job is to manage the high-level view of the network — for example, to run BGP daemons and the routing protocols that hold the holistic view of the network — and the data plane's job is to forward the user traffic, and to do so quickly. The control plane is a complicated piece of software that needs to be complicated, and the data plane should be minimal, fast, lean code. For that reason, Calico was designed with modularity in mind from day one. I haven't been with Project Calico since day one, but I've spoken to people who have, and they knew from day one that they wanted this clear separation between the control plane and data plane components. And because it was designed with a modular separation and a clear interface between the layers, it was very easy for Calico to implement multiple data planes. The original data plane that Calico supported was the Linux iptables data plane; we still support it, it's quite high performance, and it's battle-tested. We support a Windows host networking data plane, and we support the Linux eBPF data plane. That's because we wanted to take the advantages of Linux eBPF and apply them to our product without needing to throw away any of the hard implementation work that was done making the control plane stable, reusable, and so on. So we have those three data planes — Linux iptables, Windows host networking, and Linux eBPF — but I should mention we also have a fourth, Vector Packet Processing (VPP), which is amazing, especially for high encryption performance. We won't talk about those other data planes today; I just wanted you to be aware that they exist, and that that's the background for why we have this Linux eBPF data plane. Does that make sense?

Yeah, sure. The explanation was amazing, thank you so much. Let's see the code.

I should say one more thing first, which is: what are the advantages? The advantages are performance — the Linux eBPF data plane is really fast and it uses less CPU — but there is also another big advantage, and this is where I'm going to have to use some slides. Now, I promise you I only have four slides, so I hope it's okay. I'll jump across to here. This is the first one: the data plane benchmark. This benchmark was not done on AKS; the reason is that we wanted to test 40-gigabit networking, so we did it on bare metal servers. You can see that with a jumbo frames MTU the throughput is a little higher with eBPF, but with a normal internet MTU of 1440 the performance is dramatically higher. I don't want to dwell on that for too long, so let's move on. The other slide is CPU usage, and this one is specifically for AKS. You can see that for TCP pod-to-pod and pod-to-service, the CPU utilization doesn't change very much, but for UDP pod-to-pod and UDP pod-to-service the CPU utilization is dramatically lower, which is great.
So that's it for the benchmark slides, but there is another advantage to this data plane which is pretty cool, and I'll demo it in a moment. In a Kubernetes cluster, as I'm sure you know, services are usually implemented by kube-proxy: if you have services in your cluster, it's kube-proxy that implements them. But if you replace kube-proxy with an eBPF data plane, you don't actually need to run kube-proxy anymore; the service functionality usually offered by kube-proxy can be offered instead by the data plane itself. So you get a latency reduction, you get fewer moving parts and less complexity, and, most interestingly, you get to preserve the source IP. I'll demonstrate that in a minute.

This is how it looks without eBPF. Your external client comes into a Kubernetes node and talks to a service, and you can see that the Kubernetes node running kube-proxy has to destination-NAT and source-NAT the traffic. It does that so that, in step two, it can forward the traffic on to the other node. In step three, the pod that is responding never gets to see the original external client source IP, and the return traffic has to go back through the first node. This is a problem if you want to do audit logging — maybe you want to capture the IP addresses of all your users, and that happens in the code that runs in the pod. Once you switch to eBPF, you get this instead. It's very similar, but where the Kubernetes nodes are, instead of kube-proxy handling the service, in step one a BPF program handles the service. It does a destination NAT but not a source NAT, and that means that when the pod serving the content sees the traffic, it sees the real source IP of the user. So that's the other benefit. Just to recap — and then that's the end of the slides — the reasons you would want the Linux eBPF data plane: you get great performance, lower CPU utilization, and you get the benefit of being able to see the source IP. So let's jump across and I'll do the demo.

Right, cool. Amazing explanation, thank you so much.

No worries. How is that — is that readable?

Increase it a little bit, please.

Hold on... there we are. I need to learn to use my computer properly. Okay, that's better. We might have a small problem with this, because — are you familiar with asciinema? I used this tool called asciinema to record the demo, so that rather than having to wait while the slow parts run, we get to see it running quicker. But it can be a little bit fiddly with the terminal size, so we'll see how we go.

Let's go!

Right, okay. I recorded this demo this morning, so it's not running live now, but it is very recent. The first thing I'm doing is using the Azure CLI to turn on a feature flag for the Microsoft.ContainerService namespace called EnableAKSWindowsCalico. This is just a feature flag to tell AKS that we want to use Calico, and I believe it also changes the deployment model; it's just a step that needs to happen for this to work.
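For readers following along, the registration step narrated here looked roughly like this (flag name as spoken in the demo; preview flags change over time, so verify against the current AKS docs):

```bash
# Register the preview feature flag mentioned in the demo.
az feature register \
  --namespace Microsoft.ContainerService \
  --name EnableAKSWindowsCalico

# Check the state; it shows "Pending" at first and "Registered" when done.
az feature list -o table --query \
  "[?contains(name, 'Microsoft.ContainerService/EnableAKSWindowsCalico')].{Name:name, State:properties.state}"

# Re-register the resource provider so the flag takes effect.
az provider register --namespace Microsoft.ContainerService
```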
Now, because I have already done that before, it immediately says state: Registered, but if you do this on your own cluster you'll find that it says state: Pending, and then you can run this other command, az feature list, which tells us when the registration has finished. So I do that now — there we go — and you can see this is just me confirming that it's registered. The last part is that we need to re-register the feature with the resource provider.

Okay, so now we can actually start doing the real work of deploying an AKS cluster. The first thing I do is use the Azure CLI again to create a service principal, which is essentially an identity for the service to run as. Once I've done that, I've stored the service principal output in a variable called sp, and the reason is that the output contains credentials, and I don't want to share those credentials with the internet right now. I should say as well that the strings up here have been modified — these are not the real strings — so if anyone malicious thinks they can jump in, these will not work. We take the output and grab the service principal ID and the service principal password; all we've really done is extract those two values from that variable. The next part is to create a resource group, which is just somewhere for us to put our resources: you can see we created a resource group called live-demo-rg and put it in Canada East.

Now we get to the interesting stuff: actually creating the cluster with az aks create. Because I was working from some old notes when I first did this — this is proof that it's a real demo — you can see that I specified Kubernetes 1.20.2, and it's saying 1.20.2 is no longer supported. That's no problem: I just run az aks get-versions, and it returns the versions that are valid, so instead of the original 1.20.2 I switched to 1.20.7. It'll take a moment for this command to run. You can see what we're doing: we're specifying that we're creating a cluster, we want to put it in this resource group, we give the cluster a name, we say we want two nodes in the cluster, and we specify the Kubernetes version. Now, something funny happened there when it got rendered — I think it's because of the terminal size issue I mentioned — I put 1.20.7, but you can see the seven appeared down here for some reason. Then we specify the service principal ID, the client secret, and the load balancer SKU.

I think we'll stick with this terminal size — maybe one smaller. Is that still readable?

Yes, it's readable.

Okay, let's go with that, because then hopefully we'll get fewer problems with the wrapping. Now, in real life when you do this it will take maybe around seven or eight minutes, but because I'm running this through a recording tool it will be a bit quicker: we don't need to wait that long, it should take about 90 seconds, hopefully.
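Reconstructed from the narration, the cluster-creation steps were along these lines. The cluster name and the network plugin/policy flags are my assumptions; only the resource group, region, node count, version, and load balancer SKU were stated on stream:

```bash
# Create a service principal; keep the JSON output (it contains
# credentials) in a variable rather than printing it to the screen.
sp=$(az ad sp create-for-rbac)
SP_ID=$(echo "$sp" | jq -r .appId)
SP_PASSWORD=$(echo "$sp" | jq -r .password)

# A resource group to hold everything.
az group create --name live-demo-rg --location canadaeast

# List the Kubernetes versions this region currently supports.
az aks get-versions --location canadaeast -o table

# Create a two-node AKS cluster; "live-demo-cluster" is a placeholder
# name, and the network plugin/policy flags are assumed, not narrated.
az aks create \
  --resource-group live-demo-rg \
  --name live-demo-cluster \
  --node-count 2 \
  --kubernetes-version 1.20.7 \
  --network-plugin azure \
  --network-policy calico \
  --service-principal "$SP_ID" \
  --client-secret "$SP_PASSWORD" \
  --load-balancer-sku standard
```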
There is something I should point out here: at this point, you can see that we're not specifying that the cluster is an eBPF cluster. That's because at this point we are not creating an eBPF cluster: we're creating a normal Calico Linux iptables cluster. What we'll do is run a benchmark on it in a minute, and when we've finished benchmarking it, then we will convert the cluster to an eBPF cluster.

Oh — sorry, I don't want to interrupt, but this is a good question: you can transform from a normal version to an eBPF version? You can do this?

Yes, and not only can you do this, but any existing flows should not be disrupted when you do it. If you have an existing TCP flow, it should remain on the old data plane until that flow terminates, and any new TCP flow will go onto the new data plane. However, even though that's possible — and we've all supported networks in production — whether I would do that in production, maybe not. But yes, in theory, that's possible. This is great, because it means we can give you the right data plane today, but if in three years' time there's a new technology that is more suitable, you can switch to a new data plane.

While we were talking about that, you can see that the command completed, so it's given us the JSON for our cluster. We're running the cluster now, and like I said before, I've changed all of these IDs so that nothing private is exposed, and this is a public key, so we don't need to worry. So we're now running a new Kubernetes cluster, and this cluster is running Calico, but it's not running eBPF yet. I back up my kubeconfig — I'm just copying it to some other location so that I don't lose it — and now I can ask Azure for the credentials for the cluster. I'm just realizing that if I move my window up a tiny bit, we can get rid of that banner that was blocking the bottom of the screen. That's better, isn't it? Okay. If we look back at what we've done here — I think you probably missed this command — we copied the kubeconfig away just to back it up, then we asked Azure for the credentials for this cluster, and now we can run kubectl get nodes and see our new nodes. You can see that both have been up for a short amount of time, on the version we requested.

So at this point we have a cluster running the Calico Linux iptables data plane. Let's run a quick benchmark. We're using this great tool called k8s-bench-suite: you just specify the client node and the server node — you can see I made a typo there — and it deploys a pod on the client and on the server and checks the bandwidth and so on. We'll wait a moment for that to finish. When it finishes, we won't look at the benchmark results straight away; we'll move straight on to the next part of the demo, and when we finish the demo we'll have two sets of benchmarks to compare: one set from the iptables data plane and one set from the eBPF data plane.

Of course. Just to help our audience — maybe someone doesn't know what knb is. Could you give a tip about it?

Yeah, this is a really great tool. You can find it if you search GitHub for k8s-bench-suite, this phrase here. knb — I assume it stands for Kubernetes Network Benchmark, it must do. All you need to do is give it a client node and a server node, and it deploys a pod on each and runs iperf between them.
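A sketch of the credentials and benchmark steps, assuming the same placeholder names as above (the node names are examples; take yours from kubectl get nodes):

```bash
# Back up the existing kubeconfig, then fetch credentials for the new cluster.
cp ~/.kube/config ~/.kube/config.backup
az aks get-credentials --resource-group live-demo-rg --name live-demo-cluster
kubectl get nodes

# Fetch knb from the k8s-bench-suite project and run it between two nodes.
curl -LO https://raw.githubusercontent.com/InfraBuilder/k8s-bench-suite/master/knb
chmod +x knb
./knb --verbose \
  --client-node aks-nodepool1-00000000-vmss000000 \
  --server-node aks-nodepool1-00000000-vmss000001
```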
You'll see it detects the CPU, detects the kernel, and so on, then it runs the tests and gives you a really nice benchmark summary like this: pod to pod, pod to service. You can see that because in this demo we're running on servers that have gigabit NICs, we're not seeing 10 gig or anything here; we're not running on big instances. Cool. So that's how we deploy. Just to wrap up where we are: that was the process for deploying a Linux iptables Calico Kubernetes cluster.

Moving straight on — do you remember in that diagram I showed the audience that with kube-proxy we lose the IP address of the external client? I just want to demonstrate that quickly before we move on. I deploy this useful tool called yaobank, which is just a simulated microservices deployment. It's very simple: you can see that we're running a pretend database, a pretend customer service, and a pretend summary service, so it's like a three-tier microservices thing. We just wait until that's running properly.

Of course. If someone wants to follow something similar to what you did, is there some place — your GitHub, or a tutorial — that we can follow to try it?

Yes, absolutely. Specifically for the Azure case, there will be a blog post quite soon on the Project Calico blog which will cover pretty much the same steps we're doing here — I'll just pause that for a second, hold on — and on the blog you'll also see lots of similar posts which tell you how to do this on AWS and on other clouds.

Cool. You can see that it's running now, so we create a load balancer — kubectl apply, and we just apply some load balancer config — and then we wait a moment for an external IP. Here we go: we've got an external IP, so now I can curl it. I think I remember I waited a bit of time here, because I know from experience that if you hit the load balancer really quickly, it's very slow to respond or it doesn't respond at all. There we go: we get this fake response from this fake bank website. But we don't really care about that; all we really care about is what we see in the logs. So now we look at the logs for the customer pod, and you can see the important thing: because we're not running the eBPF data plane yet, the traffic has been source-NATted, and the IP address that we're seeing here is an internal RFC 1918 address. This is bad if we want to audit, if we want to filter, if we want to do some analytics. Okay, so that's it: we've shown the performance without eBPF, and we've shown the NAT behaviour.
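The source IP check narrated above, as a sketch (the manifest file names and the yaobank namespace are illustrative; Tigera publish yaobank manifests in their tutorials):

```bash
# Deploy the simulated three-tier yaobank app, then expose the
# customer service through an Azure load balancer.
kubectl apply -f yaobank.yaml
kubectl apply -f yaobank-loadbalancer.yaml

# Wait for EXTERNAL-IP to move from <pending> to a real address.
kubectl get svc -n yaobank

# Hit the site, then see which client address the customer pod logged.
curl http://<EXTERNAL-IP>/
kubectl logs -n yaobank deployment/customer
# Without eBPF: the logged address is an internal RFC 1918 node IP,
# not the external client's real IP.
```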
So now we're going to convert this cluster to eBPF — this is where it gets interesting. The first thing we do — I think this step will be going away very soon, or may have gone away already, but it's not doing any harm — is patch the Installation resource and tell it that we want each node to select which interface to use based on which interface can reach Google's public DNS. I have a strong feeling this isn't necessary anymore, but it won't do any harm for this demo, so that's why I did it.

Now, the next thing is quite interesting, because we're about to convert this cluster to use eBPF. In order for the Calico component Felix on each node to talk to the Kubernetes API, it traditionally uses a Service to do that, and that Service is managed by kube-proxy. But of course kube-proxy is going away, so we can't have Calico talking to the Kubernetes API via a kube-proxy-managed Service, because kube-proxy won't be there anymore. That's why we have to make this next change. First of all, from a ConfigMap in kube-system, we check what the true URL of the Kubernetes API is, and then we create some YAML. What this YAML is going to do is tell the Tigera operator — which is the tool that deploys Calico — that we want to talk directly to the Kubernetes API rather than via kube-proxy.

I'm just wondering if something's gone wrong with the tool, or if I paused it by accident... one sec. So you get to see me do something live: I'm going to use asciinema to adjust the timing of the recording, because I think something was wrong with the recording file. Okay — this wasn't supposed to be part of my demo, but what I've done here is adjusted the timing of the recording so it should play back quicker. Let's give it another go; we're repeating those last steps we saw before. There we go, now it's working fine. Okay, good.

So, if you recall, I said that we wanted to get Calico to talk directly to the API. We're applying a ConfigMap in the tigera-operator namespace, and this ConfigMap is telling the Tigera operator that deploys Calico that it should use the endpoint here; it's basically just saying don't use the kube-proxy-managed Kubernetes Service, use the API directly. We do that, then we wait 60 seconds, because it takes a moment for the Tigera operator to notice the change, and then we delete the Tigera operator pod. You can see that the pod restarted immediately, and when it did, it picked up the new config. Now, I have a feeling that this has recently changed too — that the pod restart may not be necessary anymore — but I did it because I know it works, and it does no harm, so it's fine.

Once we've restarted that, we can see that we're still running kube-proxy, but we don't need it anymore. So we can patch the DaemonSet: we patch the kube-proxy DaemonSet in the kube-system namespace and tell kube-proxy that it shouldn't run unless the node is labelled "non-calico". But of course all the nodes are Calico nodes, so essentially we're telling kube-proxy that it shouldn't run on any nodes, and you can see that kube-proxy is not running on any nodes now. And the last step is that we patch the FelixConfiguration — this is a custom resource — and we turn on BPF.
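Putting those conversion steps together as one sketch (the patches follow the narration and the Calico eBPF documentation; substitute your cluster's real API server FQDN for the placeholder):

```bash
# 1. (Possibly no longer required.) Have each node pick the interface
#    that can reach Google's public DNS.
kubectl patch installation.operator.tigera.io default --type merge -p \
  '{"spec":{"calicoNetwork":{"nodeAddressAutodetectionV4":{"canReach":"8.8.8.8"}}}}'

# 2. Tell the Tigera operator to talk to the API server directly,
#    bypassing the kube-proxy-managed Service.
kubectl apply -f - <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "<your-api-server-fqdn>"
  KUBERNETES_SERVICE_PORT: "443"
EOF

# 3. Give the operator a minute to notice, then restart it
#    (possibly no longer required, but harmless).
sleep 60
kubectl delete pod -n tigera-operator -l k8s-app=tigera-operator

# 4. Stop kube-proxy scheduling anywhere by requiring a label no node has.
kubectl patch ds -n kube-system kube-proxy -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"non-calico":"true"}}}}}'

# 5. Turn on the eBPF data plane in the FelixConfiguration custom resource.
kubectl patch felixconfiguration default --type merge -p \
  '{"spec":{"bpfEnabled":true}}'
```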
Now, in a couple of weeks' time I'll be doing a webinar where we'll do a deep dive into the eBPF packet flow, but today we're not doing that; I just want to show you that eBPF is running. I thought the easiest way to show that is to look at the logs for one of the calico-node pods and grep for "bpf" — and we get loads of BPF activity.

Sorry — where can we find the information about your workshop, your deep dive?

On the Tigera website there is an events section; it will be published there soon. Actually, I don't think it's published yet — I believe the event is mid-July — but it will be published quite soon, and we have loads of great free events on there, so I encourage you to go and take a look.

And there is a Tigera certification, right?

That's right — actually, we have two certifications now. We have the Level 1 Certified Calico Operator, which is a really great course; I did that course myself before I joined Tigera and I really enjoyed it. And we also have a new AWS certification, Certified Calico Operator: AWS Expert. Those are both useful.

So that's it: we've shown that we've turned on eBPF, and now we can run the same benchmark we ran before. To be honest, the only reason this is running quicker is because I changed the speed of the recording, but the point is that we'll compare these results in a minute to the other results that we took. And the last thing to show before we look at the results is this one — this is quite funny, actually. When I recorded this demo this morning, I came back to look at the logs before I hit the site again myself, and you can see two bad guys on the internet have been trying it already: you can see one trying to log in, and I don't know what the other one is doing, but he's definitely up to no good. Then you can see that if I run the curl again, I get the website again, but now when I look at the logs you see my real public IP — although I actually edited this IP, because I don't want to share my real public IP — but the point is, you get the real public IP.
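And the corresponding checks once eBPF is on, using the same placeholder names as before:

```bash
# Confirm the eBPF data plane is active: calico-node logs fill with BPF lines.
kubectl logs -n calico-system -l k8s-app=calico-node | grep -i bpf

# Repeat the earlier test: this time the customer pod should log the
# caller's real public IP, because only a destination NAT is performed.
curl http://<EXTERNAL-IP>/
kubectl logs -n yaobank deployment/customer
```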
So just to recap before we look at the benchmarks: we showed how the performance is better, we showed how to swap one data plane for the other, and we showed how you get source IP preservation.

Great presentation and amazing information. Calico and eBPF are open source projects, right? How can we learn more about them, and is it possible to contribute? What expertise should we have? We know there are many different opportunities to contribute, like documentation or maybe some kinds of training, but how can people contribute to both projects?

That's a really good question. If you go to projectcalico.org, you will find a link there to the community website, and as with most open source projects, you can get involved at whatever level you're comfortable with. If you just want to submit a docs PR and help us improve our documentation, that would be amazing and appreciated: we have some good open source documentation, but obviously we would love contributions there. We have community meetings, and we have our Slack channel — part of my role is to talk to users and understand where they're encountering challenges with our product, because no product is perfect. So even if you just want to contribute by coming to Slack and sharing your experience, that would be useful. And then, if you have deep technical expertise in networking and Go, or you want to contribute to eBPF itself, Calico and eBPF are both public open source projects, so it's all there.

I thought we should compare those benchmarks quickly before we run out of time, because we never actually did that. Let me just show it: if we pull up the eBPF results — no, let's pull up the standard ones first.

That's a good point. We can see better network performance in both cases. When you have a very simple implementation — a few microservices, low traffic — maybe you'd be comfortable with the standard implementation, but when you go to a very large implementation, with many microservices and lots of transactions, you can feel a huge advantage in adopting this approach with Calico and eBPF. So, Chris, from your experience: what are the biggest implementations you've seen, what were the challenges, and were the results good enough to justify using Calico with eBPF? Do you have any numbers from current implementations?

I don't want to give you a number that isn't accurate, and I actually don't know a particular cluster size, but I can find out. I think someone was going to share our Slack channel, and I will join the CNCF Slack channel for Cloud Native Live in a minute, so let me find the answer to that one offline for you, because I don't want to give you the wrong answer. But what I can tell you — if we switch back to that slide again — is that one of the advantages you get when you replace kube-proxy is a latency reduction, and you can see this caveat here: it says "most noticeable with many short-lived, latency-sensitive applications". What's cool about this is that the latency reduction becomes more and more impactful the bigger your cluster is and the more short-lived connections you have. So although you get these immediate benefits from taking away kube-proxy, if you have a cluster with loads of short-lived sessions, you will see loads of benefit: the benefit becomes even bigger with a larger cluster.

I was going to show you these as well. You can see that this is the standard data plane, and this is a gigabit NIC, so you can see it's doing about 900 Mbps TCP and about 800 Mbps UDP. And the eBPF data plane — I changed tabs up here — is doing 900 Mbps TCP, just the same, and you can see it's now doing nearly 900 Mbps UDP as well.
So the improvement here doesn't look as big as on the slide, but that's simply because this is a gigabit NIC, so you don't see the improvement so much. I could probably have done the demo on 40-gig bare metal, but I didn't want to pay for it! And the other benefit you can see is in terms of CPU utilization: here it's 15 percent and here it's 18 percent, so that's similar, but for UDP it's around 60 percent client CPU for 800 Mbps, while with the new data plane —

Oh, amazing — it's faster and it's using nearly half as much CPU! These numbers are amazing.

Yeah, and the same here: 800 Mbps versus nearly 900 Mbps, and the same thing, 61 percent down to 32 percent.

Really amazing. Chris, that was an amazing presentation and demo — many things to learn and improve. I hope you can participate in the next deep dive, because it's really great. We don't have more questions right now, and I want to really thank you so much for this presentation, this live stream today.

No worries at all, that's fine. I'll go and join your CNCF Slack channel for Cloud Native Live, and people are also welcome to join us in the Calico Users Slack channel, or just find me on Twitter or anywhere else. Thank you so much for taking the time.

Thank you, Chris. Thanks, everyone, for joining us today for this episode of Cloud Native Live. It was great to have you with us, Chris, talking about Calico with eBPF — really amazing numbers: nearly half the CPU usage, with more performance, of course. We don't have many questions, but — oh yes, we do have one question here now: where can we find more about performance studies of different setups?

Probably the best place to look for that is the Project Calico blog. We are testing different scenarios and lots of different platforms, and we continue to publish performance stats for different clouds and platforms there, so that's the best place to watch. And even if there are some particular results you can't see there right now, we're publishing new ones every so often — and if there's a particular thing you'd like to see, come and tell us about it in Slack, and maybe we can prioritize that one.

Thank you, Chris. Everyone, thank you so much for joining us today, and see you next week. Thank you so much, Chris. Nice to meet you, everyone. Take care. Bye-bye.