Hello everyone, welcome to the Cilium project updates, or, as you can tell from all the puns going on here, the latest buzz. We've got a cast of thousands, or six at least: friends from across the community talking about the latest things happening in the project. So somewhere there's a clicker but I can't see it, so I'm just... ah, there it is, brilliant.

All right, I'm going to start by giving a quick overview of what Cilium is and welcoming you all. I'm going to guess most of you have heard of Cilium. How many of you are using it as a Kubernetes CNI? Quite a lot of you, excellent. Do we have any people who have been trying the service mesh in Cilium? Yeah, I see a few hands, good. How about Hubble for observability? Who's using Hubble? Yeah, a few hands. And Tetragon for security, any users? Excellent, good. So, all these amazing, powerful tools being built by the Cilium community based on eBPF, and I imagine a lot of you know the CNI pretty well. It's a very full-featured CNI, and it's very high performance. I'm not going to have time to walk through all these features, but I will mention the NAT gateway, because I believe there's a talk about that tomorrow afternoon by Daniel Borkmann, so that will be one not to miss. Service mesh: we've been talking about service mesh this week at ServiceMeshCon, continuing to support the sidecar model through Istio for people who want that, but also the sidecar-free model that I think a lot of people were really excited about when we first announced it. That's stable, we're very close to having Gateway API capability now as well as the Kubernetes Ingress, and there's progress being made on next-gen authorisation and authentication. So lots of activity there. Hubble: we're going to hear from our friends at Grafana about some of the amazing things happening in the Hubble observability world. And Tetragon, which is runtime security: observability, detection of events, and even prevention and enforcement, by being able to kill malicious processes from within the kernel. Super powerful, super exciting.

So that's a very quick run over what Cilium covers. We don't have enough room on a slide to put all of the users, so let's say everybody in this room should put themselves into the USERS file on GitHub. If you're not already listed in the USERS file, talk to Bill; you'll meet Bill later. He will help you get that listing so you can show off how great and how cool you are as a member of the community. Also, as you're going to hear more about, Cilium is powering pretty much all the clouds out there these days. So wherever you're running Kubernetes, even if you're not managing Cilium yourself, there's a pretty good chance that underlying it there is Cilium. With that, let me introduce the first of our cloud provider speakers for today's talk. I would love to welcome Purvi Desai from Google.

Thank you all, and thank you, Liz. I'm very, very honoured to be here talking about what we are doing with Cilium and Dataplane V2. This is really a continuation of some of the conversation that we had in Valencia. Just a quick show of hands, how many of you were in Valencia? Great, all right, cool. So, starting with what we're going to cover today: before I go into the topic, let me introduce myself. I'm Purvi Desai, director of engineering in Google Cloud Networking.
I'm going to cover Cilium and Dataplane V2 in two parts. First, I'll cover where exactly we are today, which will also explain why I'm here talking to all of you. Second, I'll cover what we are doing in Cilium and Dataplane V2, largely community updates focused on modularisation, Kubernetes conformance, and the multi-network work. With that, let's start.

All of you already know this, but a quick recap: Google open sourced Kubernetes in 2014, and it went GA in 2015 along with Google Kubernetes Engine. At that point it was largely iptables based, as most of you are aware. But from very early on, we were seeing the powerful paradigm of eBPF and Cilium: exposing programmable hooks from the Linux kernel and having a powerful data plane. So in 2019, when we started delivering Anthos, which is our platform for running workloads in customer data centres or multi-cloud, we saw the need for a lot of features, flexibility and observability, and that is where we brought in the eBPF-based data plane. For us, it was an easy decision: at that time, Cilium was the vibrant community and a powerful stack. So, based on Cilium, we introduced Dataplane V2 as our data plane of choice in 2020, and it has been a great journey for us. We have been running Dataplane V2 on GKE, and not only that, we have made it generally available for Anthos and our newer platforms of Google Distributed Cloud, including GDC Hosted. As a matter of fact, it's now the default on Autopilot, our flagship GKE offering, and our fleet is rapidly migrating to Dataplane V2, in many cases automatically. Since then, we have launched a lot of new features on top of it, and I'm going to cover some of the benefits that our users, and we ourselves, have seen.

First, something almost all of you know, but that I really want to highlight: one of the superpowers, largely hidden, is the developer-first networking model of Kubernetes. Dataplane V2 is a Kubernetes-conformant data plane stack and CNI. We worked with the Cilium community on this at the beginning, and we continue to focus on keeping it conformant. The second important aspect is consistency of features: we are able to deliver a consistent experience on GKE, Anthos and GDC at the same time, without lowering the bar to the lowest common denominator. On GCP, for example, we have container-native VPC, container-native load balancers, Kubernetes-native integrations, and VPC flow logs that are completely GKE native. We wanted to make sure that while delivering a consistent experience across all these platforms, we are not compromising on the feature set of each environment. We have also been able to do a lot of customisation in Cilium, based on eBPF, but since we don't have enough time today and we have a good line-up to get through, I'm going to skip that. In terms of feature velocity, we have been able to release lots of new features that our users wanted, and we have upstreamed many of them: egress NAT gateway, improvements in scalability, IPv6 support and, most recently, SCTP support for telcos. And last but not least, ease of operation: we are able to upgrade our fleet without needing to upgrade sidecars or the kernel.
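[Editor's note: as a concrete illustration of the above, here is a minimal sketch of how Dataplane V2 is selected on GKE; it was not shown in the talk. The flag assumes the current gcloud CLI syntax, and the cluster and zone names are placeholders.]

```bash
# Create a GKE cluster with the Cilium-based Dataplane V2.
# Flag name assumes the current gcloud CLI; names are placeholders.
gcloud container clusters create dpv2-demo \
    --zone us-central1-c \
    --enable-dataplane-v2

# Dataplane V2 enforces Kubernetes NetworkPolicy natively, so no
# separate network policy addon is needed on such clusters.
```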
So all of that was the good news, but, as I covered in Valencia, we have also been listening very closely to feedback from our customers. They love the opinionated model of Dataplane V2, because that is how they get guarantees of SLOs, performance and quality. But at the same time, they also want to use many of the open source tools like Hubble or Tetragon. That is why we have been working for the last six months on modularisation. We see the power of "and": a vibrant ecosystem of networking features that our users can build and bring. That's the journey I'm going to cover very quickly in the next few minutes. Before that, I want to say that we are very proud of what has happened in the last couple of years with the eBPF and Cilium-based data plane. We are also very proud and happy that Google Cloud was the first to adopt Cilium as its data plane foundation, and we have been receiving a lot of great feedback from our users. But modularity is something we still have to work on.

So let me quickly go over what we have done on modularity. In the last six months there has been a lot of work in the Cilium community going through the entire data plane stack to establish what can be modularised and how. On top of that, we now have a requirements document out, so that we, as a community, can align on what problems to solve before moving on to how to solve them. That has been the progress so far. Google has always been committed to Kubernetes, and now we are also committed to Cilium, and we have been deeply focused on making sure that our data plane is fully Kubernetes conformant. As a natural extension, we have been continuing to build a bridge between the two; to that effect, we have done some recent upstreaming, for example SCTP support and CiliumEndpointSlice enhancements. But at the end of the day, more needs to be done on conformance. On the conformance side, what you won't necessarily see is that there are a lot of small things that also need to be addressed. For example, network policy should work with IPv6; there may be minor issues on older kernels; and Istio has to work alongside Dataplane V2. We cannot tell our customers to upgrade to the latest and greatest kernels, so we have been working very diligently to support different kernel versions. And there is more work needed to make network policy fully Kubernetes conformant, simply because new features keep arriving in the network policy area of Kubernetes. The north star here is a tight loop of coordination between Cilium and Kubernetes, so that our users have predictability in what they can expect.
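[Editor's note: to make the conformance point concrete, here is a minimal sketch, mine rather than the speaker's, of the kind of standard NetworkPolicy a conformant data plane must enforce identically for IPv4 and IPv6; all names and CIDRs are placeholders.]

```yaml
# A vanilla networking.k8s.io NetworkPolicy; a conformant data plane
# must enforce it for IPv6 traffic just as it does for IPv4.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-v6   # placeholder name
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    - ipBlock:
        cidr: fd00:10::/64   # placeholder IPv6 range
    ports:
    - protocol: TCP
      port: 8080
```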
Last but not least, the multi-network update: we had promised in Valencia that we would give you an idea of what we are doing with multi-network. For us, the main reason for driving the multi-network effort is to make networks Kubernetes native, in order to bring the full power of things like network policy, native load-balancing services, and even scheduling and IPAM. There are considerable benefits to making the Kubernetes API and machinery aware of networks, and that is exactly where we are driving. I'm pleased to update you all that at this very KubeCon, there were meetings on this area at the Contributor Summit, and the SIG has finalised, or I would say narrowed down, the use cases and requirements it would like to address as a first cut. A working group for multi-network has been set up under the Kubernetes SIG, and we have also been having a very active conversation with the Cilium community to understand and co-design what this would look like and how it could be built in Cilium. As you can see, there is a lot of good work happening here, and a lot of the folks working on multi-network are in this room; if you want, raise your hands. Good: Maciej, Antonio. Do feel free to reach out to me later if you have any questions on this topic. It has been a great journey so far, and I'm pleased to see that we now have another cloud partner in this. So, I'm going to hand off to Liz.

There we go. Thank you very much, Purvi; a round of applause for Purvi. And let's also welcome to the stage Chandan Agarwal from Microsoft and Azure.

Thank you, Liz, and thank you, Purvi. Well, I'm Chandan. I'm a group engineering manager at Microsoft, and I own the Azure container networking space for AKS and our serverless offering. I'm really excited to be part of the Cilium community updates and to share the good things. Before I get started, I want to know how many people are using Azure AKS. Good, good. Then I think the next thing you're going to see will be exciting: we just announced that the next version of Azure CNI will be powered by Cilium. It's a big undertaking, actually. Azure CNI has been built around making containers and cloud native workloads first class in Azure Virtual Network, with the scale and performance that are required. With this, we are taking it one step further: bridging the gaps between cloud networking and the containerised world we have in the Kubernetes space, bringing the best of both together. With Azure CNI powered by Cilium, we have a very tight integration with the Azure network stack, so you don't have to do any chaining for VNet IPs or overlay IPs, and you don't have to worry about replacing things yourself. Out of the box, Azure CNI and Cilium work together, giving you the eBPF data plane for your network security, the network visibility you are used to getting, and of course the elastic scale that Azure Virtual Network can provide. It has been phenomenal to give this back to the community and to keep contributing back, bringing the power of eBPF and this data plane to cloud native workloads on AKS.

A little bit on the architecture, actually. I mentioned the investment we have been making to ensure the Azure network stack can handle the scale and performance; that has been a key prerequisite before we can say that cloud native on Azure is as good as anywhere else. We recently announced overlay mode, which is a first-class capability in Azure Virtual Network: where you don't have unique IP address space, you can use overlapping overlay address space across different Kubernetes clusters without the penalty of double encapsulation in your guest VMs or of burning your CPUs. So you can do both: pure VNet mode, in which containers and pods get unique IPs from your virtual network, or overlay mode, without paying any decap penalties. And with this, we are integrating very well with the Cilium data plane: both VNet and overlay modes can use the Cilium eBPF data plane for network security and observability, with no kube-proxy iptables business inside the guest Kubernetes. That is a very, very powerful thing to be able to do in Azure today. And this is how you can try it today: you can use the REST APIs, with one single command saying, hey, I want to enable the Cilium data plane, and behind the scenes the magic is done. IPAM comes from Azure, the pods are first-class citizens of the virtual network with direct routing, and Cilium runs inside the guest to give you the best data plane we can offer.
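[Editor's note: for concreteness, a hedged sketch of what that single command can look like via the Azure CLI rather than the raw REST API; the flag names below assume the current az CLI syntax, which may differ from the preview-era commands, and the resource names are placeholders.]

```bash
# Create an AKS cluster using Azure CNI in overlay mode, powered by
# the Cilium eBPF data plane (flag names assume the current az CLI).
az aks create \
    --name cilium-demo \
    --resource-group my-rg \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --network-dataplane cilium \
    --pod-cidr 192.168.0.0/16
```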
That's what I have for today. Please feel free to see me afterwards if you have questions or follow-ups. Thank you so much.

Thank you so much, Chandan. That's really great news. Next up, our friend Richie from Grafana Labs is going to talk about Grafana and Cilium integrations. This is going to be really good.

Thank you. So when we announced this, quite a few people contacted me and were a little bit surprised. I don't think it's too surprising. Culturally, those two companies are really closely aligned; you see this when talking to the people working on the other side. That's why we have this strategic partnership, and that's why we chose to make this strategic investment and collaborate even more closely. Both Isovalent and Grafana Labs are in the exceedingly lucky position of having people working there who can literally choose where they want to work, and they chose those companies, and there is a reason. I strongly believe, and yes, I'm very opinionated, that we are thought leaders shaping this market and this space, and I do believe that we are both going to be market leaders. So I really think this is an insanely powerful collaboration. There are two things I want to focus on right now, if the clicker works. Well, it doesn't matter. Twenty years ago at university, I was working with Bro and with Berkeley Packet Filter, BPF for short. Then you had arguably two waves of eBPF. And finally, finally, eBPF is really front and centre for a lot of people. It always had huge potential, but it took some time to get there. That also tells you why we are initially focusing on networking: of course, that is where historically the most work has been done. The most work and polish has been invested in this precise area. So you get insane depth into your networking stack, really deep visibility which previously you simply could not get. I think this is pretty much the most impactful place where this partnership can start, where we can actually work together and extract new meaning, new data which was previously locked away from you. The other thing I want to focus on is distributed tracing.
Again, I'm strongly opinionated, so I do believe that for distributed tracing at cloud native scale, ideally with close and tight coupling to how Prometheus and friends operate, there's really only one thing, and that's Tempo. But there's one limitation of distributed traces: you get to see one path through your stack and how your stuff performs along it. What you do not get is large-scale statistical analysis of your complete stack, and that's why this coupling between Tempo and eBPF makes so much sense in my opinion. Not only can I follow what things actually do; at the same time I can see, okay, this and that HTTP call took this and that much time to return, so maybe I need to do something about my load balancers. Those types of investigations were exceedingly hard previously, and they should become a lot easier by jumping from "this is how one specific request behaved" to a really large analysis of what the underlying network is doing. And we don't have to use iptables anymore. That's it.

Next up, and I'm sure many of you already know him, is Bill Mulligan, who works with me at Isovalent. Take it away, Bill.

Cool, thanks, Liz. So, the Cilium community: since I joined Isovalent, it's been amazing to see the growth in both the Cilium and eBPF communities, and I'd like to highlight some of the things that have been popping up across the community. First, if you weren't at eBPF Summit, there were a lot of great talks about people using Cilium in production, including S&P Global, Trip.com and Form3. If you haven't seen them, check out those talks and learn how they're using Cilium in production today. On top of that, we've also had a lot of great Cilium user stories from around the community: from people using the Cilium layer 4 load balancer to massively reduce their CPU usage, to running Cilium to manage IoT hardware devices, to Datadog engineering securing and scaling ten trillion data points a day, to using Cilium to secure multi-tenant environments. These are just a few of the use cases that have come out of the community, and it's amazing to see what we're all accomplishing together. If you want to get started with Cilium and you haven't done it before, there are a lot of resources out there for you: check out cilium.io/get-started, check out the docs, or check out the Cilium Slack. If you don't know where to go, please feel free to reach out to me on the Cilium Slack; I'm happy to engage and point you in the right direction wherever you need to go. If you are already in the Cilium community and you need some help, once again, check out the Slack; there are a lot of channels focusing on specific features like service mesh or Tetragon. If you're on GitHub, feel free to look through the issues or open a new one. We like to hear feedback from the community; we only get better by hearing what is and isn't working in production, in real use cases, and we want your feedback as the community. If you need training, support or other things like that, check out cilium.io/enterprise. If you've been using Cilium and you have a great idea about where we should go next on the roadmap, or what we should prioritise, please feel free to open a CFP on GitHub; we want to hear what you think the project should include next. And if you're interested in where the project is heading, we also have a roadmap in our documentation.
For code contributions, there's the developer documentation, and again, engaging with people on Slack. On GitHub you can look for good-first-issues in whatever area of the project you're interested in contributing to. If you want to meet people, meet the maintainers: feel free to join our weekly developer call on Wednesdays. That's the code side. On the non-code side, I'd love to have your support helping promote Cilium. We have a bi-weekly newsletter that we send out around the community. We also have a Twitter account; feel free to follow it, and send me things that are interesting to the Cilium community. I'm happy to tweet about them or retweet them. If you have a story about Cilium and you need help telling it, I'm also happy to help you with that. We have a forum on our website. If you need help writing a CFP for KubeCon, DevOps Days or any other conference, I can help you do that. If you want somebody to look through a presentation or go over it, we can do that too. If you need a speaker, I can connect you with the right person. If you need swag for your event or some more stickers, either find me right after this or send me a request. We're happy to publish any blog posts or user stories from the community, give them a retweet, or add them to our newsletter. Please feel free to reach out; we're happy to help promote everything that's happening in the Cilium community. One thing you can do is add yourself to the USERS file. We need that to show that there's broad adoption across the community, and we always like to hear who's actually using our projects. Contact me to do a user story; it's a great way for people to learn how Cilium is actually being used in production. Last thing: we just launched our user survey for this year, so if you are using Cilium, please fill out the user survey. With that, I'll hand it over to Thomas.

Thanks a lot, Bill. Hello, I'm Thomas. I'm one of the creators of Cilium, and I want to briefly cover what is next, what is coming in Cilium. But first of all, today actually marks a major milestone, as I've just heard from Chandan: Azure with AKS has adopted Cilium. That means all three major cloud providers are now using Cilium under the hood in their managed Kubernetes platforms. As one of the creators of Cilium, it's a very big moment. It means that Cilium is widely adopted and that we've made at least some right choices. I think that is actually a yes.

Now, a quick outlook into Cilium 1.13. What are the amazing features that are coming? 1.13 is roughly planned for the end of the year; I'll go through some of them. Gateway API support is coming. The code is already there, it's passing the conformance tests, and it's in a pull request, which I've linked up there. We're only completing the documentation to make the cut line for 1.13. But there is more, because many of you have told us you really want layer 7 load balancing, but in the easiest way possible. So we implemented layer 7 load balancing using just Kubernetes services with annotations. You can go in and annotate your existing Kubernetes services with the Cilium L7 protocol annotation, say gRPC, and you get gRPC load balancing with eBPF and Envoy. And there is more: you can do this not only within the cluster, you can also add the existing cluster mesh annotation and do gRPC load balancing across multiple clusters. Very easy. You don't need to learn anything; your app teams can continue using pure Kubernetes services and simply use annotations, and Cilium with eBPF will do the magic below deck.
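[Editor's note: a hedged sketch of what that opt-in can look like. The exact annotation key was garbled in the recording and is an assumption on my part; check the Cilium 1.13 release notes and docs for the authoritative names.]

```yaml
# A plain Kubernetes Service opted in to Cilium's Envoy-based L7
# load balancing via annotation. The annotation key below is an
# assumption; verify it against the Cilium 1.13 documentation.
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service              # placeholder
  namespace: demo
  annotations:
    service.cilium.io/lb-l7: enabled # assumed key for L7-aware (gRPC) LB
spec:
  selector:
    app: my-grpc-app
  ports:
  - name: grpc
    port: 8080
    protocol: TCP
```

The point of the design is that nothing else changes: the Service keeps its normal spec, and the annotation alone routes its traffic through the eBPF-plus-Envoy path.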
Next, Cilium's next-gen mutual TLS. We have been listening to you closely, and the feedback we've gotten is that you want mutual authentication with Cilium network policy, but please, for any network protocol, not limited to TCP. So we are implementing, and there are details in the blog post I've linked there, a mutual authentication model that can integrate with SPIFFE, cert-manager, Kubernetes secrets and other identity management platforms, perform the mutual authentication in user space, but keep the data path on existing network encryption methods using IPsec and WireGuard, and thus support any network protocol. The way you will be able to use this, and it's already available as a pull request, is that it integrates into Cilium network policy: as highlighted there, you can mark existing allowed connections and require authentication for them. So the Cilium firewall will not only allow the traffic, it will also require mutual authentication between the endpoints.
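[Editor's note: a minimal sketch of what that policy shape could look like, based on the design described in the talk. Since this was still a pull request at the time, the exact field names are an assumption and may differ in the released version.]

```yaml
# CiliumNetworkPolicy allowing client-to-server traffic and, in
# addition, requiring mutual authentication for that connection.
# The `authentication` field is an assumption based on the design
# described in the talk; verify against the released Cilium docs.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: require-mutual-auth   # placeholder
  namespace: demo
spec:
  endpointSelector:
    matchLabels:
      app: server
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: client
    authentication:
      mode: required          # authenticate peers before allowing traffic
```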
SPIFFE integration is coming; the pull request is listed here. This will first of all give us identities, certificates for Cilium identities, but it will also give us a way to use SPIFFE IDs when we write policy: what paths it should apply to and, even more importantly, where it should allow traffic to or from. Typically you are using Kubernetes labels or namespace names or other metadata; with SPIFFE you can tie this to cryptographically secured SPIFFE IDs. Big TCP support: for those of you running large networks or doing video streaming, this extension enables Cilium to power single TCP connections up to 100 gigabit, if your physical network is fast enough to actually handle it; if you have such a network, this is definitely important. There is also a very fancy new veth replacement. For those who do not know, this is an implementation detail down below deck: veth is the virtual ethernet device that all the CNIs use to connect the container network namespace with the host, and it has traditionally introduced overhead. With this new veth replacement, we essentially allow that namespace boundary to be crossed at native host performance, so from a networking perspective your containers will be as fast as if they were running on the host itself, but with the network namespace isolation. We're very excited about this kernel feature, which we will upstream in the next couple of months and then add Cilium support for. And Grafana dashboards: we've heard about the Grafana integration, and we've added a lot of additional Grafana dashboards for the day-two operational aspects of Cilium, covering all the Cilium internals. I've listed a couple: BPF map sizes, amount of traffic, errors and so on, helping you run Cilium safely and monitor it as you operate it.

All of these amazing features made us come to the conclusion that now is the right time to apply for graduation. We've been an incubating project for about a year; I think it was about a year ago that we became an incubating project. So we figured, wouldn't it be amazing if we applied for graduation during a KubeCon session? Well, let's do it. Let's hope the Wi-Fi works with me. Yes, and there we go: it's pull request number 952. If you want to help us out, feel free to go to that pull request and leave a thumbs-up or a heart. If you are a Cilium user and you haven't added yourself to the USERS file, we would love to hear from you; as Bill mentioned, add yourself to the USERS file. If you want to publish a user story, go ahead and do it; we would love to publish it and promote your use of Cilium as well. And we have the latest edition of the Cilium user survey out there; we would love to learn how you're using Cilium and what you want to see next, what should go into 1.14 and beyond. With that, thanks a lot, everybody.

Let's get a giant round of applause for all of our wonderful speakers today: Purvi, Richie, Chandan, Bill and Thomas. We're pretty much at the end of time today, but we are all here, and we have the Cilium booth downstairs, so do drop by the booth. We have plenty of other people who work on Cilium here all week, so do come by, give us your feedback, ask your questions. We'd love to meet you. Thanks very much.