All right, I think we're just going to get started now. Thank you, everyone, for coming on the last day. I know it's hard with KubeCon being so long and being in and out of sessions — it's a lot of work — so thanks for coming out to my talk. My name is Christine, I work at Isovalent, and today we're going to be talking about Cilium as a project. I've been at the Cilium booth for the past few days, and I've seen a lot of questions — not just at this KubeCon but at previous ones as well — about what Cilium actually is, so I hope to clear some of that up. As a heads up, this is a beginner's talk. The CNCF has a large ecosystem, so hopefully this sheds some light. I've heard a lot of people say, "I've heard so much about Cilium as a CNI, but what does it really do?" Hopefully I can break that down a little bit. Also, I said the slides would be uploaded to Sched, but I think I have about three accounts on Sched, so I'll find someone who can upload the PDF afterwards — I apologize in advance. I was hoping you could follow along and take notes, but tech is hard.

All right, today's goals. I'd like to talk about CNIs: what they are and what they do. Then I'm going to talk about service meshes, because that's a whole other can of worms: what they are and what they do as well. And lastly, where Cilium fits into all of this. Apologies if this seems repetitive or trivial to you; I won't be offended if you stand up and leave. But I'm going to go over CNIs because sometimes people don't know what a CNI is, there are a lot of acronyms floating around the CNCF, and when I first joined I didn't know what any of them meant — people like to throw buzzwords around. So hopefully this breaks it down a bit.

So, what is a CNI? CNI stands for Container Network Interface. It's basically a set of specifications and libraries that let you write network plugins so containers can communicate with each other — network connectivity. But when you say, "I'm using a CNI in my cluster," you're usually actually referring to a CNI plugin. There are various plugins in the ecosystem; this is the GitHub link to CNI, which lists the third-party implementations — bridge, Calico, Cilium — and you can try them out if you want.

So what does it look like in your cluster? Each of your worker nodes is connected with the help of the CNI plugin you choose; it allows the nodes in your cluster to connect to each other. This is a very high-level, rough overview: the container, or group of pods, at the top left is what you've deployed to your nodes. The flow is that your CRI plugin — Container Runtime Interface, another acronym for you to learn — makes a call to the CNI plugin you've chosen, for example Cilium or bridge, and that plugin, following the CNI spec, configures the container so it can connect to your network. CNIs aren't just for Kubernetes, though; the spec is agnostic, so you can use it on Mesos, for example — it's a standard, and I want to emphasize that. But when people here talk about CNIs, they usually just mean CNI plugins; it's the colloquial shorthand. And CNI plugins generally let you configure pod-to-pod network connectivity.
This means how a pod is allowed to communicate with other network entities — which is why you need one for your Kubernetes cluster. And then there's usually some way to support network policies at L3 and L4. Network policies let you allow or deny certain traffic based on IP address and port, and pods can be identified — in terms of who they can communicate with — using the identifiers on the screen: pods, namespaces, and IP blocks, used individually or in combination.

Okay, not too bad so far, hopefully. Now we'll go over Cilium as a CNI. There's a decent chance your cluster already uses Cilium, because it's offered on many cloud providers; for example, GKE's Dataplane V2 uses Cilium under the hood. As a little side note, this is not a talk about eBPF — there are plenty of talks online that go deeper into the guts of the Linux kernel, and I'm not here to do that today. But what Cilium does is use eBPF — extended Berkeley Packet Filter, another acronym — to configure your network. What I like to say at the booth is that Cilium uses eBPF underneath so that you don't have to go into the guts of the Linux kernel yourself; it abstracts all of that complex networking tooling away for you.

When you have a cluster with Cilium installed, each node runs a Cilium agent as part of a DaemonSet. At a high level, the Cilium agent accepts configuration from the Kubernetes API server describing networking, service load balancing, network policies (CiliumNetworkPolicies), and visibility and monitoring requirements — Cilium does a lot, and the agent acts as the de facto manager on each node. It manages the eBPF programs that live in the kernel and that Cilium uses to control all network access there.

Cilium lets you create L3 (layer 3) network policies based on the following: endpoints, services, entities, and IPs/CIDRs. The term "entities" is used to describe remote peers that can be categorized without knowing their IP addresses. Cilium also has the concept of Cilium identities, where it groups pods or workloads together — it's a way to enforce security in a language that Cilium understands. And at the L4 (layer 4) level, you can apply network policies to ingress and egress and to ports.

All right, not too bad. Here is some YAML — I'm sorry to show YAML in the morning; nobody likes looking at code, and YAML for sure — but I won't torture you by making you watch me struggle in a terminal, because that's rough to watch in the morning too. This is a network policy that uses endpoint selectors and ports. You can see that we're matching on org=empire with class=deathstar, and that endpoints matching org=empire are allowed to land — that is, to reach port 80. And here's another screenshot of output — I tried to make it nicely colored so it's not as harsh — where you can see that the tiefighter is allowed to land on the empire's Death Star, while the xwing, which of course shouldn't land on the empire's Death Star, is not allowed. You can see the dropped packets in red.

All right, we made it through CNI. Take a deep breath.
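For reference, here is a minimal sketch of the kind of policy described above. It follows the Star Wars demo from Cilium's getting-started documentation, so the labels (`org: empire`, `class: deathstar`) and the port come from that demo; the policy name is just illustrative.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: rule1                      # illustrative name
spec:
  description: "Allow empire ships to reach the deathstar on port 80"
  endpointSelector:                # which pods this policy applies to
    matchLabels:
      org: empire
      class: deathstar
  ingress:
    - fromEndpoints:               # only endpoints labeled org=empire...
        - matchLabels:
            org: empire
      toPorts:                     # ...and only on TCP port 80
        - ports:
            - port: "80"
              protocol: TCP
```

A pod without the `org=empire` label (like the xwing in the demo) doesn't match `fromEndpoints`, so its traffic to the deathstar is dropped — which is what shows up in red in the observability output mentioned above.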
Now we'll jump right into service meshes, which are a whole other part of the CNCF ecosystem. I remember when I first started learning about service meshes, I thought, there's another thing I have to learn — and you can already tell that from the landscape slide. It's been used in a lot of jokes, but it is a complex ecosystem, and I know it can be very daunting the first time you look at all the projects.

Service meshes came from the rise of distributed applications and microservices. A lot of teams and companies found that as they moved to a microservice architecture, they were doing a lot of duplicated work — building the same tooling inside their application pods — and then there was also a need for observability at the application level, plus security requirements. All of these changes left developers feeling like they were doing the same plug-and-chug work over and over. Even though people were completing the same work, there wasn't much shared tooling or a common implementation. So instead of repeating that work, you get a dedicated infrastructure layer: common concerns like traffic management, observability, and security tooling are pulled out of the application into a standalone infrastructure piece.

Service meshes generally have these three pillars: traffic management, observability, and security. If you look up any service mesh, it probably lists one of these use cases on its website, or all three. For traffic management, you want routing manipulation, maybe header rewrites, and load balancing. For observability, a lot of people need metrics to show their PMs or higher-ups who don't want to look at code either, and sometimes that means a nice UI — for example, Linkerd has its own UI, Kiali is the one for Istio, and Cilium has Hubble. And lastly, security, which is a broad area: some form of identity, mTLS, authentication, authorization, and encryption, so you know your traffic is safe.

So where does the overlap happen? I like to say that the CNI cares about the networking, while the service mesh layer cares about applications being applications. The service mesh layer assumes you've already taken care of the networking; it lets your services interact with each other and abstracts away the need to know about your cluster's network. Traditionally, CNIs operate at L3/L4 while service meshes operate at L7. And I'm not here to tell you which service mesh to use, or even whether you need one — not everyone needs a service mesh; it's a lot of work. I'm no expert on your infrastructure layer, and I don't know your team's needs or what resources you have available. They're all different; make your own choices. I of course work for Isovalent on the Cilium team, so I'm biased, but I'm not here to tell you to use it. They all have their complexity and their pros and cons.

All right, so Cilium as a service mesh. Recall that a Kubernetes cluster with Cilium has a Cilium agent installed and running on each node. This is again what the node looks like, and what the Cilium agent uses under the hood is Envoy. Envoy is a well-known CNCF project that's been used in a lot of service mesh implementations, and here it's running on each node.
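As a concrete illustration of what that per-node Envoy enables, here is a rough sketch of the L7 half of the same Star Wars demo from Cilium's documentation — the earlier L4 policy extended with HTTP rules. The labels, port, and path follow that demo; the policy name is illustrative.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: rule1-l7                   # illustrative name
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
    - fromEndpoints:
        - matchLabels:
            org: empire
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:                   # L7 rules: only this HTTP call is allowed
            http:
              - method: "POST"
                path: "/v1/request-landing"
```

L7 rules like this are enforced by the node's Envoy proxy rather than by eBPF alone — which is exactly the handoff described next.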
So if, for some reason, eBPF can't handle your request, it goes through Envoy in the Cilium agent — that's the L7 part. But you're probably asking yourself, how does that happen? Cilium has custom CRDs for this: CiliumEnvoyConfig and CiliumClusterwideEnvoyConfig. Essentially, it's a stripped-down version of Envoy in terms of what you're able to use, and there's a link at the bottom of the slide if you're curious about what's implemented and what isn't. And a fair warning — I want to be upfront with you because it is more lightweight: if you apply a CiliumEnvoyConfig with kubectl that conflicts with one that already exists in the cluster, the two can clash. That's just a heads up; I don't want you to see the apply succeed and then have to debug why things aren't working. But Cilium also has Hubble and really good observability, so if you do run into that, you'll probably see very quickly why it's failing. The Cilium agent is also used in Cilium's Ingress implementation and its Gateway API implementation, but that's a whole other north-south discussion we won't have time for today.

Here is another nice YAML file — this one is a CiliumEnvoyConfig — and I just wanted to show you that load balancing is possible. In this example, traffic is split 50-50 between two backends. And here are the output flows from Hubble; you can see the to-proxy lines where traffic is forwarded, so you know it's going through the Cilium agent's Envoy. So you can do load balancing, URL rewrites, and so on — a lot of work has gone into Cilium's service mesh being lightweight and able to layer on top of your infrastructure. But there's also a lot of work still to be done. For example, on this slide there's a CFP for mutual authentication — a new feature rolled out in Cilium 1.14 — and we're continuously working to harden it and make it better for you. If you're intrigued by the network security side of things and want to learn more about mutual authentication, I'd suggest reading the blog post or reviewing the design CFP.

So, very briefly, this is what we've covered so far: Cilium as a CNI plugin and Cilium as a service mesh. Hopefully you can now differentiate the L3/L4 flows from the L7 flows, and remember that the Cilium agent's Envoy is used for tasks that aren't achievable with eBPF alone.

So what's next for you? Maybe you'd like to try out Cilium. If you're already using it, great — please leave some feedback. We have a very active community, and now that we're CNCF graduated, we'd like more companies and more people to contribute back to the project. We know a lot of people use it, and we certainly don't hit every single edge case ourselves, so we can only make the project better with everyone else's help. Go to github.com/cilium or join the Cilium Slack — there are different networking groups in there, for example Gateway API and service mesh — and yes, I'm asking you to help us out. If you don't want to contribute back yet — maybe it's a little too intimidating, and I know there's a lot of onboarding — you can also read the CNCF case studies if you're debating using Cilium in your project.
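Circling back to the CiliumEnvoyConfig mentioned above: a full config embeds raw Envoy resources (a listener, a route configuration, and clusters), so it gets fairly long. Below is a heavily abridged sketch of just the route portion that would do a 50-50 split, with hypothetical service names (echo-service-1 and echo-service-2); the complete, working example lives in the Cilium documentation linked on the slide.

```yaml
apiVersion: cilium.io/v2
kind: CiliumEnvoyConfig
metadata:
  name: envoy-traffic-split         # illustrative name
spec:
  services:
    - name: echo-service-1          # service whose traffic Envoy intercepts
      namespace: default
  backendServices:                  # backends Envoy is allowed to route to
    - name: echo-service-1
      namespace: default
    - name: echo-service-2
      namespace: default
  resources:
    # ...listener and cluster resources omitted for brevity...
    - "@type": type.googleapis.com/envoy.config.route.v3.RouteConfiguration
      name: lb_route
      virtual_hosts:
        - name: lb_route
          domains: ["*"]
          routes:
            - match:
                prefix: "/"
              route:
                weighted_clusters:  # 50/50 split across the two backends
                  clusters:
                    - name: "default/echo-service-1"
                      weight: 50
                    - name: "default/echo-service-2"
                      weight: 50
```

Note that this is not a complete manifest — the listener and cluster resources are omitted — so treat it as a shape reference rather than something to apply directly.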
And on that note about case studies: other companies have had good success scaling Cilium up, and there have been a lot of talks around that at this KubeCon. So if you want to learn more about how other companies have used and implemented Cilium, there are the CNCF case studies, which not a lot of people know about — I think they deserve more attention. There are also the eCHO videos if you'd rather watch something than read: the eBPF and Cilium community uploads videos weekly that go deep into the guts of things or cover different use cases and opinions. And if you'd like to be on an episode, you can reach out on the Cilium Slack and they can make that happen. Lastly, these are some other links you can use to learn more. There's a Cilium booth in the Project Pavilion downstairs, and I'll be around during the break to answer questions. I didn't want to do a demo this morning, but I do have a cluster spun up, so I can try to address individual questions — I'm not going to show it on the screen, because it's morning. Come find me later and I can try to help you with debugging or show you an example. All right, with that being said, thank you so much for coming to my talk, and have a lovely Friday.