Hi everyone. Hi, can you hear us okay at the back? Thank you so much for everybody squeezing in. Clearly a bit of an over-subscribed session, but we're so thankful that you're here and hopefully you'll enjoy this talk. Awesome. So we'll start with a quick intro. I'm Nico. Dan and I work for Isovalent, the creators of Cilium, and I'm a senior staff technical marketing engineer. Yeah. So my name is Dan Finneran. I'm part of the community team focusing on open source, so Cilium, eBPF and more. Find me at the Cilium booth in the project pavilion for any other questions that you may have later on. Awesome. So why are we here? Yeah. We're here to answer a quick question. Now look at this and tell me, is this you? People are not raising their hands. You're all Kubernetes networking experts. You can come on stage now. No, Kubernetes networking is hard. I'm a networking guy. I've been in networking for about 20 years, and I still find it really scary and intimidating, and we're here to help you understand it, make the most of it, and maybe explain why Cilium is the answer. So Dan is going to start by explaining why Kubernetes networking is so hard, and he's going to try to explain Kubernetes networking without any of the jargon, any acronyms, or at least with very few acronyms. We'll try. Definitely going to fail. And really talk about what the network is meant to do. What do we need the network to do for our Kubernetes cluster? And again, explain some of the reasons why Cilium has become the de facto Kubernetes networking platform. And we'll do a demo later on, and we'll talk about the newly announced and released Cilium Certified Associate certification, and take a few questions if we have the chance. Now, Dan, good luck. Thank you. So Kubernetes networking, not a small topic, but we're going to try and cover it as quickly as we can, and as mentioned with minimal jargon and no acronyms. So, as Nico mentioned, networking is a tough subject, and Kubernetes doesn't really help with that. So out of the box, Kubernetes doesn't even provide any networking. If you've been using EKS, AKS, et cetera, you probably have just been using clusters without having to worry about these sorts of things. But if you spin up a cluster by yourself, a lot of things don't come out of the box, and networking is one of those things. And it kind of gets even worse. So Kubernetes networking introduces a whole other network that exists within the cluster itself. So you have the underlying network, and now you have this new network that exists within your Kubernetes cluster. So why are we here? What do we want to talk about? A little bit of why these things exist, how we can use them, and how they all hang together. So one of the design principles of Kubernetes is isolation of workloads. Things are generally encapsulated and hidden away as best as possible from the infrastructure. So applications that you deploy typically will live in this new network, which is commonly referred to as the pod network, and that's the embedded network that exists within your Kubernetes cluster. What I have here is specified as a CIDR. That's the first acronym that I've already shot myself in the foot with. It's basically a different network range that exists within the cluster that should be separate from anything external. And the things that run within this pod network should ideally not really be directly connectable.
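To make that separate pod network range concrete, here is a minimal sketch of how it might be carved out when creating a local cluster with kind; the CIDR values are arbitrary examples, and the default CNI is disabled on the assumption that a CNI such as Cilium will be installed afterwards.

```yaml
# kind-config.yaml -- illustrative only; the CIDR ranges are arbitrary examples
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true        # bring your own CNI (e.g. Cilium) later
  podSubnet: "10.244.0.0/16"     # the embedded pod network that lives inside the cluster
  serviceSubnet: "10.96.0.0/12"  # a second internal range, used for Services
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```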
So in most circumstances you don't access pods directly. We need to have a way of exposing the things that are running within that network. And all of this has introduced a few problems. Namely, who owns this network? We're starting to find a little bit now that things are maturing, people are deploying more and more things, and the networking team, who have commonly been looking after switches and cabling and routing, are now being asked to debug or manage these Kubernetes networks that they had no hand in creating or architecting or designing, and they don't really understand why they exist or who put them there, and things like that. Ultimately, again, what tools do they have, or what tools do any of us have, in order to understand, diagnose and debug this Kubernetes network? How do we observe the network? How do we understand any bottlenecks or anything that's performance impacting? And finally, how do we enforce any control over this network? Typically before, we would use firewalls and we would have various things that would plug into the network to allow us to understand and enforce that. Again, we've got this new black box network that nobody understands and people are being asked to manage, et cetera. So, with this, we're going to step through Kubernetes networking from two standpoints: one, from an architectural standpoint, and two, from the standpoint of the application developer and deployer. So, these are the core requirements for Kubernetes networking. Pods need to be able to speak to each other. So, we have our Kubernetes cluster made of multiple nodes. We have the network that exists within that Kubernetes cluster. And we have multiple applications that are deployed within pods. Pods will need to speak to each other regardless of where they live. So, that means that we'll need a mechanism for moving traffic between the nodes and then sending it on to the pod as if everything was all on the same flat network. We need to access those applications. So, as I mentioned, these pods that we have running within that network, we shouldn't really be accessing them directly. So, we need a way of getting traffic into our cluster, into that cluster network, and then onto the pods themselves. And we typically do that by exposing things through Kubernetes services. A service is accessed in a number of different ways. You may want to access the pods that sit underneath a service internally, but in most cases we'll want to access it from outside the Kubernetes cluster network. We do that in two common ways. One is to expose a port on all of the servers that are part of that cluster. That's a NodePort. Anything that accesses any of those servers on that port will go to that service, and on to the pods that sit underneath that service. Additionally, we may use a load balancer service. That will create a new address on the network with a port attached to it. When we access that IP address on the network, traffic is then routed into the service and on to the pods behind it. So, this is how we tend to get traffic into a Kubernetes cluster.
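As a rough illustration of those two inbound access methods, here are two Service manifests; the names, labels and ports are hypothetical placeholders rather than anything from the talk.

```yaml
# A NodePort service: the application is reachable on port 30080 of every node
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web             # hypothetical pod label
  ports:
    - port: 80           # port inside the cluster network
      targetPort: 8080   # port the pod actually listens on
      nodePort: 30080    # port opened on every node
---
# A LoadBalancer service: a new, externally reachable IP is allocated for the service
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```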
So, that's coming in. What about traffic going out? This can cause headaches in a certain way. In most circumstances, when pods that are running within a Kubernetes cluster send traffic out, the endpoint that you're accessing won't see the traffic coming from the pod itself. It's not direct access to the pod. So, how does traffic work in both directions? In most circumstances, we will effectively change the source address of the traffic to be the address of the node where the pod is running. So, as far as whatever it is that you're accessing is concerned, the traffic came from the machine where the pod was actually running. That means the traffic can go back to that machine and then on to the pod where it's actually up and running. And as your pods are rescheduled, they will go to other machines, and the source address they appear to come from will change to wherever they are up and running. That's fantastic in most circumstances, but there are issues with it. If you have thousands of nodes and you are trying to access something that is behind a firewall, then you're going to need a lot of rules in that firewall, because you have no idea where that pod is actually going to be running, rendering your firewall largely pointless at that point. So, in order to get around that, we have the concept of egress gateways, and how that typically works is that traffic is funneled to the egress gateway, and then the traffic goes from that egress gateway to wherever it's going in the outside world. So, regardless now of where your pod is actually up and running, it doesn't make a difference if it's on node one or node 1000. As far as whatever you're accessing is concerned, it's all coming through that egress gateway with that permanent-looking IP address. So, that's the architectural standpoint of traffic inside the cluster, traffic coming into the cluster and traffic coming out of the cluster. We now need to look at how application users typically use those architectures to get traffic to their applications. So, the first requirement is that we want to secure the traffic that's running within our Kubernetes cluster. Out of the box, there are no rules within a Kubernetes cluster about what can access what. So, all of your pods can speak to all of your other pods within a Kubernetes cluster. In order to secure that, we have the concept of network policies, and a network policy can enforce which pods can speak to which other pods, which pods can speak to which services, and what we can speak to outside of the cluster as well. So, we can have rules internally in a number of different places, and they're effectively analogous to firewall rules, but they're known as Kubernetes network policies. We also may need things like encryption. So, for instance, as I mentioned, when pods speak to other pods and they're on different nodes, traffic is going to need to leave that node, go onto the underlying network, traverse the network, and reach the node where the other pod is actually running to get end to end. If we don't control that underlying network, we don't know who may have access to it. They may be able to sniff that traffic and things like that. So, we may want to encrypt that traffic should it go onto an unsecured network when it needs to reach a different node.
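Coming back to the egress gateway mentioned a moment ago: in Cilium that behaviour is configured through a CiliumEgressGatewayPolicy resource. Here is a minimal sketch; the labels, destination CIDR, node selector and egress IP are hypothetical values for illustration only.

```yaml
apiVersion: cilium.io/v2
kind: CiliumEgressGatewayPolicy
metadata:
  name: egress-to-legacy-firewall
spec:
  selectors:
    - podSelector:
        matchLabels:
          app: billing              # hypothetical: which pods the policy applies to
  destinationCIDRs:
    - "192.168.100.0/24"            # hypothetical: the external network behind the firewall
  egressGateway:
    nodeSelector:
      matchLabels:
        egress-gateway: "true"      # hypothetical label marking the gateway node
    egressIP: "192.168.1.50"        # optional fixed source IP the traffic will appear from
```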
And then finally, accessing. We already mentioned things like node ports and load balancers. Those are very simplistic, you know, layer 2, layer 3, layer 4 access into our applications. There are more common, or even more advanced, use cases that people may require, and we tend to address those through things like ingress controllers. So, an ingress controller will be running within your Kubernetes cluster. It will be given an accessible address on the network. And when traffic comes into that ingress controller, we can look at what's actually happening. So, in this example, we can see we're accessing our ingress controller as a web service, and we can see here we're hitting the IP address of the web service and we're passing the path /yellow. Now, the ingress controller can be configured to look at the path that we're actually specifying and determine that the traffic should then go to this service and the selection of pods that sit underneath it. Likewise, if you configure a rule for the path /red, then it should go to a different service. And ingress controllers are typically how you manage multiple services that are all web facing. That's typically been done through the concept of ingress controllers and Ingress-type objects within the Kubernetes object types and things like that. We're now moving towards a new technology, a new project within the Kubernetes space, which is called Gateway API. The main reason for this is to pretty much standardize, across all of the different networking types, how we access things within a Kubernetes cluster. So, it will handle things like HTTP routes like we just mentioned. So, you'll create an HTTP route for /yellow, for instance, and that will work in the exact same way that the Ingress would have done before. But later down the line, there are going to be things like TCP and UDP routes, which will be comparable to the service type LoadBalancer, and there are more routes being added as well. So, Gateway API is pretty much the future for Kubernetes networking moving forward. So, that's Kubernetes networking explained as quickly as I could. I'm going to hand you over to Nico. So, if we had to summarize, what do we need the network to do? We need the applications to have accessible IP addresses. We need them to be able to talk with each other. We need to set up outbound access; we need to let them access the outside network. And likewise, we need to be able to bring traffic to them. We need our applications secured. We need to provide some load balancing to make the applications resilient. We may need to meet some regulatory compliance and encrypt the traffic. And finally, we need to operate and troubleshoot them. And notice I've not mentioned the word pods or anything. These are common networking tenets that must be applied regardless of the compute that you have. Whether it's a virtual machine, bare metal, or pods, all of these are things that we need the network to do. But in Kubernetes, it means that you need a CNI to provide the connectivity, the pure connectivity: giving an IP address to your pods and making sure they're able to talk to each other. We also need to provide the load balancing. We need to do things like security and network policies. We need to encrypt the traffic. And we need an ingress controller or Gateway API. Finally, we need to connect multiple clusters together, maybe do some load balancing across them, and use them for high availability and resiliency. And you can see my PowerPoint skill set here.
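To make the /yellow and /red path-routing example concrete in Gateway API terms, here is a minimal HTTPRoute sketch; the gateway name and service names are hypothetical.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: colour-routes
spec:
  parentRefs:
    - name: my-gateway            # hypothetical Gateway this route attaches to
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /yellow
      backendRefs:
        - name: yellow-service    # hypothetical backend Service
          port: 80
    - matches:
        - path:
            type: PathPrefix
            value: /red
      backendRefs:
        - name: red-service
          port: 80
```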
And finally, the last piece of the puzzle is network observability. And all of this is Cilium, which is now the only CNCF project in the graduated category for cloud-native networking. So I think we can say, humbly, that it has become the de facto Kubernetes networking layer. And if you've been running Kubernetes clusters for a few years now, you've probably accumulated a set of tools in your environment. Maybe you've used a CNI for the network policy support. Maybe you added an ingress to bring traffic into your cluster. Maybe you have kube-proxy. Maybe you have a service mesh, for encryption or observability. Maybe you have something like MetalLB for load balancing or to assign IP addresses. And I don't know about you, but many people I talk to have tool fatigue. They're struggling with maintaining and operating so many different tools. Now, what we would advise and recommend, maybe for your next cluster, is to start with Cilium. Deploy Cilium. You don't have to enable all the features from day one, but Cilium can do all these things. So you may not need to deploy some of those tools; just deploy them as you need them, for example when Cilium doesn't support a feature. And what underpins all this is eBPF. So I'm not allowed to talk about this for too long, because I can and will talk at you for hours about eBPF. You may have been wondering why there are bees everywhere. Who in the room has heard of eBPF? And has anybody written any eBPF? Whoa! I did not expect that. So eBPF is effectively the technology that everything Cilium-related is built upon. We use eBPF to control how the traffic moves around within a running machine. We use it to control the behaviour of that traffic and to load balance the traffic. We use it to enforce network policies. eBPF effectively allows us to reprogram a running machine and make it do what we want it to do. So eBPF is the foundational layer of everything Cilium-related. That's Cilium itself as a CNI. That's Hubble being able to provide that observability. And Tetragon, which we're not going to have time to cover today, but we use Tetragon, sitting on top of eBPF, to control what can and can't happen on a running system. eBPF is the secret sauce for everything Cilium-wise. But do you need to know eBPF to use Cilium? No, but I would advise learning it because I think it's fantastic. The reason why we're not really going to delve into it is that it's quite complex. It's very low-level kernel programming, so you don't need to know about it; we have taken care of all of that for you. But if you do want to learn about it, there are a lot of resources out there. And Liz will be signing her book about eBPF at 6pm at the Isovalent stand later. Fantastic. So we're going to start seeing Cilium in action. I guess the two core use cases that people use Cilium for to begin with are the support for network policies to secure the cluster, and observability. And I'll start with that, and then I'll show you some Gateway API support.
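As a rough sketch of that "start with Cilium and enable features as you need them" idea from a moment ago, here is what a Helm values file for the Cilium chart might look like; treat the exact option names and accepted values as assumptions to verify against the documentation for your Cilium version.

```yaml
# values.yaml -- illustrative sketch, verify option names against the Cilium docs
kubeProxyReplacement: true     # let Cilium take over the kube-proxy role
hubble:
  enabled: true
  relay:
    enabled: true              # needed for the Hubble CLI and UI
  ui:
    enabled: true
encryption:
  enabled: true
  type: wireguard              # transparent node-to-node encryption
gatewayAPI:
  enabled: true                # built-in Gateway API support, no separate ingress controller
```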
But again, network policies tend to intimidate a lot of people, and I don't think we explain them in simple terms, so I'll try to do that. If you look at the network policies, you'll see a lot of our documentation and labs are built around Star Wars themes. If you don't like Star Wars, don't worry, you don't have to know it. But a network policy is essentially based around a couple of blocks and fields. First, you need to work out who this policy applies to, and that's done using selectors. You can use labels and say, this network policy will only apply to a pod with this specific label. So here it's only going to be applied to my Death Star. Then you decide in which direction this policy applies. This one is an ingress network policy, so it's a rule that looks at the traffic going to the pod. And then you just define from whom. So here we're only allowing access to the Death Star from the TIE fighter, on port 80. Simple. And this one is another one in the other direction. Again, it's the same structure: who the policy applies to, in this case my TIE fighter, and it's the traffic leaving the TIE fighter and going to, well, it can go to the, whoops, here we go, the disney.com API. So one thing that a Cilium network policy can do is filter based on the domain name. That's not something you have in the standard Kubernetes network policy, and it's something many users want Cilium network policies for: support for domain name filtering. And we're allowing traffic to the Death Star again on port 80. Again, hopefully simple enough to understand. So let's start with the demo. Demo time. All right. Can everybody see this? Okay. Right. So what I've got here is one of our hands-on labs. You can find them on isovalent.com/labs, free labs that you can access. We have about, I think, 33 now that let you test different use cases and features of Cilium, Tetragon and Hubble. So here I've got my cluster, which is running Cilium 1.14, on a Kubernetes cluster running kind. We've got three nodes and everything is healthy. And I'm using cilium status; I'm using the Cilium CLI, which is a binary that comes with Cilium and lets you check the status of Cilium, change the configuration, et cetera. So my environment is healthy. And I've deployed a demo app, and we're just going to go and check here. So I've deployed a namespace, endor, like the Endor moon. And I've got a Death Star, a TIE fighter, and an X-wing. And what I want to prevent with my network policies is the X-wing blowing up the Death Star. So we're going to implement some network policies to do that. I've actually already deployed the network policy that I showed you before. And if I try to access the Death Star from the TIE fighter, so I'm running a shell in my TIE fighter and connecting, you can see I'm going to my Death Star service in the endor namespace, then /v1/request-landing; you can see the full path here. And you can see my ship has landed, so access was successful. The TIE fighter was able to access the Death Star. Now, from the X-wing, it times out, because I've applied a network policy to block that traffic. The only traffic which was allowed was from the pods with the empire label. If you recall, in my network policies I was matching based on labels, and my X-wing is not part of the Empire; it's part of the Rebel Alliance, and therefore access is denied. Now, I can actually check this with our observability tool, because when you deploy a network policy, it's natural to make mistakes at first, or maybe you have some security or connectivity issues; you want to be able to verify access and network connectivity.
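For reference, here is roughly what those two policies might look like as CiliumNetworkPolicy resources. The labels and namespace loosely follow the Star Wars demo, but treat the exact values, ports and FQDN as illustrative rather than the lab's actual manifests.

```yaml
# Ingress: only pods carrying the empire label may reach the Death Star on port 80
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-empire-landing
  namespace: endor
spec:
  endpointSelector:
    matchLabels:
      class: deathstar            # who the policy applies to
  ingress:
    - fromEndpoints:
        - matchLabels:
            org: empire           # only the Empire's ships may land
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
---
# Egress: the TIE fighter may resolve names and only talk out to disney.com
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: tiefighter-egress
  namespace: endor
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: tiefighter
  egress:
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"   # allow DNS lookups so FQDN filtering can work
    - toFQDNs:
        - matchName: "disney.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```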
And for this, we use Hubble. Hubble is a tool that comes with Cilium; it's the observability layer. It's kind of like tcpdump or NetFlow, but for Kubernetes. So what I can do is look at the traffic. The command I've just run is hubble observe, and I'm looking for all the traffic from my X-wing pod in the namespace endor to my Death Star pods in the namespace endor, and I'm looking for traffic that has been dropped. And you can see that includes the traffic I've just generated. And all this is built in. I've not had to instrument any of this in my applications; they are completely unaware of it. And even better, Hubble comes with a user interface. So if I look at my endor namespace, I can see how all my microservices are talking to each other, and I've not had to do any instrumentation of my applications. This was built automatically for me. So I can see my TIE fighter, you know, trying to access the internet. I can even filter based on the traffic which was dropped. So all my X-wing traffic was dropped, whereas my TIE fighter is able to access the Death Star as I want it to. Again, that comes out of the box, nothing for you to do. Cool. Anything else you want to add? No, I just think it's fantastic that it actually tells you that a network policy has caused this. Because it's so easy to get things wrong sometimes in a Kubernetes cluster. Being a YAML warrior is all well and good until you misspell something, and then you've no idea why anything isn't working. So getting all this information at your fingertips is fantastic for troubleshooting and observing what's actually happening. And the last thing I'd add on the Hubble UI is that it also gives you visibility at layer 7. So it's not just layer 3 and layer 4; it shows you the path. If you recall, when I did my curl, it was to /v1/request-landing, and I can see exactly that here. And again, I've not had to implement a service mesh or any other tool to get that visibility at layer 7, which is super useful if you're using APIs and you just want to see the traffic between all your different services. Cool. So that's demo one. And how are we doing time-wise? Okay. So I'll be quick. What we also wanted to show you is Cilium's support for Gateway API, which again comes built in. You don't need to install another ingress controller or another tool. Cilium includes a Gateway API implementation compliant with the most recent version, which is 1.0. It supports a ton of use cases. For example, and this kind of shows you how it's like a reverse proxy: if you make a curl command to /foo, it will send it to the foo service; if you do it to /bar, it will send it to the bar service. If you want to use gRPC instead, the Cilium Gateway API can route gRPC requests directly to your gRPC services. If you've decided to move or migrate a service, you can use Cilium to do redirection. So if you have a user who's trying to access the old domain, you can send a 302 redirect code back to the user to tell them where to go. And the other thing I'm going to show you now is load balancing. You can actually use Cilium to do built-in load balancing and traffic splitting, for example for canary testing or A/B testing of applications. So you could say, I want 99% of the traffic to go towards this service, and only one percent to go towards the new version of your application. And again, that's built into Cilium.
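As an illustration of that redirect use case, here is a minimal HTTPRoute sketch using Gateway API's RequestRedirect filter; the hostnames and gateway name are hypothetical.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: redirect-old-domain
spec:
  parentRefs:
    - name: my-gateway                 # hypothetical Gateway
  hostnames:
    - old.example.com                  # requests for the old domain...
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            hostname: new.example.com  # ...are redirected to the new one
            statusCode: 302
```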
All right, let's go to the demo. It will take a second or so. There you go. So here what I've got is a gateway I've deployed, and it has picked up an IP address; you can see the 255.201. Again, this is Cilium assigning the IP address; there's no MetalLB in here. And my gateway is listening for traffic on 443, and it's set up for TLS termination. So the traffic from the outside client to my gateway is over HTTPS, and then the traffic from the gateway to the internal service is over port 80. We also support TLS passthrough, where the whole traffic stays encrypted. We've also got a Gateway API HTTP route, and again, it's attached to my Cilium gateway. And right now, all the traffic is going to go to my Death Star service. So if I go and try this, you can see that if I do a curl from outside the cluster to my gateway IP address, I get a successful reply, and I'm just collecting some information about my Death Star. I can see how many passengers are on the Death Star, the length and the cost. But what you can also see is the hostname, so I can see the name of the pod. It's kind of an echo server; it replies back with its own name. So we're just going to do a quick... The scenario here is, okay, I'm introducing a new version of the Death Star, because the Death Star is often very insecure; it gets blown up by the rebels. So you want to have a more secure Death Star, and I'm adding a Death Star v2 here. But I've not done any load balancing yet. So if I run a loop, I can see how many requests go to the new Death Star and to the old Death Star. As you can see, I know the source, the pod that is receiving my request. So I'm just going to go and... There we go. So we're going to see: 200 queries went to the old Death Star and zero went to the new one. So now I'm going to start introducing some traffic to my new Death Star. What we're doing here is we've got some weights: 90% of the traffic will go to my old Death Star, 10% will go to my new one. And again, we'll run the same kind of traffic generation and we'll go and verify that pretty much, you know, one in ten has been sent to my new Death Star. All right. Roughly 15. So again, we've not had to deploy any other tools to support this functionality. Cilium is much more than just a CNI. Right. So we are almost running out of time. We've been working very closely with the CNCF on a new certification called the Cilium Certified Associate, which is an entry-level certification. You can see here some of the domains and areas that are part of the blueprint: things around network policy, around the architecture of how Cilium builds these kinds of networks, and a few questions around multi-cluster connectivity with Cluster Mesh and BGP. And I've got a session tomorrow at the CNCF Learning and Training Lounge if you want; it's at 3pm, I think. Additionally, there are learning guides being written as we speak. So the learning guides plus all of the labs are effectively a fantastic way for you to learn all about both Kubernetes networking and Cilium networking, and they will help you pass the CCA exam if that's something that interests you. So if you go and do some of our labs, you can earn some of the badges; like I said, we've got over 30-odd labs and badges available. You can find them at isovalent.com/labs, and I will leave you with the feedback link. Again, thank you very much for joining us.
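For reference, a rough sketch of the two resources behind that traffic-splitting demo: a Gateway terminating TLS on 443, and an HTTPRoute splitting traffic 90/10 between two Death Star services. The names, TLS secret and weights are illustrative, not the lab's exact manifests.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: deathstar-gateway
  namespace: endor
spec:
  gatewayClassName: cilium
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate           # HTTPS from the client, plain HTTP to the backends
        certificateRefs:
          - name: deathstar-tls   # hypothetical TLS secret
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: deathstar-split
  namespace: endor
spec:
  parentRefs:
    - name: deathstar-gateway
  rules:
    - backendRefs:
        - name: deathstar         # old Death Star keeps 90% of the traffic
          port: 80
          weight: 90
        - name: deathstar-v2      # new Death Star gets 10%
          port: 80
          weight: 10
```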
We do actually have a couple of minutes if you do have any questions, and there is a microphone moving around. So if anybody does have any questions, please let us know. Does Cilium support IPv6 fully? Yes. Cilium supported IPv6 before it supported IPv4, funnily enough. Thank you. So a quick question. Assuming that I want to implement only the encryption, the mesh basically, but I don't want to change the NGINX that I currently have as part of my API gateway or ingress controller. So the question is, can I have that without replacing NGINX, but still have the traffic from NGINX going to the pods encrypted? Yeah, you can pick and choose with Cilium. So the question is, can I still use my NGINX ingress controller but get some of the connectivity benefits from Cilium? Yes, absolutely. Like I said, you don't have to activate all the features of Cilium; you can use it just for pure connectivity. More questions, specifically for Nico? Yeah, I was wondering how you handle bandwidth and the supervision of bandwidth; do you have a system to balance that? Do you have a system to manage the bandwidth? Yeah, for instance, you have all your pods and all the work goes there, so you want to spread your pods in order to spread the network consumption. Yeah, Cilium includes a load balancer by default. For any of the services, Cilium will load balance the traffic. If we're just talking about bandwidth, we also have a way to limit the bandwidth of a pod. That's another feature that comes with Cilium, which we're not talking about today, called the Bandwidth Manager, and I think it's probably the only CNI that supports it. So if you want to control the traffic and prevent bandwidth starvation, Cilium is also capable of doing that. You presented external load balancing; if we want to manage a protocol other than HTTP, can we do that with Cilium? So the protocols that are supported are pretty much defined by the Kubernetes objects themselves. Kubernetes itself will do TCP, UDP and SCTP; those are the three protocols that services inside Kubernetes are aware of. Yes, that's mainly it. Hi, thanks for the presentation. I have a question regarding the egress gateway. I have a setup where, obviously, I would like to receive traffic and direct it to a service via the egress gateway IP address. Is that possible? So that's traffic coming out of the cluster and then going back in again? Correct. Is it possible? Well, there is a feature coming out, I don't know if it's in the enterprise or open source edition, where you can use the egress gateway with BGP. So BGP can advertise your egress gateway IPs, and therefore this will enable you to access the egress gateway, so traffic will go in both directions through the same IP. That would be brilliant, because that's something we actually need at the moment. We're actually using the egress gateway, obviously because we need to control the IP address the pods' traffic appears to come from, but we really want the traffic to come in on the same IP address. Yeah. I'm pretty sure it's just in the enterprise version, but I can... I've been informed that that's it for questions; we're being kicked out of this room for the next session. Thank you. Anybody who's still around, we'll be at the Isovalent and Cilium booths if you have any further questions. Thanks, everyone.
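As a small footnote to the Bandwidth Manager answer above, here is roughly how a pod's egress bandwidth can be capped when that feature is enabled; the pod name, image and rate are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rate-limited-app                    # hypothetical pod
  annotations:
    kubernetes.io/egress-bandwidth: "10M"   # cap this pod's egress traffic at roughly 10 Mbit/s
spec:
  containers:
    - name: app
      image: nginx:1.25                     # hypothetical image
```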