Alright, I think it's about time to get started. Thank you all so much for coming to the very last session of what I think has been an amazing week. Everybody had a great week? Yeah?! Excellent, glad to hear it. So, you have come to a session about Cilium networking. Was anybody at CiliumCon on Tuesday? Quite a few hands, excellent. There you will have heard about Cilium Mesh. Yes, we have lots of meshes, so hopefully what I'm going to show today will clear up some of that mess. Alright, so I think we're all familiar with the idea of Cilium connecting pods within a Kubernetes cluster, and a lot of you will have come across cluster mesh, which is how we connect multiple clusters together. What I want to talk about today is how we take that one step further: connecting workloads that might not be running in a Kubernetes cluster at all. That's what we're calling Cilium Mesh: the ability to connect clusters and external workloads using the same Kubernetes-style services and endpoints that Cilium already understands, and to secure those connections with the same network policies, without inventing any new abstractions. It's become a bit of a tradition that Cilium talks should have some Star Wars references. I'm not sure it's exactly canon, but we're basically going along the lines of The Force Awakens. Right, so services and endpoints are Kubernetes concepts, and I think you're all pretty familiar with services. Maybe we should just have a quick look at what we mean when we say endpoints. So we're going to go to Jakku, the planet Jakku, and see what the resistance endpoints look like on Jakku.
BB-8 is going to help us out here, so let's check what's actually running in this cluster. I have a kind cluster on Jakku, and I think BB-8 is here, and I'm going to deploy the resistance on Jakku. Apply, even. It's all live, right? You'll tell me when I do things wrong. That is going to create some pods, and we have a service. If I exec into BB-8, I should be able to curl the resistance. If we look at the pods and we look at their IP addresses, we can see there are three pods and they've got these IP addresses ending 65, 203, 47. If we look at the Kubernetes endpoints, we'll see there's a service called resistance and it has those three IP addresses, 203, 65 and 47. That corresponds. We also have CiliumEndpoints that represent the same thing. This is Cilium's representation of those exact same endpoints. We can see the same IP addresses there: 65, 203, 47. Good. This is actually R2D2. R2D2 is in the other cluster, which we'll go to in a moment. He can look up the resistance service and get an IP address, which is the service's IP address, and then that request gets load balanced to one of those pod endpoints. That's a fundamental thing in Kubernetes; I haven't shown you anything new there. Cilium knows about those services and it knows about those endpoints. I showed you the CRD that corresponds to the endpoints. We could also exec into the Cilium pod and see the same endpoint list; they would look like this. That's just basic connectivity inside a cluster. What do we need to do if we have multiple clusters? The resistance base is on the planet, I think it is a planet, D'Qar. We want to be able to have a resistance service there as well. Let's go to the kind cluster which is on D'Qar. Here is where R2D2 is, I'm pretty sure.
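For reference, the resistance workload and service being deployed here would look something like this minimal sketch. The labels, image, and ports are assumptions for illustration, not the actual demo manifest:

```yaml
# Hypothetical sketch of the "resistance" deployment and service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resistance
spec:
  replicas: 3               # the three pods whose IPs end 65, 203, 47
  selector:
    matchLabels:
      app: resistance
  template:
    metadata:
      labels:
        app: resistance
    spec:
      containers:
      - name: resistance
        image: nginx        # stand-in for the demo image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: resistance
spec:
  selector:
    app: resistance         # Kubernetes fills the Endpoints from matching pods
  ports:
  - port: 80
```

Kubernetes then populates the `resistance` Endpoints object with the three pod IPs, and Cilium mirrors them as the CiliumEndpoints shown in the demo.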
A bad guy as well. Let's make sure there is some resistance running here on D'Qar. That's going to be very similar to what we just saw. Now we have those two different clusters. One thing that's different about what I just deployed on D'Qar is that it's annotated to say it's a global service. This is all you need to do in cluster mesh to say that your services are global: they're accessible from all of the clusters in the cluster mesh. I do need to go and make sure that's true on Jakku as well. Let's just uncomment that. I'm going to have to go back to the other cluster. That was back on Jakku. We just need to reapply that resistance file. We've made a very small annotation on the service. Now, from Jakku, if BB-8 sends a message to the resistance, sometimes he's going to get a response from the resistance base on D'Qar, and sometimes it will come from, hopefully, eventually, yes, the original deployment that we saw on Jakku. That service is accessible from either cluster, but they're still just services. What about the endpoints? Well, let's look at the service first on Jakku. The service looks exactly the same; it's called resistance. Let's look at the endpoints. From Kubernetes' point of view, it's still only got these three endpoints, and those three are running on Jakku. That seems a little bit strange. Let's look at the CiliumEndpoints. That also still only has the three endpoints. What's going on? Let's have a look at what's happening inside the Cilium pod, the cilium-agent, that's the name of this agent. I hope I've actually applied that correctly. I'm going to exec into the Cilium pod, so we can run commands directly on Cilium here. Let's look at the service list from inside the agent. Okay. We can just check that this is the right service that I'm looking at: the resistance service, ending in 64, corresponds to this here. From the cilium-agent's point of view, we've actually got six endpoints.
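The "very small annotation" that marks a service as global across the cluster mesh is `service.cilium.io/global`. Applied to the resistance service in both clusters, it looks roughly like this (the spec details are assumed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: resistance
  annotations:
    service.cilium.io/global: "true"   # expose this service across all clusters in the mesh
spec:
  selector:
    app: resistance
  ports:
  - port: 80
```

Services with the same name and namespace in different clusters, all annotated as global, are treated by Cilium as one logical service, which is why the agent's service list shows six backends even though each cluster's Endpoints object lists only its local three.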
Three of them are marked with cluster 2, the second cluster in the cluster mesh. It's a very simple change from Cilium's point of view: these are just IP addresses that have to be reachable. BB-8 on Jakku is able to reach the service in either location. If we went to D'Qar, we could actually change things to have local service affinity. I'm going to go back to the planet D'Qar; this is where R2D2 is hanging out. I need to just reapply this change that I've just made so that it's local affinity, the resistance YAML for D'Qar. What that's going to mean is that if R2D2 asks for assistance from the resistance, we're going to prefer getting a response from the local cluster. Let's do that. R2D2, call the resistance. These should always say that they come from the base on D'Qar. However many times we do this, we'll never get load balanced to the service on Jakku. So we have this ability to load balance across multiple clusters, or to prefer local services. We could also prefer remote services, which could be really handy if we're trying to turn down the service on the local cluster in favour of moving to a different cluster. From Cilium's point of view, they're just endpoints. Really simple. We don't need any special abstraction to understand how to load balance to these endpoints that are not within the local cluster. We can extend that to environments outside of Kubernetes. All we really need is an IP address that we can route to. I don't know if you remember, but at the end of The Force Awakens we find that Luke Skywalker has been hanging out on the planet Ahch-To. Ahch-To is not a cluster. He's pretty isolated; it's a VM all on its own. Let's see Ahch-To. If I look at my Docker containers, we can see there's the D'Qar cluster, there's the Jakku cluster, I have a local registry, and here is a VM. This is where Luke is hanging out. How can we communicate with Luke? The D'Qar cluster can actually route to Luke. What we need is an endpoint in D'Qar that just happens to be external to the cluster.
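The local-affinity preference described here is also expressed as a service annotation, `service.cilium.io/affinity`, which can be set to `local`, `remote`, or `none`. A sketch of the annotated service (other fields assumed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: resistance
  annotations:
    service.cilium.io/global: "true"
    service.cilium.io/affinity: "local"   # prefer backends in this cluster while healthy
spec:
  selector:
    app: resistance
  ports:
  - port: 80
```

With `local`, remote backends are still used if no healthy local ones exist; `remote` inverts the preference, which is the drain-to-another-cluster scenario mentioned in the talk.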
I need to... let's go back here. Make sure I'm in the right context. I am in D'Qar. This is what I need to do: I'm going to add... I need to run this inside the Cilium pod. Let me just get the pod and exec into there. This IP address here, 172.19.100.2, that's the Ahch-To VM. That's created an endpoint. If we look at those endpoints, they're quite hard to read because there's a lot of text going to come out here, but we should be able to see, somewhere up here... there it is. There's our Jedi, Luke, who happens to be hanging out at this IP address. R2D2 can't route directly to that, so he needs a service. We need a service. We're not going to have a pod; there's just going to be a service that we're going to deploy. Come on. There we go. We're going to apply Luke. Now we've got a service called luke, and that should mean that if we exec into R2D2, we should be able to reach Luke. There we go. Great. We've been able to get a response from that workload that happens to be running in a VM, and all we had to do was make sure there was an endpoint associated with that VM's IP address. Here's the slide version, for if it didn't work and the demo gods weren't smiling on us. We've basically just got a service with an IP address, and that IP address corresponds to an entry in the service list. In fact, we could just look at that. Let's get that Cilium pod again, exec into it again, and we should be able to see the service list. There's our service that corresponds to Luke, right at the end there, and it's load balancing to one endpoint, which is the VM. Okay.
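The demo created the external endpoint from inside the Cilium agent, but the rough shape of the idea can also be expressed with standard Kubernetes objects: a Service without a selector, plus a manually managed Endpoints object pointing at the VM's IP. A sketch, assuming port 80 for Luke's workload:

```yaml
# Selector-less service: Kubernetes will not populate its endpoints automatically.
apiVersion: v1
kind: Service
metadata:
  name: luke
spec:
  ports:
  - port: 80
---
# Manually managed endpoints pointing at the external VM.
apiVersion: v1
kind: Endpoints
metadata:
  name: luke               # must match the Service name
subsets:
- addresses:
  - ip: 172.19.100.2       # the Ahch-To VM
  ports:
  - port: 80
```

This is the "yes and no" answer given in the Q&A later: the native mechanism exists, but the demo's endpoint was configured directly in the agent, reflecting the experimental state of the feature at the time.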
That could be really useful if you wanted to connect to a workload that's external. But what if you then want to migrate that workload from its VM: you turn it into a containerized workload and you want to start running it under Kubernetes. We could actually do something like this, where we also deploy it as a pod, and we can start load balancing between the VM and the pod, because it's just an endpoint. It doesn't matter from the service's perspective, or from Cilium's perspective: it's just an IP address. So let's do that. We have here a Jedi pod, which is basically Luke. He's going to say something slightly different this time when we deploy that. Just make sure I'm in the right context. Yes. So, the Jedi YAML. Hopefully now, if we are R2D2 and we talk to Luke, we sometimes see the message from the local pod and sometimes see the message from the VM. Let's run it a few times. There we go. So we've got load balancing between a workload on the VM and a pod running the equivalent service locally in the cluster. One of the really nice things about doing this with Cilium is that we can protect those flows, the network flows between workloads, whether they're external or within clusters, locally or remotely, using network policies, because the network policy doesn't care whether that endpoint is local or remote. It just knows it is an endpoint, and the policy either applies or doesn't. So we could do something like this, where we ensure that only resistance traffic can flow to those endpoints. So let's apply it. I've got a cluster-wide network policy we can apply, and that's going to make sure that only resistance containers will be able to communicate with Luke. I'll just show you that I have both Kylo and R2D2 here, and if I am acting as R2D2, you've already seen this, we should be able to speak to Luke.
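A cluster-wide policy along the lines of the one applied here might look like the following sketch. The `app: jedi` and `org: resistance` labels are assumptions standing in for whatever labels the demo manifests actually use:

```yaml
# Hypothetical sketch: only workloads labelled as resistance may reach Luke.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: resistance-only
spec:
  endpointSelector:
    matchLabels:
      app: jedi              # selects Luke's endpoints (pod and VM alike)
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: resistance      # BB-8 and R2D2 carry this label; Kylo Ren does not
```

Because the policy is evaluated against endpoint identities rather than locations, the same rule covers the in-cluster pod and the external VM endpoint, which is why Kylo Ren's request simply hangs when its packets are dropped.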
It's all working fine, but if I am a member of the First Order, as Kylo Ren is, then that's just going to hang, because the packet's been dropped due to network policy. Okay, so now I'm going to turn to a different demo that's been set up by my colleague Martinez, where he's actually got this running partly in Google Cloud and partly in AWS, using this exact same endpoint feature to have client VMs on one side of what we're calling a mesh tunnel, and a workload, an NGINX server, running in a completely different cloud. So let's see if we can see this working. Okay, so here's what's running in GCP. We can see an NGINX service. I've also got AWS, and if I look at the services there, we can see again an NGINX service. But what we're going to do is communicate from Google Cloud via this transit VM to the AWS cluster. We can't go directly from these VMs to this Kubernetes cluster; we're going to go via this VM. Because the transit VM is on Google Cloud, we're going to use the service address on Google Cloud from these two different clients. So that should mean, here's one of those VMs, called good client. Let me just get that IP address. That's Google Cloud. No, it isn't, that's AWS. This one's Google Cloud. So I should be able to get an NGINX response from that address, and we do. Whereas if I go to the bad client and make the same request, that's timing out. And again, that's because of network policy. We can also see that, if I find the right screen, in the Hubble UI. And yeah, there we go. We can see on the left here, it might be a little bit small, let's try to make that a bit bigger. We can see some traffic from the good VM, and the bad VM disappearing quite quickly now. There, we can see the packets being dropped from the bad VM. Okay. That was amazing. The demos all worked. Nothing broke.
So, something that I haven't shown you there is this idea of authentication and encryption. And I'll be honest, that is not quite ready to demo, but we're making a lot of really good progress here. You may well have seen us talking about this next-generation mutual authentication and encryption. What's happening here is we use an identity management system; SPIFFE is going to be the first integration here, to get identities for the workloads that are going to communicate. And those might be services inside the cluster, they could be external services, it doesn't really matter, because again, it's just about having an identity that we can associate with the endpoints that back a service. So we have these certificates, we do a handshake at the start of the connection, and then we can use that to establish authentication and inject the resulting keys into the kernel, so that we can use them for encrypting the traffic between those workloads. It doesn't matter whether they're in cluster or out of cluster. You may have seen us talking about this in the context of service mesh, but it's useful for so many other scenarios as well. So that's coming along quite nicely. The data path part of that is already in Cilium 1.13, and there's more work afoot on the configuration side, the control plane side if you like. What's going to be really nice about this is that you can specify the requirements for authentication and encryption as part of the network policy. So again, we're not having to create a whole load of new abstractions. We can use the existing abstractions, but extend them to say: not only am I going to allow traffic between these endpoints and those endpoints, but I'm going to require that communication to be authenticated, and optionally encrypted. We do have users who actually want it to be authenticated, but not necessarily to pay the encryption cost.
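Expressing the authentication requirement in the policy itself has since taken roughly this shape in Cilium, with an `authentication` field on the rule. At the time of this talk the control plane side was still in progress, so treat this as illustrative, and the labels here are assumptions:

```yaml
# Illustrative sketch: allow resistance traffic, but require mutual authentication.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: require-mutual-auth
spec:
  endpointSelector:
    matchLabels:
      app: resistance
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: resistance
    authentication:
      mode: "required"    # enforce the SPIFFE-based handshake before allowing traffic
```

Encryption of the traffic itself is configured separately (via Cilium's transparent encryption), which matches the point in the talk about users who want authentication without paying the encryption cost.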
So that's another nice feature about this next-gen approach: being able to independently have the encryption, or optionally have the encryption, I should say. The last thing that I'm going to speak about that you might want to do, in terms of connecting to external workloads, is the idea of advertising those services over BGP networks. Your clusters could be very distributed and using BGP to communicate between them. So, for example, when any member of the resistance wants to go to the cantina on Takodana, they probably have to use BGP to get there. So, wrapping up pretty quickly today, but I guess that means we'll have time for either questions or you get to go home. Wrapping up, what do we have? We have the ability to connect workloads. We've always had the ability to connect workloads with Cilium, but now those workloads could be in any Kubernetes cluster or in any non-Kubernetes environment, so long as we have an IP address that we can reach them through. That could be IPv4 or IPv6. That could be in your choice of public cloud, or it could be in an on-prem environment. You've seen GCP and AWS there communicating using Cilium Mesh. All secured with network policies, and the work is coming along very nicely to use that next-gen mTLS to provide the authentication handshake and the encryption between those workloads. Thank you very much. Actually, did anybody manage to pick up some books, either my book or Natalia's book, over the course of the week? Excellent. Good. If you didn't manage to pick up a physical copy of the books today, you can download them from isovalent.com, and that'll give you some insights into the eBPF technology that underpins Cilium.
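The BGP advertisement mentioned above is configured in Cilium with a `CiliumBGPPeeringPolicy`. A minimal sketch, in which the node label, ASNs, and peer address are all assumptions for illustration:

```yaml
# Illustrative sketch: peer selected nodes with an upstream router and
# advertise the pod CIDR over BGP.
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: advertise-cluster
spec:
  nodeSelector:
    matchLabels:
      bgp: enabled               # assumed label on the nodes that should speak BGP
  virtualRouters:
  - localASN: 64512              # assumed private ASN for the cluster
    exportPodCIDR: true          # advertise this node's pod CIDR to peers
    neighbors:
    - peerAddress: "10.0.0.1/32" # assumed upstream router
      peerASN: 64512
```

This is the mechanism behind the "BGP connections" labs mentioned at the end of the talk; once routes are advertised, the external endpoints and services work exactly as in the earlier demos, because all Cilium needs is a reachable IP address.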
The other thing I'll mention on this slide is isovalent.com/labs, where there are a lot of really great sandbox lab tutorials that can walk you through things like the BGP connections that I didn't have time to show you today, and many, many other aspects of Cilium. With that, thank you very much, and I hope you've had a wonderful KubeCon. I guess there is time for some questions if anybody has some. I'm not sure how we're supposed to handle them. Apparently there are microphones on the two sides. Am I audible? Yes, you are. My question is regarding connecting to an external service. You showed you have to create an endpoint using Cilium. Is it possible to do it in a native way, with Kubernetes endpoints? I'm going to say yes, yes and no. There's the concept of an external workload, where you might run a Cilium agent co-located with that workload, in which case, yes. What I did there, I configured it manually, and I think that probably speaks to the experimental nature of this. There will need to be some control plane to configure these endpoints in a more usable fashion. Thanks. So, supposing you have a service that is redundantly served by two different clusters to the external world, and let's say you want to expose those services through ingresses in either cluster. Could the ingresses in either cluster connect seamlessly to the back-end services in both of them? Yes, they could. I'm pretty sure we have an Isovalent blog post that shows exactly that, because I'm pretty sure I saw the diagram in the last few days, and it has exactly that: two different ingresses into two different clusters, backed by services. Should the ingresses be Cilium ingresses, or could it be any ingress? Is it transparent to the ingress? Yes, transparent to the ingress. Thank you very much. Yes, just to expand on that: from the ingress's point of view, it's just talking to a service, so it doesn't matter what the ingress is; it's just a service underneath.
Again, these are Kubernetes' fundamental concepts. So, thanks for the presentation, really good stuff. In the demo where you were setting up the endpoint for Luke outside of the cluster, and you had to set up a service as well, can this service be a headless one, or should it be a ClusterIP service? It could be headless. Yeah, I don't think it matters. I mean, it depends how you're going to address that service, but yeah. I mean, what we have tested before with the Cilium service mesh, it wasn't able to split traffic and load balance between different ports behind the service, between different services, when the service in front of them is headless. I'm surprised by that. I'm not going to say I've tried it myself. It seems a little surprising, but we can take that offline and figure out if there's a reason for that. Yeah, sure. Okay. Sorry, one over here. So, for the example with Luke, who was outside in a VM, I assume that in that example the pods directly talk to that IP of Luke's VM, without even introducing the service? No, it was going via the service. But the IP was not directly routable, because there was nothing running in Luke's VM, right? So in Luke's VM... we're trying to get to the... I wonder if I can find the diagram. Yeah, let's go to this one here. So, Luke's VM, the Ahch-To VM, has an IP address. We put a service in place so that R2D2 could use DNS to look up the IP address of the service, and then the service was backed by the endpoint that happened to be external. Okay, so the question is, if R2D2 knew the IP beforehand, it could connect directly, right? Oh, I see. If he knew the IP directly, yes, if it's routable. Because we don't create anything special; the nodes of the cluster have routes to that. Yeah, if it's routable from there.
So the follow-up is: could we do something so that the port on Luke's VM is not open, so it's not reachable directly, even if you know the IP? So somehow another Cilium agent on Luke's VM advertises routes, but it's not a Kubernetes cluster. Would that be possible? You'd have an agent inside Luke's VM to join the network of the cluster. Yeah, so you could. I mean, another approach would be, and we do have this in the external workloads approach, to have a kind of Kubernetes cluster running, but without the workloads necessarily being pods. Okay. You know, there could be host-network-accessible addresses there. Yeah, thanks. Hi, so is Cilium Mesh a new feature, or is it the name for all of these features you were presenting? That's a really great question. I think we are experimenting with that name. I think of it as covering the whole thing, because that's what Cilium should be about: connecting all your workloads without you having to think too hard about exactly what the mechanism is between them. In order to implement what we're calling Cilium Mesh, there is a little bit of change in the ability to add those endpoints, and there's still work in progress around the control plane for those external endpoints. But yeah, to my mind, the mesh is the whole thing, but we'll see what settles. Thank you.