Alright, so we're going to talk about the thing you may have seen a blog post about, which is, as we pushed towards 1.21, getting ambient working with all the different kinds of CNIs: why we did that, why that was important, what was hard about it, and then a little bit of how it works. I'm Ben Leggett, and this is Yuval. I'm Yuval Kohavi, chief architect at Solo. And I'm a software engineer at Solo, but it was fun working on this with Yuval. Alright. So just to give you some context, this CNI approach was new in 1.21. And for some background on ambient, if you don't know it already: it's an opt-in mode for Istio that removes the need for sidecars. It uses a node-level proxy written in Rust, the ztunnel, to manage Layer 4 traffic and give you mTLS and other things such as telemetry on the node, without the use of sidecars. It's much more efficient. It's simpler, cheaper, faster, and for a lot of people it's going to be a great baseline to get their workloads protected with mTLS and to get basic telemetry out of them without having to inject a sidecar into every single pod. And at scale, this becomes more important. The other nice thing is that you don't actually have to touch application pods to enroll stuff in ambient. It's a label. It's a very touch-free thing. So, for context on what CNIs are: CNI stands for Container Network Interface, and it is essentially the thing in Kubernetes that makes sure pods can talk to each other, that packets can get from pod A to pod B and back. Every time you schedule a pod in Kubernetes, the CNI is invoked and makes this happen. If you have a Kubernetes cluster, whether you know it or not, you have a CNI in there somewhere. There are many, many implementations, and every major cloud provider has their own variant of most of those as well.
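As an aside, if you're not sure which CNI your own cluster runs, there are two quick places to look; a sketch, assuming node access and the conventional default config path (some distributions relocate it):

```shell
# On a Kubernetes node: CNI plugin configs live in /etc/cni/net.d by default,
# and the lexically-first config file is the one the kubelet uses.
ls /etc/cni/net.d/

# Or, from anywhere, look at the DaemonSets most CNIs install into kube-system:
kubectl get daemonsets -n kube-system
```

Either view usually names the implementation (Calico, Cilium, the cloud provider's plugin, and so on) directly.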
So there are a lot of permutations. And critically, yes, the CNI is the thing that enforces network policy. The big question, though, the thing people get confused about, is: is a CNI a service mesh? Is a service mesh a CNI? What's the difference? Are they the same thing? And they're not, actually. They layer on top of each other. A CNI, by definition, doesn't do a lot of things. It's a very basic thing designed to make sure packets get from one pod to another. It is not interested in things like mTLS or telemetry or anything like that. Beyond that, pretty much all a CNI does is enforce Kubernetes NetworkPolicy, which, if you're familiar with it, is a relatively rudimentary API (important, but rudimentary). It's basically making sure stuff can get from one pod to another, and that's where it stops. Some meshes ship their own CNI and build the service mesh bit on top of that, tying it together in a package. Some do it separately. And that's the tack Istio took: keep them separate. You've got a CNI, whatever CNI you have, that's what you're going to use, and Istio wants to sit on top of that and extend it. And that's the deal: everybody's got a CNI. If you've got a cluster, you've got one. There are many different CNIs, and Istio wants to sit on top of them. So what do we do? Especially since they're all very different in how they're implemented. It has to work on every single CNI, and there are many CNIs. And the reason that's hard is because, as with many things, CNI is just a spec. It tells you what this thing should accomplish, but the implementation varies widely, widely between CNIs. There are many, many variations: there's eBPF, you can shunt packets with that, VXLAN, tunnels, whatever. For every CNI implementation out there, how it actually implements packet routing is very, very different.
And this makes it quite difficult to write something that sits on top of them and works with all of them without weird corner cases. So this is the question: you've got a CNI, you've got to make sure packets get from one pod to another. Where do you do that? You've got a couple of options. You can do it down in kernel space with eBPF, and there are some CNIs that do that. There are some where you do it in user space with iptables or something else, in the node's network namespace. There are many places where the CNI can accomplish that packet shuffling. So the question for Istio is: where do we put our hooks? If we want to sit on top of that, we have to actually take packets and wrap them so that we can do mTLS. We have to actually intercept traffic and do things with it, but not interfere with the CNI, whose job it is to move those packets on a very fundamental level. So where do we hook? We could hook in user space, in the node's namespace, with iptables. And that will work nicely, perhaps, if your CNI is eBPF-based: we're in different layers, so we don't conflict. But if your CNI uses iptables, then you get into the whole question of, how do we not step on each other's iptables rules? And if we did it in eBPF, down in kernel space, well, how do you extend someone else's eBPF programs? There's not really a way to do that that's compatible across vendors, or any implementation, really. So wherever we go out here, outside the pod, we're going to step on somebody's toes. There's going to be some CNI that does something that doesn't play nice, or there's going to be a high potential for operational problems, because we're just going to be stepping into somebody else's playground, basically. And I'll just say that we tried. We know this because we tried it.
And yeah, so this is why it was really hard. If we wanted to do either of these things, support CNIs on the node in user space, or support them in kernel space with eBPF, either approach would have a whole lot of corner cases. If we wrote something that did that, it would have to have per-vendor exclusions and workarounds. We would probably have to have multiple implementations on the Istio side, one per CNI approach. It was just going to be a mess. And then we thought: okay, why didn't we have this problem with sidecars all these years? Why wasn't this an issue? And that was specifically because with sidecars, we did all this stuff within the pod, obviously. We didn't have this problem of, should we be outside the pod on the node, or should we be outside the pod in kernel space? We didn't have this problem because we weren't outside the pod at all. And this is naturally what the ecosystem of CNIs has built up around, and it doesn't have any real conflicts, because this is what we've been doing for ten years. So the CNI, again, enforces network policy; it is primarily interested in getting packets from pod A to pod B. So it operates outside of the pods: the space in between pods is what it's concerned with. What happens inside the pod is almost invariably invisible to the CNI, by definition. That's not the space it serves. So the idea is, rather than hook on the node or in the kernel and try to extend those spots, why not just go inside the pod? Then we end up with a model that looks quite like a sidecar in that way, but avoids a lot of the integration snafus we would have if we tried to extend outside the pod in those two spots. So then the question is: how do you do that? The whole point of ambient is that we don't put things inside the pod. We don't inject sidecars.
We're not doing funky things when you schedule pods, updating your manifest and sticking an init container in there, any of that stuff. We don't want to touch your pods. So how do we actually capture traffic inside the pod without mutating your pod manifest or doing anything particularly funky? Is there even a way to do that in Linux? Yes, there is. So get ready, it's Linux time; Yuval is going to show you. Yes. So we're going to do a quick demo. It's going to be a bit underwhelming, because of the simplicity of ambient, essentially. The idea is that we're re-leveraging the existing Istio CNI plugin to provide the Istio ambient ztunnel, the component that sits on the node and shuffles traffic, with access to the network namespaces of the pods. That way the ztunnel can start its proxying from inside the pod's network namespace, very much like a sidecar. So let me switch over. [Adjusts screen sharing.] OK, we'll make do with what we've got here. So first, I have Istio installed here. Let's do a quick look at the pods. You see everything's ready, one out of one, nothing special. Let's do a curl. And this is a vanilla install of Istio 1.21, which was just released. You can see the curl works, and it accesses port 8080. There's nothing here yet. And now let's turn on Istio ambient mode for that namespace. In ambient, the nice thing is you can simply apply a label to the namespace or to the workload, and that will mark that workload for capture. You don't have to have a webhook that updates the manifest and sticks in sidecars or init containers. We're not doing that. It's a dynamic label. You can label and unlabel whatever you want, and it gets captured by ambient.
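The enrollment step being described here is literally one label; a minimal sketch, assuming the demo namespace is called `demo`:

```shell
# Enroll every workload in the namespace in the ambient mesh.
# No restarts, no webhook, no manifest mutation: the Istio CNI agent and
# ztunnel pick the pods up dynamically once the label appears.
kubectl label namespace demo istio.io/dataplane-mode=ambient
```

Because it is just a label, it is also fully reversible, which is what makes enrollment such a low-risk operation.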
And then we'll have mTLS and a little bit of telemetry. I'll make it bigger. So we'll update the label. Here we go. As you can see, if we look at the pods, they're still ready, one out of one, no new container. Nothing happened; it's all the same as it was. But what happened just now is that we've provided the ztunnel with the network namespaces of these pods. And this time, if we curl... OK, it's a bit hard to see, but you can see that instead of port 8080, we're using port 15008 here. That's the Istio HBONE port. So this traffic is now encrypted with mTLS, and all of it without changing anything. All we did was one simple operation, including the namespace in the ambient mesh, and Istio did everything else for us. It provided the ztunnel with the pod network namespaces, set up a redirection rule, set it all up, and it's all just working. And that's what I meant when I said the demo would be underwhelming, because all I did was apply the label. But a lot, a lot of stuff happened behind the scenes, and that's why I have this tcpdump working in the background, so you can actually see that something has changed. Yeah, and that's the main thing: we have taken the pod network namespace, we jump in there with the Istio CNI agent, and we actually set up network redirection inside that pod, even though there's no sidecar. So traffic that enters and leaves that pod is being redirected, under the covers, to the ztunnel on the node. Traffic entering the pod gets captured and sent through the ztunnel, transparently, to the workload. And any traffic leaving the pod gets captured and redirected into the ztunnel before it leaves the pod. So again, all the traffic enters and leaves the pod as before, but it goes through the ztunnel, and the port might be transformed. It looks as though there is a sidecar there, but there is not a sidecar there.
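If you want to see that in-pod redirection for yourself, you can enter the pod's network namespace from the node and dump its rules; a rough sketch, assuming node access, a containerd-based runtime with `crictl`, and a placeholder `<container-id>` you'd look up first with `crictl ps`:

```shell
# Find the container's PID on the node so we can enter its namespaces:
PID=$(crictl inspect --output go-template --template '{{.info.pid}}' <container-id>)

# Dump the iptables rules that live *inside* the pod's network namespace.
# With an injected sidecar you'd see Istio's familiar redirect chains here;
# in ambient, similar in-pod rules exist even though no sidecar container does.
nsenter --target "$PID" --net iptables-save
```

The point the speakers are making is exactly this: the redirection lives inside the pod's netns, so it composes with whatever the CNI is doing outside the pod.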
But traffic is still ingressing and egressing from the pod itself, even though it is now wrapped in mTLS and going through the node proxy. Exactly, everything's going through the ztunnel. These pods are fully in Istio right now; any Istio policy will work on them. There's no need to restart, no need to do anything. And we're so confident in that approach that we actually did a live demo, and it worked. Yes, and he sat here trying to pull containers over conference Wi-Fi, which is always a bad idea, but we had to. But yeah, that's primarily it: traffic is entering and exiting those pods, but now that they're enrolled, everything is wrapped and everything is going through the ztunnel. And the nice thing about this is that things like network policy can still work, because even though we're in the mesh and it's mTLS, the packets are still leaving and entering from the pod itself, even though once they hit the pod, they go through the ztunnel and back out, or back in. So a network policy that applies to a pod still works here, although we might change the egress port once we're tunneling: the whole idea of ambient is that the mTLS traffic is tunneled over 15008. So obviously, if you had a network policy that expected plain-text traffic egressing, it would not match, but that's good, because now we're in Istio land, we're going through a tunnel, and Istio's policy can apply to that tunneled, encrypted, and protected traffic. Yeah, that's pretty much it. All right, do we have any questions, if we have time for that? Yeah, this concludes our demo. And just as a recap: the nice thing about this in-pod approach is that it works with all the CNIs. We've tested all of these, AWS, Google, whatever, Calico, Cilium, it doesn't matter, because they're all operating outside of the pod.
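To make the port-change point concrete: if you were locking ingress down with NetworkPolicy, you'd admit the HBONE port alongside the app's own port. A hedged sketch, with a made-up namespace and label purely for illustration:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-and-hbone
  namespace: demo              # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: my-app              # hypothetical label
  ingress:
  - ports:
    - port: 8080               # the app's own plain-text port
      protocol: TCP
    - port: 15008              # HBONE: in ambient, mTLS-tunneled traffic arrives here
      protocol: TCP
EOF
```

The CNI keeps enforcing this at its layer, outside the pod, while Istio's own policy applies to the decrypted traffic inside the tunnel.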
So there are no conflicts, there's no worrying about stepping on each other's toes, and Istio can compose with whatever CNI you've already got and give you the mTLS you want and the policy enforcement you need on top of it, which is the nice thing. Yeah, so if you tried ambient before 1.21, I suggest you try it again, and you won't have the problems with your CNI. Yeah, it should work with everything, and if it doesn't, file a bug and we'll fix it. All right, any questions? Oh yeah, definitely, you remove the label. The question was, can we remove the pod from the mesh? If you remove the label from the namespace, the pod will be removed; let me show that. Now, this is a very technical thing he's about to do here, so be careful, we're gonna find out. Let's see... so, you know, pod in the mesh, out of the mesh, very easy, just one line of YAML. Next question: can you have a pod in a labeled namespace that doesn't do this? Yes, you can override these settings at the pod level. Yes, please. That's a great question: is capture guaranteed at startup? It is guaranteed, and there's one case where it might not be, which is when you first spin up a node, and we have fixes for that as well. When a new pod starts up, this redirection setup is part of the CNI setup, so all of this happens for a new pod before the pod starts, before any application code runs. So there's no wobble. Can you migrate from sidecars to ambient without downtime? Of course, yes. Ztunnels talk to each other over HBONE, and we've added HBONE support to sidecars as well, so they can all talk to each other, and you can have a gradual migration. All of that should work. You would have to restart a pod that currently has a sidecar to get rid of the sidecar, but yes, you can do it piecemeal. If you want to, yeah, move a workload over at a time.
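Both the un-enroll and the per-pod opt-out mentioned here are again just label operations; a sketch, assuming a namespace called `demo` and a pod called `my-pod`:

```shell
# Remove the whole namespace from the mesh (a trailing '-' deletes a label):
kubectl label namespace demo istio.io/dataplane-mode-

# Or keep the namespace enrolled, but opt one pod out with a pod-level override:
kubectl label pod my-pod -n demo istio.io/dataplane-mode=none
```

That symmetry is the "one line of YAML" being shown: enrollment state is entirely declarative labels, with no pod restarts in either direction.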
Yeah, so the question is, does it apply to gateways as well, like ingress and egress gateways, and how about the certificates: do you use the current way of certificate provisioning? So currently, the gateway is Envoy, so we don't need the ztunnel there right now, but it's something that we're talking about maybe changing, having it also be ztunnel-based. And if you have a different gateway that's not Envoy, yeah, you can use the ztunnel with it. As for the certificates: currently, I think we have specific support for that... am I lying? In ambient, we do not currently have that specific support. Oh yeah, but that's on the agenda. It's in the works; you can use the defaults today, and as we expand, we'll expand that out to other providers. Next question: does ambient fully support multi-cluster setups, through east-west gateways, for example? Not yet, I believe; John may correct me if I'm wrong here, but we have plans for that as well. Yeah. So what John said is: multi-cluster, but not yet multi-network. If the clusters are all on the same network, that works, but multi-network, where you may have overlapping IPs, is on the roadmap. All right, I think we'll wrap it up. One more question? Okay, one more. So, sidecars can actually work on VMs outside of Kubernetes, right? What about ambient mesh? How do you make it work with machines outside of Kubernetes? Yeah, so how does the ztunnel work with VMs? Currently in Istio, the way you would do that is probably to run a ztunnel inside the VM context. It's a little shonky right now, but we are working on making the VM integration story a little better. And in theory, it could work in a very similar way, because once you've got packets inside the context, the network namespace, the VM, whatever, we can do the same model. The deployment on the VM side would be a little bit different, but the network topology would look the same. Or could.
Alright, I think we're good. Thank you, everybody. Thank you, guys. [Cheers and applause]