Thank you so much. Hi, everyone. My name is Idit, as I was introduced, and today we're going to talk about reducing the operational complexity of a service mesh. So again, I'm Idit Levine, the founder and CEO of Solo.io. Yuval, you want a turn? Yeah, I'm Yuval Kohavi, the chief architect at Solo.io. Yeah, and honestly, let's just dive into the technology, because that's what we're all here for. But first, let's step back for one second. The service mesh is a really, really important piece of infrastructure and something a lot of people are familiar with today, but let's take a step back and verify that everybody understands why the service mesh exists and what its job is, and then we'll talk about its future. In a nutshell, it's very simple. You have two services, service A and service B, and a request goes from service A to service B; that's the basic flow. Now, before the service mesh, what we did to make that work, along with all the operational concerns around it, was put two things inside each microservice. We put the business logic of the microservice itself, the application, but we also had to embed operational code: code responsible for logging, code responsible for things like security or routing and so on. That lived inside the microservice, which means that every time you wanted to upgrade your operational code, to change how the application routes traffic, how it's secured, or how it's observed, you also had to redeploy your business application. They went together; they were coupled. When you deploy one, you deploy the other. When you upgrade one, you upgrade the other.
And the idea of the service mesh was to abstract that away: leave the microservice with only the business logic of your application, and put something next to it that's responsible for the operational code. That something is a proxy. That's the implementation detail of what the service mesh brings. As I said, it focused on four things from the beginning, and honestly, the people who did the work on the service mesh nailed it from the start; these are exactly the problems it set out to attack. The first is everything related to application routing: how two microservices communicate with each other. The second is everything related to observability: now you have a lot of little microservices and a lot of replicas of them, so it's not very clear where a request went, how you collect the logs, how you understand what's going on, how traffic is flowing through the application. The third is anything related to resiliency: retries, fault injection, timeouts, all of this. And the last one is everything related to zero-trust security: mTLS, encrypting application traffic as it goes over the wire. That's what the service mesh takes on, and here's how it actually works. Again, I know this is very basic and I'm sorry, but I need to level-set. There is a proxy that runs as a sidecar to the service: you put it in the same pod, and we tweak the iptables rules so that all traffic in and out of that microservice goes through that sidecar proxy.
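As a rough illustration of the sidecar model (this is a simplified sketch, not real injection output; the names, images, and versions are illustrative, and real sidecar injection adds considerably more configuration), an injected pod ends up looking something like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: service-a
spec:
  initContainers:
  - name: istio-init            # rewrites iptables so all pod traffic is
    image: istio/proxyv2:1.15.0 # redirected through the sidecar proxy
  containers:
  - name: service-a             # the business logic only
    image: example/service-a:1.0
  - name: istio-proxy           # the sidecar: routing, telemetry, mTLS
    image: istio/proxyv2:1.15.0
```

The key point of the talk is visible right in the spec: the operational code (istio-proxy) and the business code (service-a) share one pod, so their lifecycles are coupled.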
Okay, now you do this everywhere, so you have a lot of sidecars next to a lot of microservices. The thing is that these proxies, which are usually Envoy proxies, are extremely powerful; they can do a lot of stuff, but they're not really smart on their own. You need to tell them what to do, and that is exactly the responsibility of the control plane. The proxies are what's called the data plane: when a request comes in, the traffic goes only through the data plane; the control plane is not involved whatsoever. But for the data plane, the proxy, to know what to do when a request comes in, it gets its configuration from what's called the control plane. If you know Istio, for instance, that's istiod. It collects data, watches your environment, takes the configuration from the user, watches your secrets, and every time something changes, one of those configurations, something in the environment, a microservice going down and coming back up, it takes that change, translates it, creates a snapshot for the Envoy proxies, and pushes it out. So all the proxies know what to do when a request comes in, and the control plane is not on the request path at all. Okay, so that part is really trivial. Again, how does it work when a request wants to go from one app to another? The request first goes to the local sidecar, which does whatever translation and processing you need; then it goes to the sidecar on the destination side, and then to the microservice itself. That's how it works. Now, with this model there are some potential problems. Every service mesh you know today works this way, and it's something people run in production.
We have hundreds of customers, and they all run it very successfully in production. But there are also some challenges, challenges we help our customers overcome, and there are a lot of workarounds. In a nutshell, the challenges are around cost overhead, operational complexity, and performance. Let me give you some examples and a glimpse of the problems. For instance, on operational complexity: first of all, when you deploy the mesh, you need to redeploy all your applications in order to inject the sidecar. And if you want to upgrade, say there's a CVE in Envoy that you need to patch, you again have to redeploy the application to replace the sidecar. But here's what can happen in that case: your application comes up, let's assume it's something that talks to MySQL, and it tries to connect to the outside world. Sometimes the sidecar takes a little while to come up, and the application can be up before the sidecar itself. If the application wakes up, tries to connect, and can't, sometimes it crashes. It doesn't happen with every application, but it does with some, and that's very problematic, because then you've created a crash loop that you somehow need to overcome. So again: you have a CVE, it's something you definitely need to fix, and fixing it means redeploying the application. That's not a simple thing; it's a challenging problem, specifically because the application is owned by another team, usually the application team, while you're maybe the platform owner. How do you coordinate that? And it's a CVE; you really need to fix it as soon as possible. So that's the first one.
The second example of a problem: let's say you're running a Job. In Kubernetes, this is something people just do. The thing is that next to your Job you now have a sidecar, and when the Job finishes, the sidecar doesn't finish. It stays there, which means your pod is never going to complete and be cleaned up. So you can find yourself with a lot of pods hanging around in your infrastructure with basically just a sidecar in them, doing nothing. That's another problem that can happen. The third one is everything related to latency. Looking at measurements we did internally: two microservices talking directly, with no service mesh, see something like two milliseconds of latency, while with sidecars we measured something more like five milliseconds. So there is latency being added. You're getting a lot of benefits from the service mesh, I want to be very, very clear here, but it also costs you something. And talking about cost: of course, putting a sidecar next to every microservice in your infrastructure is very expensive, right? Okay, so here's how this came together; it's actually an interesting story. We at Solo had been working on this for a long time, probably over a year. I gave a talk at IstioCon about whether we can get rid of the sidecar, and we laid out some architectural options for how something like that could happen. And while we were doing it, I met with Louis Ryan, one of the founders of Istio at Google. He asked me, "Are you doing what I think you're doing?" and I said yes. And I asked him, "Are you doing what I think you're doing?" and he said yes. We understood at that point that we were working in parallel.
Solo and Google were working on essentially the same implementation, so we decided to join forces, and last Wednesday we announced it. We're calling it ambient mesh. Now, it's not a new mesh. It's Istio; it's a mode inside Istio, which is very important to understand. The sidecar is still part of Istio; it's not going anywhere, but this is another mode that we're offering, which we call ambient mode. The idea, again, is to reduce cost, simplify operations, and improve performance. That's the target we set for ourselves. Okay, so let's understand how it works. There is no sidecar anymore; we got rid of it. The sidecar is what causes this dependency, this awareness of the application to the mesh, and we don't want that. What we actually want is for the service mesh to be transparent to the application; usually they're not even owned by the same team, so it's very important to make sure they're not coupled. What we do instead is put a proxy on each node. It's called ztunnel, for zero-trust tunnel, and there's one per node. Its responsibility is extremely simple. If the only thing you're interested in is mTLS and everything related to zero trust, and honestly a lot of our customers are, then when a request flows, it goes first to this layer 4 proxy. The layer 4 proxy can apply layer 4 policy, give you layer 4 metrics, and do the mTLS encryption. It creates a tunnel, using HTTP CONNECT, and goes all the way to the ztunnel on the destination node, and from there to the application itself. So again, what's important here is that there is no sidecar, and everything we do on that proxy is layer 4 only. We use HBONE to do the tunneling.
And this is very important: you get all the zero-trust encryption and everything you need, but there is no sidecar, just one proxy per node. Now, this is fantastic if all you want is layer 4. Layer 4 is relatively simple; there's not a lot of complex stuff happening there, and we feel very, very comfortable with it being shared per node. What we do not feel should be shared per node is everything related to layer 7. We believe layer 7 should not be multi-tenant; it's a very dangerous concept. The reason, honestly, is the noisy neighbor problem. At layer 7 you potentially want to run WASM extensions, and you can do very complex things like calling out to an external service, things that take more time. Running that as one shared, multi-tenant proxy is a dangerous thing to do, because it means there are noisy neighbors: some applications will potentially suffer because their neighbors take more resources. So we created a concept we call the waypoint. A waypoint is a layer 7 proxy that can live anywhere; it's really your decision where it lives. At Solo we have some ideas, and we're building some tooling to help people decide where to run it, but in a nutshell it can run off the node or on the node; it's up to you. And we take those layer 7 proxies and run one deployment per destination service account, which means it's not shared; it's dedicated to one specific service account. What we also did in the Istio community is try to move as much functionality as we can from the client side to the server side, so the client doesn't have to do it. And what happens in that case is very simple.
A request still goes to the layer 4 proxy for everything related to security and encryption. Then, if there is layer 7 policy to apply, and a lot of the time there isn't, it goes to the layer 7 waypoint proxy, which does whatever you need: timeouts, fault injection, whatever you want. And then it goes to the destination's layer 4 proxy and directly to the application. That's the idea of how we separate the layers. Now, the beauty of it is this: with a lot of the people we see adopting a service mesh, usually a lot of them at first only want the zero trust. So this is a very easy model to adopt, and then incrementally they can add more and more features; it's a very nice way to do this. And there's one thing worth noticing, which is very important. Look, we've been working with Istio, and with Envoy even before we worked with Istio, for over five years. We went and looked at the whole history of CVEs in Envoy, and what we discovered is that over the entire lifetime of Istio, there were two CVEs in the layer 4 code. Every other CVE happened where the complex things happen, which is layer 7. So by taking layer 7 off the node, by seriously detaching it, you get a lot of benefit, because you don't need to upgrade the node-level proxy that's in charge of zero trust so often. So that's a very high-level view of ambient. Now, what are the advantages? Pretty simple, right? First of all, we get rid of all the sidecars; that by itself is a huge cost saving. This is another picture showing the same thing. At Solo we actually wrote a blog that tries to calculate it.
We built an open-source project that shows the difference between deploying the sidecar model versus ambient, and in the cases we tried, we saw a huge improvement in how much you need to spend on resources. So again, we recommend you go and read it. But the idea is that it saves you resources, and that means money. The other thing that is very, very interesting, and this is only a high-level illustration, I just want to make a point and hopefully I'll make it successfully, is latency. Layer 4 is usually very simple and very quick, almost nothing, so let's call it 0.5 milliseconds of latency per proxy. At layer 7, of course it depends what you're doing, but let's call the average around two milliseconds. What I want to show is this: if you're not using any mesh, you're not adding any latency; there's no proxy, so there's nothing added. If you're using the regular sidecar model at layer 4, you have two layer 4 proxies, one at the source and one at the destination, so that's about one millisecond. If you're looking at ambient, it's exactly the same: two layer 4 hops, so it takes the same time. If you're looking at the regular sidecar model for layer 7, you have two layer 7 proxies, so that's around four milliseconds. But in ambient, we basically traded one of the layer 7 proxies for two layer 4 hops, and that's still faster: we expect it to be around three milliseconds. We have some measurements and we'll be able to share them with you soon. So in a nutshell, even in terms of performance, even though there is another hop, we should expect to see some reduction.
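The arithmetic behind this comparison can be sketched in a few lines. The per-hop numbers here are the illustrative figures from the talk (0.5 ms per L4 proxy, 2 ms per L7 proxy), not measurements:

```python
# Back-of-the-envelope added latency per scenario, using the talk's
# illustrative per-hop costs: ~0.5 ms per L4 proxy, ~2 ms per L7 proxy.
L4_MS = 0.5
L7_MS = 2.0

def added_latency(l4_hops: int, l7_hops: int) -> float:
    """Extra request latency introduced by the proxies on the path."""
    return l4_hops * L4_MS + l7_hops * L7_MS

scenarios = {
    "no mesh":          added_latency(0, 0),  # direct pod-to-pod
    "sidecar, L4 only": added_latency(2, 0),  # client + server sidecar, L4 work
    "ambient, L4 only": added_latency(2, 0),  # client + server ztunnel
    "sidecar, L7":      added_latency(0, 2),  # two full L7 sidecars
    "ambient, L7":      added_latency(2, 1),  # two ztunnels + one waypoint
}

for name, ms in scenarios.items():
    print(f"{name}: {ms} ms")
```

The point of the table is the last two rows: trading two L7 sidecars (4 ms) for two L4 hops plus one waypoint (3 ms) comes out ahead despite the extra hop.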
And the last one, which in my opinion is the most important thing in ambient, and I'll say it again and again: working with hundreds of customers across different environments, I'll tell you, operations are not that easy, and it's extremely important to make them simpler. What we do now, and Yuval will show you in a demo very soon, is very simple. You apply the mesh. The application has been running there for years; you don't need to redeploy it, you don't need to care about it, it's all great. Then you apply the policies you're interested in, and they're enforced. When you don't want it anymore, you remove the mesh and everything continues to work. You really do not need to do anything to the application, which I think is extremely, extremely powerful. So instead of me talking about it, I'll let Yuval show it, and then we'll come back to me to talk a little more about the future. Hey Yuval, we're moving over to you. Thank you, let me share my screen here. So you should be able to see my screen. I have a Kubernetes environment running locally on kind, and you can see that I have a bunch of applications: a sleep application and the hello applications that we'll use in this demo. First things first, let's just check that everything's working as expected, and we'll make a call to the hello application from the sleep application. So this is just exec'ing into the sleep pod, and you can see I get responses from both hello pods, a bunch of them; it's bouncing between the two instances I have here. All right, so far there's no Istio installed. The pods are running; you can see they're 47 minutes old. Now let's install Istio in ambient mode to get the demo started. We run this command, istioctl install, and we set the profile to ambient, and let's watch all the namespaces. You can see Istio is installed, and the ztunnels are installed.
As I mentioned, there's one ztunnel per node, and this kind cluster has three nodes, so I have three ztunnels, and you can see they're installed on different nodes. Right, so the ztunnel is the L4 component that's installed one per node. All right, so far everything is installed and traffic is still working. Right now nothing is going through the mesh; everything is still as normal. The mesh is installed but not applied. To apply it, we set a label on the namespace, very similar to how Istio works today. And I think that's a theme with ambient: we try to keep everything very, very similar to how Istio works today, so we're not trying to make you learn new things, right? So we have a label that we apply, it's slightly different from the sidecar injection label, and it declares the namespace to be part of the Istio mesh in ambient mode. Now, when we apply this label, you can see the calls are still working. But how do you know it's going through the ztunnel? Let's look at the logs for the ztunnel. Let me just exit out of here, open the logs for the ztunnel, clear this out, and make a few more calls. You can see we've started getting HBONE traffic, which is traffic encapsulated with HTTP CONNECT, going through the ztunnel, destined for the port of the pod, very much as expected. Note this is showing only one node; that's why you don't see all the calls here. Only one of the hello world pods will show up, because the other one is on a different node. So now we know it's going through the ztunnel. That's great. Now let's see what's inside the traffic. I'll open up termshark on that node; let me give it permissions. Here we go. Okay, now let's make some more calls. All right, perfect. There's a bunch of noise here, but you can see that the packets going to the node are TLS encrypted; they have transport layer security.
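The steps Yuval runs in this part of the demo look roughly like the following. This is a sketch, not a verbatim capture of his terminal: it assumes the `default` namespace and an ambient-capable `istioctl` build, and the namespace label shown is the one used by recent Istio releases (the exact label name changed during ambient's early development):

```shell
# Install Istio in ambient mode
istioctl install --set profile=ambient -y

# ztunnel runs as a DaemonSet: expect one pod per node
kubectl get pods -n istio-system -o wide

# Opt a namespace into the mesh (no restarts, no sidecar injection)
kubectl label namespace default istio.io/dataplane-mode=ambient

# Watch HBONE (HTTP CONNECT) traffic arrive at the node-local ztunnel
kubectl logs -n istio-system -l app=ztunnel -f
```

Note that removing the label is the whole "undo": workloads drop out of the mesh without being touched.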
So you can see that everything is going through the ztunnel, and everything has been transparently encrypted with mTLS. One sec, I just want to show the pods. And of course the demo gods made my tmux get stuck; I'll just kill that terminal and start another one. Sorry about that. If we open k9s, you can see that the pods are still there, now 51 minutes old. Nothing has changed in the pods themselves, right? So everything is looking great. The traffic is now transparently flowing through the mesh. We didn't have to restart anything; there's still just the one container, no sidecar added. Everything is ambient: the mesh is in the background and traffic is just flowing transparently through it. So, let's talk a little bit about layer 7. We've just seen the layer 4 level; let's see a demo with layer 7. Oh, sorry, my editor just crashed. Can you reopen it? Apologies. Okay. Let me cd into the folder and show you what I'll be applying. First, let's talk about the resource for the waypoint proxy. As you can see, we use the new Kubernetes Gateway API, and this is how we tell Istio that we want to create an L7 waypoint proxy for a workload. So we tell Istio that for the hello world service account, we want to create a gateway. Let's apply this resource. Perfect. Now, if we look at the pods, we can see the new waypoint proxy that was just created. Perfect. Now, how can we know that it's actually working? For that, let's look at this VirtualService we have here. We can add a delay with Istio to demonstrate that the policy does indeed apply. So, let's apply this VirtualService. All right, now we have a VirtualService created, and the traffic will flow from the ztunnel to the waypoint proxy to the second ztunnel to the service.
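The two resources Yuval applies look roughly like the following. The waypoint's Gateway API shape has evolved since this early demo (the original used a per-service-account annotation; recent Istio releases use the form below, which `istioctl waypoint apply` can generate for you), and the `helloworld` names are assumed from the demo. The VirtualService delay is standard Istio fault injection:

```yaml
# A waypoint: a dedicated L7 proxy for one workload identity
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  namespace: default
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
---
# A 5-second delay on every request, to prove L7 policy is being applied
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloworld
  namespace: default
spec:
  hosts:
  - helloworld
  http:
  - fault:
      delay:
        percentage:
          value: 100
        fixedDelay: 5s
    route:
    - destination:
        host: helloworld
```

Fault injection is a handy smoke test here precisely because it is impossible to miss: if the curl suddenly takes five seconds, the waypoint is on the path.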
And now, if we repeat our curl, this time with the time command, you can see that it takes a little longer; it should take around five seconds. It's 5.2 seconds, obviously, to account for the latency of the proxies themselves. So you can see that our VirtualService did indeed apply to the waypoint proxy: the traffic flowed from the ztunnel to the waypoint, the waypoint applied the fault injection and sent it on to hello world, and we see the response from hello world after the five-second delay. Now, everything's working well. Let's say we're done with the demo and want to get rid of the mesh. We basically do the steps in reverse: we remove Istio. So, you know, I need to apply this. Yeah, there we go. Remove everything, and you can see this brings our mesh back to the start. We can still make calls. So traffic transparently moved into the mesh, and when we removed the mesh, everything was restored to normal; now there is no mesh and traffic is just flowing directly between the two pods. And the applications, the microservices, are still running. You remove the mesh, they're still running, still the same age, about an hour old, still one container, no sidecar, everything working as it did before the waypoint proxy was removed. It's all back to step zero, where I started the demo. So, again, that should take the operational burden down dramatically. And the reason is that before, every time you needed to do something, you had to go to the application team and tell them, hey, I need to restart, I need to upgrade, and there's a lot of communication between people. I think this gives you control, and operations become extremely, extremely simple. Honestly, if you ask me, this is the strength of ambient.
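The "steps in reverse" Yuval describes might look roughly like this sketch (resource names assumed from the demo, and the namespace label shown is the one used by recent Istio releases):

```shell
# Undo in reverse order: L7 policy, waypoint, mesh membership, then Istio itself
kubectl delete virtualservice helloworld
kubectl delete gateway waypoint
kubectl label namespace default istio.io/dataplane-mode-
istioctl uninstall --purge -y

# The workloads were never touched: same pods, same age, one container each
kubectl get pods
```

The asymmetry with the sidecar model is the point: no rollout restarts appear anywhere in either direction.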
Yeah, and I'll just mention that the waypoint proxy is just a regular deployment. It can be scaled up and down, you can do a rolling upgrade, and it's not coupled to the application lifecycle. Exactly. So, and again, we'll take more questions and Yuval can talk much more about the implementation details, but in a nutshell, let me summarize what ambient brings to Istio. Again, it's a mode; the sidecar is not going anywhere right now, and the default implementation is still the sidecar. What we're hoping, working with a lot of people from the community and from the industry, is that we'll be able to bring ambient to production in the next six months to a year. If we do that, then we at Solo believe ambient will become the default mode for Istio going forward. In terms of the advantages, as I said: it reduces cost drastically, because you have fewer proxies running, fewer API calls, and a lot of the operational load goes away because istiod has fewer proxies to push configuration to. In terms of simplified operations, I think the demo speaks for itself; it just really becomes easier, and specifically, what we see a lot, things like CVE handling become extremely simple. There's everything related to improved performance. And of course, the most important thing, and I think it's really, really important: security has to be equivalent. We can't play games with security; security was the first goal of the service mesh, and therefore security has to be equivalent, with all those benefits coming on top of it. So that's pretty much it. I'll just say, we think this is going well, and we'd love for you to take a look at it.
Right now, if you look specifically at the ztunnel, you'll discover that it's currently an Envoy-based proxy. There's a good question of whether it has to be. There are some implementation details there that we need to fix in terms of scaling, because of the way Envoy works, but there's a lot of advantage in using something we know and love. Because we weren't sure whether that's overkill or not, what we did was create an interface: we made that component of ambient swappable. Batteries are included; right now it's Envoy, but the idea is that you can swap in your own implementation. We personally believe there's some interesting stuff there; for instance, we could potentially replace it with eBPF. Again, there's a problem, because how do you do HBONE in eBPF? It's a great question. But we think eBPF can definitely play a big role here already. Today, the way we redirect traffic from the application microservices to the ztunnel, just like to the sidecar, is with iptables. We think that with eBPF it would be way simpler. That's something Yuval is just wrapping up right now, and we'll contribute it to the community right after. So we think there's a lot of advantage we can get from eBPF to enable the mesh, and again, that's top of mind for us. I'll finish by just saying, and then hopefully people will ask questions, and Yuval can answer the more technical ones, that we've put it out there. It's Istio, right? It's a mode of Istio. There's a getting-started guide, there are a lot of blogs explaining the why and the how in more depth than I did here, and there's a video of the demo. We really care about education.
So what we did is create a workshop, and it's on our Academy. You can go and take it; it's free, you don't need to pay anything. Just go play with it; it will make your life easier when you start experimenting. At the end, you can even take a certification test, and if you pass, you get a certification showing that you know what ambient is and how it works. This one covers the fundamentals, but in the future we'll do something more advanced. Again, we put out blogs around the launch, and you'll see a lot of other blogs coming from us that explain the how and the why: How does it work with VMs? How does it work with Knative? How does it work with zero trust? And so on. So stay tuned; it's very important to us to educate. And yeah, as I said, this is an open-source project. The sidecar is not going anywhere anytime soon, but we are starting to work with our customers on ambient, and we believe we can get it to production readiness relatively fast. There is also a mode where you can decide what runs with sidecars and what doesn't: you can potentially run them next to each other in the same cluster and decide which you're using where. So that's it in a nutshell. Again, there will be a lot of education to do here. We would love for you to try it, and we would love to get feedback. But it's also an open-source community, right? It's part of Istio. So we'd love you to come and join and help us make it better, and there are already companies jumping in to help. So again, we'd love your help to make it better, but we really believe this is a better model, and I think it will make all our lives, and the adoption of service mesh in general, way easier. So yeah, that's what I have. I don't know if you want to add something. Yeah, we tried to make it incremental.
Like Idit said, you can use sidecars and gradually introduce ambient via the namespace labels, and everything is familiar: it's VirtualServices, the same policies, they look similar. We really want to make it easy to test out, to get started, to start small and grow over time, and we're really looking forward to feedback. Fantastic. So if we have nothing else to add, we can answer questions, because I see there are some in the Q&A. You want us to take them? Sure, yeah. Okay, so, sorry. No, no, go ahead. Yeah, so the first question is about how to ensure redundancy of the ztunnel agent on a Kubernetes worker node. Right, so the way I look at the ztunnel, I look at it as part of your data path. In a sense, it's like asking how you ensure redundancy of Linux in a more traditional setup. The way we ensure that is by making it reliable. And how do we make it reliable? In general, the more code you have, the more bugs you have. So we scoped down what it does: the ztunnel only does layer 4, and it provides identity and encryption. It's very well defined, with a relatively small amount of code, so there's less chance of bugs. And obviously it runs in Kubernetes, so we can do all the traditional things: mark the pod as critical, make sure it restarts, and all that. But the key here is to really increase its reliability by reducing the scope and by making it mature. You know, Linux twenty years ago crashed a lot, and now it doesn't crash so much. It's a matter of making it mature, well tested, well used; with a small scope you won't have a lot of bugs. And that ties into a different question I see: is there any specific reason why Envoy is not inherently multi-tenant? So, it's not anything to do with Envoy specifically.
But in general, the more code you have, the more bugs you have, and layer seven, the application space, is always evolving. There are new protocols, there are new ways to do security. So there's a lot, a lot of logic that goes into layer seven that just doesn't go into layer four. That's why we're more reluctant to create a multi-tenant Envoy environment, right? And when you take it to the next level with WebAssembly extensions, for example, a poorly written WebAssembly extension can take down all of Envoy. So it's not something that you want in your base-layer component, you know, in your ztunnel component that sits on every node and is responsible for all your traffic, right? If you have a proxy that your team owns, you can have your extensions there, because their impact is just you. So it's nothing specific about Envoy. It's more the separation between, you know, the layer-four traffic flow and the layer-seven flow, where layer seven is a lot more complex. Fantastic. The other question that I see here is around service entries and how they will work. I will just say that we are writing a blog on exactly that right now, so we'll be able to give more details. I don't know if you have something to add to it? Yeah, not much to add, but we plan to make all the existing Istio features work. Service entry is a bit complex, a little bit trickier; like Idit said, there's going to be a blog. Yeah. Are there any architecture changes with the Istio control plane in ambient mesh versus the traditional one? So there are not two control planes. It's one control plane, right? It's just a different mode. But maybe you can say more. Yeah, exactly. If you're a bit familiar with Istio, it already has two different modes today, right? You can have a sidecar, you can have a gateway. So this is another additional mode, ambient, right? So it didn't change. It's the same istiod, the same way you deploy it.
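As a rough sketch of "it's just another mode": assuming the `ambient` installation profile, the same istiod is installed with an IstioOperator config along these lines (the resource name is hypothetical):

```yaml
# Same control plane, same deployment path; only the profile changes.
# A sketch, assuming the ambient profile of the IstioOperator API.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: ambient-install    # hypothetical name
spec:
  profile: ambient         # deploys istiod plus the ambient data-plane pieces
```
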
We just added more to istiod so it can handle ambient. Yeah, but very important: again, it's Istio, it's just a mode. There's nothing different. This is how you can actually run the sidecar model next to ambient on the same cluster with no problem, because for Istio it's just another mode. It's not a big deal. How does this compare with Cilium service mesh? Would you be able to comment on this? I think both of us can definitely comment on this. You want to start? Yeah, sure. So for Cilium service mesh, let's start with Cilium's layer-seven approach, which is to have a single layer-seven proxy per node. We already mentioned why we're against that: it has reliability issues, cost-attribution issues, and noisy-neighbor issues. So we're not big supporters of that. They did mention that they plan to address it, but that hasn't happened yet. And when they address it, we'll respond accordingly about what we think about it then. I just don't want to guess what's going to happen. Yeah, go ahead. Now I will add one more thing that is very important. It's really important to separate the vision from what Cilium service mesh is today. There's a huge, huge gap, and it's important to understand it. The vision is what you all know, because it's been very well marketed. But the implementation of Cilium service mesh right now is only an ingress implementation. And I think it's very important to mention that. It's not doing a lot. It's not really integrated with Istio in any form beyond some forking of it. There is a large gap between what is really there and can run in production right now versus what cannot. I will say that when I'm looking at, personally, my customers and our users, our customers really like service mesh and feel that it's really, really good. They're excited about using it. They want a mature one.
It took us a lot of time to build one that everybody can use in production. And our customers are not very interested in waiting five years for Cilium service mesh to mature. Right now it's very, very far from the capabilities, security, and maturity that Istio has. Yeah, I mean, let's see. Where can we see the ambient source code and the eBPF efforts? Okay, so currently it's on my laptop. Once it's ready, I will make PRs against the public Istio repositories, so just be on the lookout for those in the Istio public repo. Okay, next question. You want to read it? I believe the question is: given it's the same control plane, can we manage both ambient and traditional Istio? And the answer is yes. You can have both on the same cluster. We really want to enable incremental adoption. Yes. And that's also important for an upgrade or a move from one to the other. So let's say that right now all your infrastructure is on sidecars, but you're interested in starting to adopt ambient. Maybe you don't want your whole cluster to migrate at once; you want to do it gradually, slowly and safely. So this is exactly why we put this feature in from the get-go: it was important to us to support it from the start, so people will be able to do that. And there's a follow-up question: can services in ambient and sidecar modes communicate? And the answer is yes. So ambient uses a protocol called HBONE, which is an overlay network that's HTTP based. And as part of ambient, we've added HBONE support to sidecars so they can cross-communicate. So you can have, for example, one namespace in ambient, and that namespace will communicate with the rest of your cluster, and vice versa. Awesome. So hopefully, I mean, you will like it.
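The mixed cluster from that last answer can be sketched as two namespaces, one in each mode, assuming the standard sidecar injection label and the ambient dataplane label (namespace names are hypothetical):

```yaml
# Classic mode: pods in this namespace get an Envoy sidecar injected.
apiVersion: v1
kind: Namespace
metadata:
  name: legacy-apps                    # hypothetical
  labels:
    istio-injection: enabled
---
# Ambient mode: no sidecars; the per-node ztunnel carries the traffic.
apiVersion: v1
kind: Namespace
metadata:
  name: ambient-apps                   # hypothetical
  labels:
    istio.io/dataplane-mode: ambient
```

Services across the two namespaces can still call each other, since sidecars were taught to speak HBONE as well.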
We really, really encourage you to join us and the effort in the Istio community to basically make it production-ready and add every feature that is missing and everything that customers need. And yeah, if you have any other question, you can always ask it here in the Istio community Slack or in the Solo.io community Slack. We also have a specific channel dedicated to questions around ambient. And as I said, we will put in as many resources as we can to educate. But I really encourage you to go and try the Instruqt workshop. I think it will give you a good sense of how it works, and also, you know, let you get your hands dirty a little bit. I think it will be very, very clear after it. So we'd love to get your feedback. Yuval and I are both on the Istio Slack. So yeah, we'd love to get feedback, and if you need any help, we are here. Yeah, I'll just say, after working on this project, I'm very excited to see it announced live. And I really, you know, really want to hear from the community: kick the tires and let us know what you think. Yeah, it was a long few years of working on this, and we're actually pretty excited to finally see it live. Yeah, I think with that we can hand it back to the Linux Foundation to wrap things up. Thank you so much. Thank you both so much for your time today. And thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day.