Thank you, Candice. Thank you very much for the opportunity to present tonight. We'll be talking about something quite important: our role in making the world a better place. We're all part of the same human family, and I think this is one of the most important things we can do together. We'll show you something that we think is helping, or could help, in reaching the goals we've set ourselves to prevent disaster in the future. So, I'm Alessandro, and Marino is with me. We work for a company called Solo.io. We love Istio and service meshes, of course, and we're a company very dedicated to advancing open source and the adoption of service mesh. We also want to share how we can look at the resources our software consumes while we solve the application networking problem. Do you want to introduce yourself, Marino, before we go on? — Yeah, hey everyone. My name is Marino Wijay. I'm a Principal Developer Advocate at Solo, and much like Alessandro said, a lot of our focus is on the open source application networking space. Why is this important to me? I come from a network engineering background, and seeing sustainability efforts make their way into cloud environments, data centers, and even cloud native is very exciting. So I hope we get to cover some deep technical details, and we'll also talk about the sustainability side of what we do in cloud native. — Yeah, exactly, thank you. Actually, the networking angle.
It's maybe overlooked when we talk about sustainability of software. We always talk about compute and big batch computation when we discuss software's resource consumption, but networking plays a big role, and Marino has a track record on this; we have networking foundations workshops, which we'll share later on. So I hope we'll see more about the impact of networking on sustainability. Now, allow me to digress a little. This is important to me, and I think it's important for a lot of people. We're all part of the same human family, and this talk is about the choices you can make today, what you can do to affect the course of what's happening on the planet. We all know we're at a crucial point in history: what we do this decade is going to matter for the centuries to come. This is the moment we all have to come together. Nobody is left out; everybody has something to do and to say about this, and we're here to talk about some of those choices. This topic is very dear to me because I found an interesting graph, from an MIT paper from 1973, about predicting where we're going as a species, where we're going as a society, and what we can do to build a better future. If you look at this graph, you'll see several projections. This is from 1973, and I know some people have different opinions on it, and it has of course been partially debunked. But if population grows and consumption doesn't stop, there is a point in time where things stop. That's the business-as-usual scenario, where we just keep increasing the amount of resources we consume.
If we don't do anything about our consumption of oil, gas, and other non-renewable energy sources, that's where we're going: we'll reach a point where there's not much more oil to burn and not many more resources to consume, and we'll decline rapidly. Population will decline too, and it's going to be a bad place for everybody. So are we really going there? Can we do something about it? Yes. As part of the Cloud Native Computing Foundation — I'm a CNCF ambassador, I've been around for a long time, and I consider the foundation almost like a family — there are efforts to raise awareness of these topics. For about a year now we've had a Technical Advisory Group on environmental sustainability, and I encourage everybody to join. It's an open group that specifically analyzes and thinks about consumption models: how the open source software we all love and care about under the CNCF considers resource consumption, and so on. We meet every week, and we had special meetings at KubeCon in Chicago a few weeks ago. One of the things we discuss, of course, is software carbon intensity: how much carbon equivalent is produced per unit of computation? That's the crux of the problem, the number everybody's trying to optimize. How can you emit less while keeping the same amount of computation? We all know software is eating the world, or has been for decades now; the world runs on software.
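The metric mentioned here is the Green Software Foundation's SCI (Software Carbon Intensity) specification; as a rough sketch, the formula is:

```latex
% SCI: carbon emitted per functional unit of work
\mathrm{SCI} = \frac{(E \times I) + M}{R}
% E = energy consumed by the software (kWh)
% I = carbon intensity of that energy (gCO2e/kWh)
% M = embodied emissions of the hardware, amortized over its use
% R = functional unit (e.g. per request, per user, per pod)
```

Running fewer containers lowers E (and the share of M attributed to the workload) while R stays the same, which is exactly the lever discussed in the rest of this talk.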
So we're not advocating for reducing the amount of computation needed, because of course we need that computing power to solve humanity's problems and to live more comfortably. What we're saying, as advocates for sustainability, is that the same amount of computation should produce less carbon, less CO2, so we can reach the goals we've set. There are also interesting courses from the Green Software Foundation about this; I encourage you to look them up and come to the TAG Environmental Sustainability to discuss them. So, what is ambient? It's interesting — that's part of why this triggered me, talking about ambient mesh and the environment — because in Italian, "environment" is "ambiente": environmental protection is "protezione ambientale." So for me those two words are practically equivalent. Why am I talking about ambient? You can tell we love open source, and in our daily job we deal with a lot of application networking. We aim to solve some of the thorniest issues you face with microservices and cloud native architectures: the networking, the security, the complexity that comes with running distributed microservices. Our way to solve it is a service mesh. A service mesh is an architecture that classically injects sidecar proxies very close to the application, so you can control the traffic coming in and out of your application pods. This has been around for a while, and it's what we consider the gold standard for service mesh. There are many service meshes within the CNCF and on the market.
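As a concrete sketch of that sidecar model: in Istio, automatic sidecar injection is typically switched on per namespace with a label, after which every pod scheduled there gets an extra Envoy proxy container added to it (the namespace name here is illustrative):

```yaml
# Enabling automatic Envoy sidecar injection for one namespace.
# Every pod created in "shop" afterwards carries its own proxy container.
apiVersion: v1
kind: Namespace
metadata:
  name: shop
  labels:
    istio-injection: enabled
```

Each injected sidecar reserves its own CPU and memory, which is precisely the per-pod overhead that ambient mode, described next, removes.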
They use this architecture, but it's not the only one there is. We listen to customers, to end users of service meshes — of Istio in particular — and the community came together and said: there are better ways to do the same thing, to achieve the same degree of control, security, and observability for your applications without wasting compute cycles on sidecars. A short intro to Istio: it's been around almost as long as Kubernetes and the CNCF. It wasn't part of the CNCF for a few years, but it was recently donated, and it's now effectively one of the largest and most popular projects within the CNCF. It has also been promoted to graduated status, so it's a graduated, production-ready service mesh. And less than a year ago we introduced the ambient mode for the Istio mesh. What is it? It's a new architecture where we do away with sidecars. We don't deploy a sidecar per application pod; instead we use a proxy-per-node architecture. What do you achieve with that? Simplified operations, because there are fewer moving parts, and better performance. We're in the process of establishing baseline performance numbers for ambient. But the reason we're here talking about ambient is those fewer moving parts: you're literally running fewer containers and less compute, so you get less consumption, which means consuming less energy and ultimately emitting less CO2. It will reduce the software carbon intensity of your mesh-enabled architecture. So how does it work, very broadly?
Say you have a classic sidecar proxy architecture and you want to migrate to ambient mesh. It's as simple as removing all the proxies — or never deploying them, if you start from scratch with a greenfield deployment — and having one proxy per node instead. You can tell from this simple picture that you go from, I don't know, two, four, six... eighteen proxies down to one, and that reduces your carbon footprint considerably. You could say: sure, that per-node proxy needs to be a bit bigger and handle more connections. That's true, but it's a specialized proxy. Part of the move to ambient was to split layer 4 and layer 7 (I'm talking about the OSI layers): TCP connection handling is performed by the per-node proxy, while layer 7 — HTTP connection handling, which also means all the sophisticated layer 7 policy and traffic routing — is handled separately. When you split the two and put a proxy on the node to do just the layer 4 connection work, it becomes much simpler. That also enabled the Istio community to rewrite that proxy in Rust, which is obviously a very efficient programming language. So the per-node proxy is now written in Rust, while layer 7 is still handled by the trustworthy Envoy proxy. We also have, of course, our own implementation based on the open source one. What we're saying here is that we replace the sidecar proxies with what we call ztunnel, this node-based proxy, and the ztunnels form a mesh between the nodes. You get exactly the same features without the burden of running many, many proxy sidecars. And then there's the waypoint proxy — the dedicated layer 7 proxy that only handles communication for the applications that actually need layer 7 handling of their connections. If you don't need it, there's no waypoint proxy, and you'll be happy with just the ztunnel. If you do want layer 7, you just have to deploy one proxy — but it's not one proxy per pod per application; it's one waypoint proxy per workload identity. I won't go into detail here, but it's really per service account. Ambient also helps with adoption, because you don't need to restart applications. You just deploy ztunnel and Istio, and you onboard applications into the mesh as you see fit, as you progress with adoption. And of course Istio can still work with sidecars as well, so you don't need to change everything all of a sudden before you start using ambient: it's a gradual, progressive adoption of the service mesh. So what does this mean for your wallet? You go from the sidecar model — which is quite intense and quite invasive, because these proxies are injected right into your application pods, and they do consume memory and CPU, since even just sitting there they have to handle every connection in and out of your application — to ambient, where even with a waypoint proxy deployed you still see quite a significant reduction in usage. This is important for your wallet, but also for the planet. You keep the same functionality — you're still able to do sophisticated things like application tracing or layer 7 processing of your requests — while keeping resource consumption to a minimum. Now, this is a projection; it's not science, of course. But imagine the blue line is your application without any proxy, without any service mesh, and you grow your application, you move on, and you're successful.
Your application is being adopted left and right, and you scale up to a thousand nodes with 50 pods per node. Without a mesh you scale linearly. With sidecars, every pod also brings in another container with the Envoy proxy, so resource usage grows with the number of pods — a factor of 50 per node — faster than the number of nodes your application requires. With ambient there is an overhead too, one proxy pod per node, but it's nowhere near the overhead you'd have with sidecars. So even if you scale your application to where everybody wants to be — we made such a great application that everybody wants to use it, and we need to scale — we're not victims of our own success, and we can still keep the resource consumption of our application under control. That's what we wanted to share about ambient mesh. Please, Marino, take it from here. — Yeah, thank you so much for sharing all of that, Alessandro. Before we continue, some questions came up in the Q&A, and I'll answer them live. Let me read the first one: when a node restarts, how do you guarantee that the per-node proxy comes up after the application pods, if that makes sense?
So, Karim, one of the considerations about the proxy itself, specifically ztunnel: we don't want to think of it as a node proxy per se; it's more of a remote proxy. Applications or workloads that need to communicate with other workloads in the mesh just need a ztunnel available to them, ideally very local. Technically it doesn't have to live on the same node, but the preference is the same node, because that cuts back on latency and unnecessary hops. But here's the other consideration: if your node restarts, Kubernetes knows about it, so Kubernetes will work to redeploy the application before that node comes back online. It will deploy the application on another node — provided you haven't created tolerations or policies that disallow the workload from running on other nodes — so Kubernetes takes effect beforehand anyway. And when the new node comes online, if some rebalancing is needed, Kubernetes can effectively take care of that as well. The second question: can you please explain "per what" exactly — namespace, service account — what would do the job?
I'm not entirely familiar with what the question is, but if I understand it correctly, I think you're trying to understand the association between ztunnel and the service itself. Ztunnel can accommodate multiple workloads that need to communicate with other workloads across the mesh, so it can assume the identity of the workloads it fronts. When it's on the source side — a source workload sending data — the source ztunnel picks that traffic up, assumes the workload's identity based on its service account, and then forms mTLS to another ztunnel on the destination side. That's what the linkage is. Next question: ztunnel in ambient mesh and the gateway pod in multi-cluster Kubernetes both build encrypted tunnels — one at layer 4 using mTLS, the other at layer 3 for multi-cluster. Do you think both communities could work together to have just one pod doing both, depending on the use case? Okay, this is an interesting one, because you're addressing two different layers of a network. There's the layer where your data plane needs to be constructed: your nodes in a multi-cluster environment form that data plane, specifically for your workloads — pods, applications, services — to run on. That node network layer is what you're trying to provide connectivity for in a multi-cluster environment, which can be achieved through tunneling mechanisms — for example IPsec, or some SD-WAN or network-extension technology. You could almost treat all of these nodes together as if they were a single super-cluster; in reality they may all be independent of each other, but they have IP reachability. Once that's established, the layer above — the application layer, where applications need a fabric to make service calls to each other — is where the service mesh comes in. Now, the two layers actually do work together already. It's just a matter of how you perceive your network: whether it's exposed publicly, or it's a private network you manage at an enterprise level. So there are a lot of different answers to "could both communities work together," because they already do. If you think about it, cloud environments already do this today, providing software-defined networking, so to speak, across multiple clusters, and then you run a service mesh on top, because your services need to find paths to each other, communicate with each other, and attest each other based on their identity. I hope that answers the question. Then Karim follows up: this is for the waypoint proxy — how do you decide how many waypoint proxies to use? Is it per app or per namespace? It's actually per workload. You establish a waypoint proxy based on the workload you're trying to provide some layer 7 intelligence for — a canary deployment, for instance. You can think of it as deploying the layer 7 part of Envoy, except as a waypoint, just to do these advanced layer 7 functions — even layer 7 authorization, if that's something you're after. How you decide is based on the needs of your workloads. There could be many, and I understand this could become a bit of a scale issue. In that situation, it's a discussion to have with an enterprise or commercial offering that helps you address some of the open source limitations of ambient mesh, because open source ambient mesh may not scale the way you'd hope, and you'd need something like what Gloo Mesh or Gloo Platform offers to achieve that level of scale. Okay, let's move on to Gateway API, because it's actually another form of sustainability, in my opinion. And the reason is this: if we look way back at how we used to do ingress — Alessandro, if you can move to the next slide — the way we did it before was to create a Service of type LoadBalancer, or NodePort, or maybe ClusterIP. If we had multiple services, we'd have to consume multiple load balancer public IPs, which is an exhaustion of a scarce resource: IP addresses. IPv4 addresses are very, very limited, which is why we're trying to migrate toward IPv6. But that bridge is a very long one — it's not something we can cross overnight. There's so much going on in the public internet and in private networks; dual-stacking is a possibility, but there's also the comfort level: getting folks comfortable working with IPv6 is a whole other conversation altogether. To overcome a lot of these circumstances we have technologies like NAT — network address translation — that help. But the reality is, to make this much more sustainable, we have to move away from statically pinning things to other artifacts. We don't want to statically create a service and pin it to a particular load balancer or IP; we need this to be a bit more flexible. Alessandro, the next slide, please. If you think about how we overcame this: the Ingress resource presented itself as a capability where we could overload a single load balancer with multiple ports and respond to different services on those ports, or different backends or paths.
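A minimal sketch of that Ingress pattern — one load balancer fanned out to several backends by path (the hostname, service names, and ports are illustrative):

```yaml
# One load balancer / public IP, several backends, selected by host and path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: analytics-ingress
spec:
  rules:
    - host: my.analytics.example.com   # illustrative hostname
      http:
        paths:
          - path: /reports
            pathType: Prefix
            backend:
              service:
                name: reports-svc      # illustrative backend service
                port:
                  number: 8080
          - path: /dashboards
            pathType: Prefix
            backend:
              service:
                name: dashboards-svc   # second backend behind the same IP
                port:
                  number: 8081
```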
So it was a great way to handle something like this, but the reality is that Ingress itself added another layer of complexity, because the way you consume a load balancer isn't the same across every provider. You have to layer in annotations, and annotations become very sprawly, so to speak: you end up with annotations across many different Ingress configurations to decide and dictate how that load balancer should accept and listen for traffic. There are ways to overcome this, obviously, and the way Istio did it was the Istio ingress gateway: you had this gateway artifact, and you'd create virtual gateways to expose different services in the mesh to the outside world. That's how Istio overcame the ingress challenge — the challenge of using annotations and having to maintain them over time. Now — sorry, Alessandro, if we could move to the next slide — if we really think about sustainability, the way around this, if we don't want to deal with annotations, is to deal with gateway classes instead and conform to a common standard that all ingress providers conform to. So what does that mean?
It means that Kubernetes presents a common standard, called the Gateway API, so you can define the way you expose your apps and your HTTP routes — and inject things like TLS — in a way that is consistent regardless of the ingress provider you choose. For example, say today I decide I want a basic ingress controller like NGINX, but it offers Gateway API functionality, and down the line I decide I want something a bit more advanced. The gateway class gives you the swappability to say: today I'll use NGINX; tomorrow I'll use Istio — the Istio service mesh, maybe even ambient mesh specifically. Now, what actually changes here is how you approach exposing services. Alessandro, the next slide, please. It's the structure of how we think about this. We have a GatewayClass, and that's tied to a particular role within an organization: whoever decides and dictates what the infrastructure should look like — "I'm going to pick Kubernetes as my container orchestration and API system, and simultaneously I'm going to pick something like Istio as the service mesh that provides gateway functionality." Then there are other personas who decide how these artifacts get consumed, and what is actually consumed. For example, the cluster operator says, "I'm going to spin up a Gateway that exposes services listening on this domain name," while an app developer says, "this is the specific service, and this is what I want it listening on, based on this HTTPRoute." So let's look at this in an expressive format and see what it looks like — next slide, please. If you look at the traditional Ingress resource, we have a name, some standard metadata, a host — our my.analytics.example.com domain — a path we're trying to get to and a particular port number, then another path and another port number that ensure we get to the right destination. But this is specific to one ingress controller: if it's limited in functionality and I decide to swap, I actually have to make changes to make that possible. That brings us to the new format, where two resources are spun up. The Gateway resource effectively lets us decide who the gateway provider is — it could be NGINX, or it could be Istio — and additionally helps us decide how we expose these services inside our cluster. So, for example, this backend foo-service is going to be exposed through the Gateway we've created up here, called gateway-one, in the foo namespace. The interesting thing is that it listens on port 80 — very much like what we saw before, where we also listened on port 80; this isn't the exact same example, but we're listening through one Gateway, and we have an HTTPRoute that knows exactly where that service lives: foo-service, a Kubernetes service. Next slide, please. Now, Gateway API itself is still in development; it went GA in early November. There are a lot of capabilities still being worked on. You can leverage the HTTPRoute functionality today, and there are experimental resources like TCPRoute and UDPRoute, as well as TLSRoute and GRPCRoute, but those are still a work in progress — a lot of development is underway. So what does this mean?
Say I decide I want to deploy a gateway backed by Istio. I set the gateway class to istio; when I specify my Gateway resource, it pulls that gateway class; and then I also specify the HTTPRoute that exposes whatever Kubernetes service I want, which leverages this gateway — the Istio gateway — to reach that resource. There's a question here: can the gateway also handle egress? So, actually, for Gateway API I'm not sure it can handle egress. I don't believe it can, if you're just using Gateway API resources — it's more for ingress. In Istio specifically, however, if you deploy the egress gateway, you can do egress functionality. — Yeah, I agree with that. Using Gateway API doesn't mean you don't still have all the Istio resources at your disposal. They keep working; they're just handled by the Istio controller, the Istio control plane. So yes, there's no egress, as far as I know, within the Gateway API spec, but because you're using controllers like Istio, you still have all the power of Istio — you just have to use Istio-specific resources. That's it. — Exactly, that's exactly on point. There's a lot of work still to be done for the Gateway API to handle egress as well. So that's answered. And then another one from Karim: this seems to be a replacement for Kubernetes Ingress, but how is it related to the Istio Gateway as it ties to VirtualService?
So this is interesting, and I'm going to show you a demonstration of it. Istio has this concept of a VirtualService, and it also has functionality to support the HTTPRoute, which is essentially a VirtualService. Now, interestingly enough, yes: while Gateway API is a replacement for the Kubernetes Ingress resource, the Ingress resource will still be available for some additional versions and will eventually become deprecated, ensuring that folks move over to the Gateway API spec. Having said that, the VirtualService is a mechanism in Istio that lets us expose a Kubernetes service. It lets us overload a single gateway in multiple ways, or even expose services to other services inside the mesh in a unique way. The Istio gateway specifically, Karim, ties in through the gateway class: it's an implementation you'd specify in the GatewayClass, and that gateway class can be something else as well — there are different kinds of gateway classes. In fact, I can do a quick search and look at the different gateway classes in the Kubernetes docs; there are a few providers out there that support the gateway class at the moment. Anyway, we'll come back and revisit that. Having said that, let's go take a look at the demo, because it lets us dig into how we can use the Gateway API as well as ambient mesh. So, Alessandro, if you don't mind stopping your screen share for a second, I'll go ahead and share mine. Perfect — let me know if you can see that, Alessandro, and everyone else. Perfect. I'm also going to link to a document — I hope everyone can see the chat, and if not, we'll find another way to share it — that calls out the different implementations of the Gateway API, either from a service mesh standpoint or from other gateway standpoints, so you can better understand who out there provides these capabilities. Istio is one provider, and there are others. In fact, Alessandro, if you want to link to some of them, feel free. — I wish; I was just answering a question. Yes. — Perfect. Okay, so what do I have here? I have a simple Kubernetes cluster — kubectl get nodes, it's kind, nothing special — and I'm also going to enumerate all the pods with -A. We can see a bunch of different pods; I'll briefly explain the more critical ones, because we don't care too much about the rest right now. We have istiod — we're all very familiar with istiod; it's the control plane for the Istio service mesh. Any configuration we pass is understood through the Kubernetes API; istiod processes those configurations and translates them for us, so a lot of the heavy lifting is automatically taken care of. We also have two new artifacts.
In fact, let's get the pods in the istio-system namespace and look at them a little more carefully — let me make this a bit bigger so you can all see. We have one called istio-cni, and we also have one called ztunnel. As Alessandro mentioned, ztunnel is the quote-unquote node-level slash remote proxy that handles requests on behalf of the applications. Meaning: I have a number of applications, and if they need to make a request outbound — I only have one node right now, but if they do — what happens is they actually communicate through this ztunnel, and the ztunnel forms a TLS, or rather mTLS, based tunnel toward another ztunnel, where the traffic to the destination workloads gets terminated. Ideally we'd want multiple nodes to demonstrate this functionality, so you could see some of the other ztunnels pop up, but that's okay for what we want to describe here. We also want to get into a bit of the Gateway API. But notice something very interesting: look at the pods in the test namespace — kubectl get pods -n test — and I'll answer you in a second, Karim, because I want everyone to hear my thoughts on that. If you notice here, I do have Istio running. In fact, if I do a kubectl get ns --show-labels, I do have Istio running. But if you look at the test namespace, I don't yet have a configuration for putting these pods in the mesh.
There's just one simple configuration that puts these pods in the mesh: a label. It basically tells ambient mesh, or istiod, that when it detects a namespace with the ambient label, that namespace is part of the mesh.

What's interesting is, if you think about IPsec for a second: IPsec is an older but still interesting technology, and in a lot of ways we still use it, because we still need to connect nodes together, to connect clusters together. Now, if I want these workloads in the test namespace to be part of the mesh and mTLS-encrypted via the ztunnel, I need to label the namespace with `istio.io/dataplane-mode=ambient`. The moment I do that, and hit the up arrow again, notice that the test namespace now has the ambient label, which means all of these workloads are part of the mesh. Do they have sidecars? No, they don't. So now, any time they make a call to another service in another cluster, another environment, or another node, the traffic traverses the mesh, through the ztunnels. What's interesting is that within a node, I should say specifically, these workloads won't necessarily have to traverse the ztunnel.

All right, let's take a look at exposing some of these services. Actually, that's not what I wanted to do; there we go. I'm sharing the mechanism by which I would expose the web-api resource, and notice that I could have used the Istio ingress gateway for that, but I'm actually using the Gateway API spec. Notice also that the backend is a plain Kubernetes backend, not an Istio backend. But if you also notice here, right?
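As a sketch of the step just described, opting a namespace into ambient mode is a single label (the namespace name here is the `test` namespace from the demo; the label itself is the one named in the talk):

```yaml
# Hypothetical namespace manifest; the label is what opts its pods into ambient mesh.
apiVersion: v1
kind: Namespace
metadata:
  name: test                          # example namespace from the demo
  labels:
    istio.io/dataplane-mode: ambient  # istiod now treats this namespace as mesh-enrolled, no sidecars
```

Equivalently, imperatively: `kubectl label namespace test istio.io/dataplane-mode=ambient`.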
It's being deployed inside the istio-system namespace, and the gateway class name is istio, which means I'm telling Kubernetes to use Istio's ingress gateway to listen for requests coming in to istioexplained.io on port 8080. Now, Kareem asked earlier about the VirtualService resource and its association with the Istio ingress gateway. Well, I'm still using Istio's ingress gateway, except I'm using the Gateway API's Gateway resource, along with another Kubernetes resource called the HTTPRoute, which is almost verbatim the VirtualService. There might be some slight variances around spacing and the ordering of fields, but quite honestly it looks almost the same, right? I have a hostname that I'm listening on, and I have rules that tell me how to match requests. Right now I'm matching any path, but specifically, if I'm listening on this web-api gateway that I specified up here and I need to go to a particular service, I've already defined it here as a route: the service is web-api on port 8080. So the moment I deploy this, I should be able to respond to requests on port 8080 for that host.

So let's go ahead and deploy it. I have this somewhere here — that's been deployed — and I also have my curl. Actually, before I do that, let me echo my Gateway API address just to make sure it's there. Perfect. Then we quickly curl, and this should work — and it did. That's the Gateway API in action. So what we just witnessed: if I did a `kubectl get gateways`, there's nothing there, because I didn't specify the namespace; if I do `kubectl get gateways -n istio-system`, there it is.
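The Gateway just described might look roughly like this; the resource name is an assumption, and the hostname and port are reconstructed from what was spoken in the demo:

```yaml
# Sketch of the demo's Gateway (name assumed, hostname/port as spoken in the talk).
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: web-api-gateway          # assumed name for the demo gateway
  namespace: istio-system        # deployed alongside the Istio control plane
spec:
  gatewayClassName: istio        # ask Istio's ingress gateway to implement this Gateway
  listeners:
  - name: http
    hostname: "istioexplained.io"
    port: 8080
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All                # let routes in other namespaces (e.g. test) attach
```

Swapping `gatewayClassName` is the portability point made later in the talk: the listener definition stays the same while the implementation behind it changes.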
This is a Kubernetes resource, very similar to an Istio Gateway resource, with essentially the same definition: you're listening for requests on a specific host on a specific port, and that's associated with a route; once the route receives those requests, it directs them, wires them, straight to the service in Kubernetes.

Now the other thing we want to do is get the HTTPRoute. There's nothing in the default namespace; if I do `kubectl get httproutes -n test` — I think that's where I deployed it — there it is. There's my HTTPRoute, which in a lot of ways resembles the VirtualService. If I describe it — `kubectl describe httproute web-api -n test` — there we go. That's the web-api HTTPRoute, the quote-unquote VirtualService if you were in the Istio world, in a very similar format: we're using the web-api gateway to listen for requests, and we're directing them to the web-api service.

So that's the Gateway API in action. Now, why is this sustainable, why is this important? Because later on down the line, if you decide — not that I would ever want you to leave us — that something is a little bit different, or maybe you decide you want to migrate to Gloo, Solo's enterprise offering, well, the gateway class can be swapped to Gloo at some point, and that makes transitioning to a much more enterprise-ready solution easier. There are a lot of ways you can think about it; the Gloo Platform offers all of this, by the way. Gloo Gateway also offers the Gateway API functionality, although it is very much in lockstep with open source Envoy at the moment, not Istio.
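The HTTPRoute that plays the VirtualService role might be sketched like this; the route and gateway names are assumptions carried over from the demo narration:

```yaml
# Sketch of the demo's HTTPRoute (names assumed from the talk); fills the VirtualService role.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: web-api
  namespace: test                # lives next to the workloads it routes to
spec:
  parentRefs:
  - name: web-api-gateway        # attach to the Gateway in istio-system (assumed name)
    namespace: istio-system
  hostnames:
  - "istioexplained.io"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /                 # match any path, as in the demo
    backendRefs:
    - name: web-api              # a plain Kubernetes Service, not an Istio-specific backend
      port: 8080
```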
By the way, if you just don't want Istio, there's Gloo Gateway, which does focus on Envoy specifically. But there are ways for you to start thinking about how to onboard the Gateway API into your environment in a very sustainable manner, without having to rewrite and rebuild a whole bunch of stuff. Having said that, I'm going to pass it back to Alessandro so we can summarize, wrap up, and open up for any Q&A.

Thank you, Marino. Yes, there was this CNI question that I answered already; thank you for that. — Oh, actually, sorry, I'd love to answer that too, if you don't mind. I wanted to go into a little more detail about the istio-cni, and I kind of overlooked it. The istio-cni is used in ambient mesh to help redirect traffic, identifying the traffic that needs to be ambient, or mesh-bound, specifically. For example, if service A is trying to communicate with service B on another node and it's mesh-bound, well, the ztunnel carries the request and forms the tunnel, but we have to find a way to steer that traffic toward the tunnel. That is what the istio-cni does: it steers the traffic toward the tunnel so it can be encapsulated and then sent over the quote-unquote wire. It is not a replacement for the CNI that Kubernetes requires to run pods and hand out IP addresses in an IPAM fashion. — Yeah, in fact, CNIs can be chained together, and that's exactly what the istio-cni does: it gets into the chain, and just for the pods that are part of the mesh, it manipulates iptables rules, if that's what you're using, or eBPF programs, if that's what you're using.

Great, so let's conclude. That was a great demo, thank you. To go back, of course, to the choices that we all make: you do have a choice.
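As a rough illustration of the CNI chaining just mentioned (plugin names and fields are simplified and illustrative, not the exact istio-cni configuration), a node's CNI conflist conceptually looks like a list of plugins executed in order, with istio-cni appended after the primary CNI:

```json
{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    { "type": "calico" },
    { "type": "istio-cni" }
  ]
}
```

The first plugin (here a hypothetical `calico` entry) still does IPAM and pod networking as usual; the chained `istio-cni` entry only adds the redirection rules, and only for pods in namespaces labeled into the mesh.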
I'm not saying you have to go out right now and adopt service mesh, or ambient mesh in particular, today. But every choice you make, even not making a choice, even not caring or postponing, is a choice in itself. So we want to say there is a better way to think about this. In fact, there are other projections in which, instead of an asymptotic expansion, the consumption of resources levels out and we learn how to do more with less: don't stop progress and innovation, but still make sure that our kids have a future, and that all the unborn future generations have a great place to live, a clean, efficient planet where we can all live together in peace.

So, how to contribute to open source, and specifically to the TAG Environmental Sustainability: join the meetings, find us on Slack, and bring your ideas. That's all we need; at least, that's where we start from — ideas, contribution, and participation.

There are other projects you should check out, like the Carbon Aware KEDA Operator. We'd love to see it working together with Istio; maybe we will have some more webinars on it in the future. It's a way to understand the carbon intensity of your Kubernetes clusters and do something about it. Very interesting project.

As a community we all love to go to KubeCon, and we have a new initiative for reducing the impact of our travel: kubetrain.io is where you can find more information about it. We all look forward to joining KubeCon in Paris, but without polluting and without flying; Europe allows people to move by train, so have a look.

If you want to know more about ambient, for sure check out the free resources on our website, and join the Istio community sessions that have recently started again. And yes, please join the Istio community.
We all need your contribution. In the Academy, if you want more hands-on experience with this, please take part; you get nice badges to show on your LinkedIn and your profile. We have a Slack, so we always welcome people to come find us there, and also on the CNCF Slack. I want to thank you for being here today, and Marino, of course, and Candice and Samantha.

Yes, thank you everyone for your time today, I really appreciate it. If you have any questions for us, you can find us on Slack at slack.solo.io, and again, a lot of this is available as hands-on labs at academy.solo.io as well. What's interesting is that the little environment I was showing you is using Instruqt; it's the same ambient mesh lab environment that is also available to you, for you to test out how the ztunnel works, how waypoints work, understand the routing behind it, and even work with the Gateway API. There are some other things too, such as Network Foundations, if you need a little more understanding of how TCP/IP, DNS, and HTTP function inside of Kubernetes. But again, thank you so much everyone for your time today. We hope to see you again on another webinar. Take care, see you.

Thank you so much, Alessandro and Marino, for your time today, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day.