All right, I think it's time for us to get started. I'd like to thank everyone who is joining us today. Welcome to today's CNCF webinar, "Using Envoy Proxy as Your Gateway to Service Mesh." I'm Mario Loria, a senior DevOps engineer at StockX and a Cloud Native Ambassador, and I'll be moderating the webinar today. I want to welcome our presenter, Christian Posta, who's going to be doing most of the work. He is the Global Field CTO at solo.io. Just a couple of housekeeping items and we'll be on our way. During the webinar, you're not able to talk; we have everybody muted. There's a Q&A box at the bottom of your screen. Please feel free to drop questions in there and we will get to as many as we can, both throughout and mostly at the end of the presentation. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that could be in violation of the Code of Conduct, and please be respectful to all of your fellow participants and presenters. With that, I'll hand it over to Christian to kick off today's presentation. Thank you very much. Well, let's get going. So we're going to be talking about Envoy proxy and service mesh in this webinar. These are two technologies that have emerged out of the cloud native ecosystem in the last few years, and they promise some powerful features and capabilities that should help us. In this webinar we're going to look at some practical approaches to adopting this technology, and raise some questions and things to be thinking about as you go down this path yourself. So my name is Christian. I'm a Field CTO at solo.io.
I've spent a lot of time working with enterprises on modernizing their application infrastructure, on building toward microservices architectures, and overall on becoming more successful at delivering code faster, learning from your modernization efforts, and learning from your customers once you get those offerings out there. I've written a few books; I'll plug one of them in a second. I've been involved in open source for a long time now. I've spent time at Red Hat, at large banks, and at internet companies who, 10 years ago, were using this technology to build their services architectures and are now the poster children for the movement. Although you can't speak right now, like Mario mentioned, the lines are muted, do reach out to me at any point after the webinar. If you'd like to discuss, or debate, or compliment, reach out to me anytime. So I wrote the first book on Istio, which is a service mesh, almost a couple of years ago now, and currently I'm writing Istio in Action for Manning. That book is currently in an early access preview. In full transparency, it stalled a little bit, but I've just recently added a co-author; we'll announce that hopefully next week, and I'm hoping to push to get the book finished. So why might you be interested in these topics? You probably have some experience with technology that connects applications. But we're going from monolithic deployment and application architectures onto cloud infrastructure, and by that I mean infrastructure that is ephemeral, that can elastically scale, and that can be provisioned on demand. When we start to build our applications to take advantage of these infrastructures, we have to think differently about how we build the applications and what the architecture looks like.
And whether you're on premises or in the public cloud, you want to find technology that allows you to solve some of these challenges around ephemerality, scaling, and so forth. You might, and probably do, already have existing API infrastructure, whether that's API management, enterprise service buses, messaging queues, and so forth. These are the things we've used in the past to connect our applications. You might be interested in this because you're in the middle of a cloud infrastructure adoption, whether that's containers, public cloud, and so forth. You're finding it challenging in a heterogeneous environment to get insight and collect telemetry about what's happening between the applications. As services communicate more over these cloud networks, there's more complexity in that interaction, and we need to be able to observe it and understand it. And we don't all get to just say we're moving everything to containers and to cloud as a nice greenfield project. We have existing investments, we have existing infrastructure that's not cloud, and we need to solve the problem of some of these applications living in multiple different deployment targets. And ultimately, we can't compromise our security posture by adopting these new technologies. The organizations I've met with over the few years now that I've been helping folks with service mesh find that their current deployment architecture looks something like this: they started off with an API management solution. They were going to build APIs, expose them to their partners, build this API economy, and so forth. That went slower than they thought, so they decided to build the APIs internally and expose those between their business units.
And so we need to solve for things like connectivity problems, security, rate limiting, and telemetry collection. What they ended up doing was forcing everything through these centralized gateways. Now, partly because of that centralization, folks are saying: we need something that lets us go fast, something that is a little bit more cloud friendly. Oh, I heard about this service mesh thing, that sounds really interesting, let's just go all the way. We're going to try to get rid of the centralization and go all the way over to something that looks more decentralized, where a service proxy, the data plane, gets embedded with the applications, and the applications talk to each other through these proxies point to point. So we're not going through a centralized bus or a centralized hub; the applications are interacting and talking with each other directly. Now, this is a reasonable approach, but it doesn't come without its own challenges. So, some of the challenges of adopting a service mesh: well, do you really need one? Let's start with that. You hear about this service mesh thing; your existing API infrastructure might be too centralized, it goes too slow, it's not cloud friendly enough, so you're going to do a service mesh. But do you really need one? A service mesh is complex itself, and oftentimes you're layering it on top of cloud infrastructure that you may not have fully mastered. So you have to ask yourself some questions before going down the path of a service mesh. Will you realize more value from solving these problems than the pain you'll incur; is it worth it? That tends to be the case when you have large deployments, a lot of services, a lot of service-to-service communication, and a mix of different languages and frameworks.
You're struggling to consistently implement observability at the application layer. And of course, like I said, if you're going to adopt this next level of complexity, make sure that you've squeezed everything out of the existing cloud infrastructure that you've put in place. Some more challenges of adopting a service mesh: right now each individual vendor is trying to build the service mesh that will win. So which one do you choose? You don't want to pick the wrong one. Who's going to support it? It's very difficult to manage multiple clusters of these things once you start to implement them. They're not always as transparent to your application as you'd like, or as the creator of the service mesh would like. And once you get it into your environment, who's going to run it? Who's going to own it? What are the processes on top of it going to look like? That's a lot to tackle, and, similarly to Kubernetes and containers, it will bring a lot of people to the table, so adoption of this technology will not happen very quickly; it will be slower, methodical, and thought out, and everyone will have their opinion. So how do we get there? If we accept the premise that you do have these problems, and that a service mesh, in a decentralized way, nicely solves some of them, then how do we get there? The approach that I've been advocating for the last few years, and that we've seen be successful, is to start small and incrementally adopt new features. Start with a minimal piece of the service mesh, master that, and then go from there. The approach that seems to be fairly comfortable for most people is starting with something they already understand: a gateway, a gateway that is suitable for cloud native applications and cloud infrastructure.
A service mesh, as we saw in the previous slide, is made up of many of these proxies that end up talking with each other. That component, that proxy, is very important; all the requests flow through it. So when you start, you don't start with 100 of these proxies. You start with one or two, and you start to understand the technology that makes up that data plane proxy: how to operate it, how to debug it, how to pull the logs from it, and so forth. And starting with a gateway approach at the edge or boundary of your architecture, again, like I said, is something people are familiar with. Once you become successful with this data plane technology, then you can start to roll it out into multiple parts of your application. Now, this starts to get at the notable differences between the traffic coming into our cluster, which is at the edge and is what we're talking about here, starting with the edge gateway, versus what the service mesh will ultimately help you with, which is the East-West, or service-to-service, traffic concerns. So if we look at it through that lens: if we start with the edge gateway, the problems we'll need to solve are how to get potentially untrusted traffic into our cluster, and how to do that in a way that complements any service mesh technology we might use for East-West. Now, if you look at some of the functionality in a service mesh, things like traffic control and traffic routing, and some of the resiliency aspects like connection handling, load balancing, service discovery, circuit breaking, and so forth, there is some overlap in the problems that you need to solve at both the edge and within the East-West, service-to-service, communication. And so from that perspective, the North-South and East-West distinction for some of these challenges will overlap, right?
And so we would expect the technology that we pick for the edge and for our service mesh to be complementary. We know there are going to be overlaps; we want them to be complementary. However, there are things that you need to solve at the edge, for traffic coming into the cluster, that you may not need to solve in East-West, service-to-service communication. So it's a superset of problems at the edge, at the boundary of our cluster or deployment unit or architecture, however we want to describe it. And we'll look at some of these deployment and architectural patterns, and where to draw the boundaries, in just a little bit. But the concerns at the edge are things like various security, authentication, and authorization components; things like a web application firewall; maybe some very specific security plugins to tie in and be backwards compatible with your existing investments. These are things that the East-West traffic within your cluster might not need. So that's where Envoy comes in. We want to go to a decentralized application networking architecture. We want to start iteratively, maybe at the edge. We want technology that is complementary to both the North-South and East-West traffic. And we know at the edge we're going to need to solve problems that we might not need to in the East-West space. Envoy fits this problem space very well. Envoy has been talked about a lot recently. It has become very popular for a multitude of reasons, including the fact that it's been adopted by some large web companies and is used at scale in large production use cases. And I think most importantly, the community behind it has grown very vibrant and very diverse, and the project is very welcoming of changes and so forth. So we have a nice, thriving community.
Envoy is an implementation of a Layer 7 proxy that understands how to collect telemetry, and it can help with connection load balancing, service discovery, traffic control, traffic routing, and so forth. These are features that we need whether we're in a service mesh, talking to other services, or at the edge. A logical diagram could look like this: you have Envoy proxy as a mediator for your traffic, and you can do interesting things like route traffic between your monoliths or your various backend services, and very finely control the traffic based on versions, based on various other headers, based potentially even on the body of the message. So you can do some very fine-grained application routing within Envoy. It's very important that Envoy collects this telemetry so that you can understand how many requests are coming through, how many failures, how many circuit-breaking events, how many connections are closing, and so forth. Now, especially if you start to think about Envoy at the edge, at the boundary of your architecture, you start to think about decoupling what's running on the upstream side, the right-hand side of the proxy, from who's calling into your architecture. You might want to expose services as JSON over HTTP, or REST, while your internal services might be talking gRPC, so you need some kind of bridge there. Or the API shape that you expose to users outside of your boundary might be different from the APIs actually being used upstream. So you need that decoupling point. And for the ability to take Envoy and run it at the edge to give you API decoupling along with traffic shifting, telemetry collection, distributed tracing, and so on, we at Solo.io built an open source product called Gloo.
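As an aside, the kind of fine-grained, header-based routing just described looks roughly like this in an Envoy route configuration. This is a hedged sketch: the cluster names (`reviews_v1`, `reviews_v2`) and the `x-version` header are made up for illustration, and exact field names vary between Envoy API versions:

```yaml
# Sketch: requests to /reviews carrying the header x-version: v2
# are routed to the reviews_v2 cluster; everything else falls
# through to reviews_v1. Names are hypothetical.
route_config:
  name: local_route
  virtual_hosts:
  - name: backend
    domains: ["*"]
    routes:
    - match:
        prefix: "/reviews"
        headers:
        - name: x-version
          exact_match: "v2"
      route:
        cluster: reviews_v2
    - match:
        prefix: "/reviews"
      route:
        cluster: reviews_v1
```

Because routes are matched in order, the more specific header match has to come before the catch-all prefix route.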
Gloo is an edge gateway, or API gateway, built on Envoy. It runs natively in Kubernetes and implements this API decoupling, these transformations of your requests. We saw that slide earlier with the capabilities for solving challenges at the edge: you might need OAuth, custom authorization, caching, rate limiting, the set of things you would expect from an edge proxy. Gloo provides a packaging of capabilities that solve these out of the box. Gloo is basically the control plane for Envoy. The control plane was built very specifically to be extensible, to be able to plug in additional capabilities. We didn't try to build one control plane to rule them all, but we knew, coming from our enterprise background, that if somebody is going to take Envoy and run it in their environment, they're going to need a control plane, and they're going to have a bespoke, typical enterprise environment that doesn't nicely fit the way somebody else imagined the control plane to be. So what we said was: everything is going to be plugins. The control plane is basically a loop that runs through various plugins and ultimately derives and builds the configuration that Envoy needs, leveraging Envoy's xDS, its dynamic APIs, for configuring the proxy at runtime. Gloo supports running in Kubernetes, and it supports running outside of Kubernetes using Consul as a backend. It can discover services that have been registered in any type of service registration catalog and feed those endpoints into Envoy. So Gloo as an edge gateway is very complementary to service mesh, and we've helped people adopt service mesh incrementally by going through Gloo, by going through a single Envoy proxy, or a small subset of Envoy proxies, that provides real value and a stepping stone to service mesh, and ultimately complements the service mesh once it gets into your architecture. So with that, let me jump to a quick demo.
I'm gonna show you what Gloo kind of looks like. And then in the next section, when we get back to it: all right, if we're gonna start with a gateway, how does that architecture change when we start to grow it? Because one thing we don't want is to end up back in the centralized gateway model we had before; what we want is a path to growing this architecture while remaining complementary to a service mesh. So that'll be the next section, but let's jump into a very quick demo here, if the demo gods are cooperative. The first thing we're gonna take a look at is an existing set of services running in our Kubernetes cluster. You might be familiar with some of these components; they come from the Istio Bookinfo demo. We see the product page, which is the edge part of the application, the UI part, and it calls back into these various services to serve the product page. We've also deployed Gloo, and Gloo, like I said, is built on Envoy; it uses the Envoy proxy and builds an extensible control plane around it. And just to show: the proxy's here. If we do kubectl and take a look at the proxy, and then look at what's actually running in that pod, we can see that, yes, indeed, this is Envoy, and we passed it a config. If we take a look at that config real quick, we can see we have an envoy.yaml there. This is Envoy's configuration, in YAML format. The important parts are right here. What we're saying is: for all of the listeners, clusters, endpoints, routes, and so forth, get them from the control plane, and in this case that is the Gloo control plane. So from here on, we'll take a look at using the proxy itself.
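The envoy.yaml described here follows the standard Envoy bootstrap pattern: a small static section that tells the proxy where the control plane lives, with listeners, clusters, endpoints, and routes all fetched dynamically over xDS. A rough sketch, where the `gloo` address and port are illustrative and field names vary between Envoy versions:

```yaml
# Minimal Envoy bootstrap sketch: only the xDS cluster is static;
# everything else is delivered by the control plane at runtime.
node:
  id: gateway-proxy
  cluster: gateway
static_resources:
  clusters:
  - name: xds_cluster              # points at the control plane
    connect_timeout: 5s
    type: STRICT_DNS
    http2_protocol_options: {}     # xDS is served over gRPC
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: gloo, port_value: 9977 }
dynamic_resources:
  ads_config:
    api_type: GRPC
    grpc_services:
    - envoy_grpc: { cluster_name: xds_cluster }
  lds_config: { ads: {} }          # listeners from the control plane
  cds_config: { ads: {} }          # clusters from the control plane
```

This is what lets the control plane reconfigure the proxy at runtime without restarts.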
Gloo has a nice management UI for Envoy and for this system. We can take a look at, well, first, we'll come back to this; it looks like an error. We can look at the upstreams. In Gloo, we follow the Envoy terminology: upstreams are the services that we can route traffic to. An additional capability of Gloo is that it can automatically go and discover these upstreams. In this case, we're looking in Kubernetes, but it could be a different service discovery registry. Maybe you're using Consul. Maybe you're writing your applications on Amazon and want to pull them from EC2 or Lambda; you can even pull those services directly from Lambda and route traffic to them. We'll click on overview. We can see that Envoy has a configuration error. We click on that, and we can see that Gloo is complaining that there aren't any route definitions for the proxy yet. What we're going to do is add a route to the product page application. This is one of the ones we saw earlier. We want to use Gloo as the edge gateway to route traffic into the product page so that we can actually see it. Now, a couple of things to notice. There is glooctl, a convenience CLI that you can use to make changes to the routing and to Gloo's configuration in general. All of this configuration lives in Kubernetes. If we do kubectl get virtualservice in the gloo-system namespace, we can see that these are all Kubernetes CRDs. So we could use the CRDs directly, but in this particular example, we're using the CLI. We're also going to use the CLI to quickly find the URL to go to. So if we come over here and hit that, we can see that we get to the product page UI as a normal user. And if you're familiar at all with Istio and some of the demos done around the Bookinfo application, you'll recognize this. We don't have Istio installed right now, but we do have Gloo installed.
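The route being added in the demo corresponds to a VirtualService CRD roughly like the following. This is a sketch: the upstream name mimics what Gloo's discovery typically generates for the Bookinfo product page service, and exact field names vary across Gloo versions:

```yaml
# Sketch of a Gloo VirtualService: route all inbound traffic to the
# discovered product page upstream. Names are illustrative.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains: ["*"]            # match any host header
    routes:
    - matchers:
      - prefix: /             # send everything ...
      routeAction:
        single:
          upstream:           # ... to the product page upstream
            name: default-productpage-9080
            namespace: gloo-system
```

Because this is just a CRD, it can equally be applied with kubectl, generated by glooctl, or committed to Git for an audited workflow.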
We're able to route traffic into the cluster using Gloo. If we come back here and click over here, we should see that our configuration is a little bit happier, because now the port is open and we have actual traffic routing rules we can use. Now, that's all well and good. But for running an application at the edge, some of the things we need to solve for are: hey, we don't know the users on the other side of this boundary. We need to challenge them; we need to add some security plus some authorization rules, maybe using OPA, the Open Policy Agent, to do that. What we're going to do here is configure Gloo to say: if you're trying to reach the product page application, then first authenticate and authorize yourself. Let's go through an OIDC flow, and once you're authenticated, then you can get through. And again, this is through CRDs, so you can build the configs and implement them as part of a Git-style workflow, for auditing and so forth. So we're going to apply this YAML, and we see that it has been configured. We'll go back and get that URL. Now, if we go to that, we should be challenged by the OAuth provider we configured. If I log in with my account, we're taken to our Bookinfo page. So this is just setting up the boundary of our system, augmenting it with capabilities that you would expect at the edge. Again, this is a stepping stone to understanding Envoy, operating Envoy, in a familiar model. And then from here, we can plug in Istio, or we can plug in Linkerd, and Gloo will play nicely with those service mesh deployments. So let's come back to our slides. Now, there are a few ways in this crawl, walk, run approach to architect, or to deploy, these gateways.
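For a sense of what the OIDC YAML applied in the demo might contain: Gloo's external-auth feature can be configured through an AuthConfig-style resource that a VirtualService then references. Everything below is illustrative only, not the exact config from the demo; the issuer, client ID, secret reference, and even the CRD shape vary by Gloo version and edition:

```yaml
# Illustrative sketch: ask the gateway to run an OIDC
# authorization-code flow before admitting traffic.
# All values are placeholders.
apiVersion: enterprise.gloo.solo.io/v1
kind: AuthConfig
metadata:
  name: oidc-auth
  namespace: gloo-system
spec:
  configs:
  - oauth2:
      oidcAuthorizationCode:
        appUrl: http://bookinfo.example.com   # where users land after login
        callbackPath: /callback               # redirect URI path
        clientId: bookinfo-client
        clientSecretRef:
          name: oidc-client-secret
          namespace: gloo-system
        issuerUrl: https://accounts.example.com/
```

The point is less the exact fields than the model: edge concerns like authentication are declared as Kubernetes resources, so they fit the same GitOps workflow as the routing config.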
And the approach that we've seen be successful is starting with a single gateway, a logical gateway, right? Not just one instance; you would probably deploy a fleet of these, but a single layer, a single tier of gateway that provides a stepping stone. Think of this as a Kubernetes cluster, though it doesn't have to be; it's just a generic boundary. You can use Envoy or Gloo to allow traffic to come in; you have traffic routing, telemetry collection, distributed tracing, and so forth to get the traffic to our services. So we start with a single proxy and become familiar with it. Now, the next few slides are operational and context dependent, but you may find as you scale out that these various backend services have different SLAs, different isolation requirements, different load requirements, and that a multi-tier setup will work more advantageously. So you might say that on the far left of this diagram are the tier-one services, and we want to isolate them from the rest. So we give them their own group of proxies that handle the tier-one services. And at the edge, the first layer, all that proxy does is very simple L4 handling, or perhaps also some very simple L7 traffic rules and routing, to get to the next level, which is smarter and provides that isolation. Another thing we see is that when you start to build boundaries, or products, or domains out of your service architecture, as we grow the architecture, we might want to push these proxies closer to that application boundary and let them serve as a decoupling point between the services outside the boundary, outside the domain, and those running inside it.
So if you're familiar at all with the domain-driven design concept of a bounded context, that applies here: you need some sort of decoupling or anti-corruption layer between the way your services inside the boundary see and interact with each other and how they interact with the outside world. So this is a common approach to incrementally introducing Envoy to services as groups. Now, as you push this farther down, as your application architecture gets bigger and bigger, the thinking is: well, if you just use a service mesh, everything can talk to everything and you solve some of these problems, but you create a new problem, right? You create an architecture that's very difficult to understand, with point-to-point connections throughout. There are ways to mitigate that: by understanding what these boundaries are, drawing the lines around them, and forcing the services within these boundaries to communicate through the various decoupling points in the architecture. So now you might have a domain that is the account set of services, and a domain that is the claims set of services, and they interact internally with each other, maybe using a service mesh for that, but when they cross boundaries, they go through these gateways. Again, like I said, this is fairly context specific. Depending on how big your architecture is, and how messy you're willing to let it get, you might want to put some structure into your application architecture, and these gateways help do that. And then as you grow and adopt your service mesh for those point-to-point communications, the rest of your architecture can make use of it. So, surprisingly, I usually end up talking a lot more, but it seems like we're quite a bit ahead of time.
I do want to leave you with some links here. Actually, let's go to this slide real quick. So, I work for a company called Solo.io. Solo is a startup that works closely with our customers and with the community to help them adopt this service mesh technology. And we believe starting with a gateway is a very safe and practical approach. Then, getting into a service mesh, you will find yourself needing to manage the complexity of a multi-cluster service mesh deployment, and that's where we have tools like Service Mesh Hub, which helps with multi-cluster or multi-control-plane service mesh deployments, irrespective of the mesh that you choose. We also see fragmentation in the ecosystem and the industry: there are new service mesh projects starting up every other week. Some have captured a lot of mind share, some are growing, and so forth. Right now we're not trying to pick a winner. Pick the one that you feel comfortable with, explore a few of them, but whichever one it is, we will be able to help you manage it once you try to operationalize that service mesh. And once you have a service mesh in place, that's what we believe enables a wealth of power in your microservices and cloud architecture. We don't adopt Kubernetes, and we don't adopt service mesh, just because it's the latest thing. We adopt it because it enables these APIs. It gives us APIs that we can use to manage our deployments, manage our traffic, control it, build canary automation on top of it, build chaos experimentation on top of it, and make it safer to bring changes out into production.
And so we built tools like Squash, which is a debugger; Autopilot, which is a service mesh operator framework; and Gloo Shot, which is a chaos experimentation framework on top of service mesh, to solve those more real problems of how we make changes in our application architecture safely, find issues before they happen, and react to the system as it evolves. So we're looking at this from that end game: we want to help people deploy applications more safely, and with the emerging technology that's coming into place, we want to help people be successful. All right, so now I'll leave you with some links to some of the open source projects we're working on here at Solo. I definitely thank you for your time; reach out to us at Solo, or to me directly if you like, and I believe we will have time for your questions. Yeah, thanks, Christian. And like Christian mentioned, we do a lot of stuff in open source, and it's all using, building with, and extending many of the projects we love in the CNCF. So we invite you to come play with us and check it out. There are a couple of questions here that have come up. A couple of different folks asked: when would you use Envoy versus Istio? And I think the underlying question is: what's Envoy, what's the service mesh, and how are they different? We have a few of those, so do you mind recapping that? Yeah, definitely. Let me give you the basic answer to when you would use Envoy versus Istio. We did touch a little bit on that here, but I realize some of this is a little confusing. Envoy itself is the proxy technology: a request comes into Envoy, Envoy goes through its routing table and so forth, and then it sends the request out, right?
Istio uses Envoy; Istio provides a control plane for managing Envoy when it's deployed in a service mesh. And in a service mesh, so if we come up, I think it's this slide, in a service mesh we have an Envoy proxy deployed with each application instance, and application A talks to application B through this proxy. So in this case, we have lots and lots of proxies, because we'll probably have lots of application instances, and what Istio does is provide the control plane for managing those lots and lots of proxies across lots of different application instances. Now, if the question is more about when to use Istio, and less about the distinction between the two, I was hoping to cover that here, and the question really is: when do I use a service mesh? Istio is one implementation of a service mesh. There are others. Linkerd is becoming more popular; it's been around as a brand for quite a while and kind of led the initial charge around this community. Consul, from HashiCorp: they've evolved Consul from being a service discovery registry toward falling into this realm of service mesh as well. We have cloud providers too: Amazon has something called App Mesh. App Mesh is built on Envoy. Consul is built on Envoy, Istio is built on Envoy. The folks at Kong, I believe, are building their service mesh on Envoy, and so forth. So Envoy has a large percentage of the market share. You'll see this question, when do I use one or the other, but the service meshes themselves, a lot of them, are using Envoy.
And so if you're gonna ask when to use one proxy versus a service mesh deployment, then you kind of have to ask yourself these questions, and you'll find that a crawl, walk, run approach, starting small and growing from there, is going to be more successful than what we've seen with some of our users in the community who try to bring Istio in and find it ends up being a lot more complicated up front than getting started smaller. Great. Another question here, coming back to architecture: when you're looking at ingress and gateways, what would drive the decision to have one versus multiple when you're setting up a gateway for your environment? So, one versus multiple, like in this type of architecture? I guess I could answer the question in two ways, right? Are you doing a multi-tier kind of gateway, or do you have a single environment but are trying to have failover or HA? Yeah, so you would do a multi-tier gateway architecture when, and the tiers don't, I mean, I illustrated this here assuming Envoy throughout. What's likely, at least at some of the larger organizations that I've worked with, is that there will be a tier outside of the application level. There may be a hardware load balancer or something farther up that manages traffic at a data center level. And then once traffic gets into the application deployments, you might want more L7 control, more control over how requests get routed into the applications, for example a canary-style release where you have just a percentage of traffic going to certain versions of the application. So in this case, with Envoy at this particular level, we're able to do very fine-grained request-level routing.
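A canary split like the one just described maps to Envoy's weighted clusters. This is a hedged sketch: the cluster names are hypothetical, and exact field names vary by Envoy API version:

```yaml
# Sketch of canary traffic splitting in an Envoy route: 95% of
# matching requests go to v1 of the app, 5% to the v2 canary.
routes:
- match:
    prefix: /
  route:
    weighted_clusters:
      clusters:
      - name: app_v1
        weight: 95
      - name: app_v2
        weight: 5
```

Shifting the weights over time, driven by the telemetry the proxy itself collects, is the basis for the canary automation mentioned earlier.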
But the reality is you might have a multi-tier architecture where Envoy isn't in all of the tiers. Great. There is a question here on, can you leverage service mesh capabilities in a serverless architecture? So that is an interesting question, and one that is still evolving. We did see it with things like Knative, which leveraged Istio under the covers. Now, there are still some challenges in doing that. If you'll allow me to talk about functions as a service specifically, not serverless as a whole: in a function environment, some of the challenges are that these functions are supposed to come up quickly and then they're meant to go away just as quickly, right? In the service mesh environment, the proxies that live with the instances of your application are expected to be a little longer lived. And so one of the challenges in trying to make that fit is that the Lambda, or the function, comes up and the proxy comes up at the same time. Now the proxy is sitting there waiting to catch up: all right, what is the lay of the land? What are the services I can talk to? What is my configuration? And so forth. That ends up slowing things down quite a bit, and then the function runs, and then the whole thing goes away, right? And then it comes back up and does exactly the same thing. It's not all that optimal right now, but I think what we'll see is, as the data plane in the service mesh starts to evolve, with some pieces moving down farther into the operating system and the kernel and so forth, we might see better support for a service mesh living nicely, out of the box, in a functions-as-a-service style environment. Right now I would say that's still emerging. Okay, let's see, I'm scrolling through this. Thanks for all the questions coming in, folks.
There was a follow-up question, going back to Gloo, or a gateway, and a service mesh: can they coexist together? The gateway and the service mesh? Yes, absolutely. And this is probably a good visual of that. The gateway itself solves some problems, like we said, so let's come back over here. It solves some problems that the service mesh might not necessarily need to solve. If we look here, these capabilities, like custom security, web application firewalling, transformations, these types of things, will be useful at the boundary but not within the east-west traffic. And if you have a service mesh in place, you're still going to need to solve for these edge problems. So right there, the gateway and the service mesh are very complementary. In the case of Gloo specifically, we can plug Gloo into Istio, for example. We're not limited to Istio, but as an example, Gloo can participate in Istio's mutual TLS, so that as traffic comes into the cluster and goes through the gateway, we apply these pieces of functionality, but then when we forward the traffic into the rest of the service mesh, the gateway is basically a peer of the mesh, right? It appears as though it's part of the mesh. We can also do things like tie into the mesh's distributed tracing. So a request comes into the gateway, we can generate all of the necessary headers and correlation IDs to kick off the distributed tracing, and pipe that to the same distributed tracing engine that the service mesh uses. Same thing for telemetry collection and so forth. So there's a nice complementary aspect to it, in that the gateway at the edge can provide more functionality at the edge than what the service mesh provides out of the box, and when you do have the service mesh in place, it ties in natively and just looks like part of the mesh. Cool. There's a question here on some of the functionality within Envoy.
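To illustrate the tracing tie-in just described: kicking off and continuing a distributed trace is largely a matter of generating and propagating a well-known set of headers. This is a minimal sketch in Python showing the standard B3/Zipkin-style header set that Envoy uses being copied from an inbound request onto outbound calls; the function name is ours, not part of any library.

```python
# Sketch: propagating Envoy/Zipkin (B3) tracing headers from an inbound
# request to outbound calls, so a gateway and the mesh behind it share
# one distributed trace. These header names are the standard set Envoy
# propagates for Zipkin-style tracing.

TRACE_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
]

def propagate_trace_headers(inbound_headers: dict) -> dict:
    """Copy only the tracing headers onto an outbound request's headers."""
    return {
        name: inbound_headers[name]
        for name in TRACE_HEADERS
        if name in inbound_headers
    }
```

An application (or gateway filter) would call this when constructing each outbound request, so every hop in the call graph reports spans under the same trace ID.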
Specifically: what happens if something like telemetry goes down for two minutes? Does Envoy have the capability to collect that information in the interval and then still have those metrics once telemetry is back up? So, yeah, Envoy does metrics a few different ways. A very common one is using Prometheus. With Prometheus, the Prometheus server is scraping Envoy, saying, hey, give me your telemetry now, and now, and now, right? And it keeps the results it gets in a time series database, and from there you can export to longer-term storage. Inside of Envoy, Envoy is basically keeping a large set of counters, gauges, histograms, these types of things, that Prometheus would then be querying. If Prometheus happens to go down and is not scraping these metrics, those counters are still there, right? Their numbers keep increasing: more requests handled, a higher number of connections, and so forth. When Prometheus comes back up, it will continue to scrape. Prometheus might have a little gap in its time series, but Envoy isn't trying to store samples up in a way that exhausts memory or puts pressure on memory. Envoy is just keeping counters like it always does, whether Prometheus is watching it or not; it's Prometheus that's pulling and storing them. So in that case, you might have a gap in some of your telemetry, but it doesn't affect Envoy. Okay, and one last question. There's a bunch of questions here comparing and contrasting specific features of proxies, because there are lots of folks in here asking questions about, you know, they're using NGINX or HAProxy, and why Envoy?
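To sketch what that scraping looks like in practice: Envoy exposes its counters, gauges, and histograms in Prometheus format on its admin port at `/stats/prometheus`, and a Prometheus scrape config simply pulls from that endpoint on an interval. The target address, port, and interval below are examples.

```yaml
# Sketch: Prometheus scraping Envoy's admin endpoint.
# Envoy serves Prometheus-format stats at /stats/prometheus on its
# admin port (9901 here, as an example). If Prometheus is down for a
# few minutes, Envoy's counters keep incrementing as usual; Prometheus
# just resumes scraping, leaving only a gap in its time series.
scrape_configs:
  - job_name: envoy
    metrics_path: /stats/prometheus
    scrape_interval: 15s
    static_configs:
      - targets: ["envoy-admin.example.internal:9901"]  # example target
```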
So if you can speak to why Envoy is so popular, and what capabilities it brings to a distributed environment, I think that will answer a few different questions that have come in. Sure. So I think there are a couple of big reasons for why Envoy. The first, in my mind, is that the community is very open and very vibrant. It's not owned by one single company. The technology itself was built to be extended, and that's happening even more so now with the support for WebAssembly. So Envoy's popularity, its uptick in growth and mind share, is because of that community. The second part I would say is that Envoy was built from the ground up to be dynamically configured. You don't need hot reloads and that type of thing. Envoy internally is connected to, or looking to connect to, an API which can stream configuration changes to it, and it can update in real time, in an eventually consistent manner. And so, especially in a cloud environment where services are coming and going, endpoints are coming and going, you're scaling up, you're scaling back, things are becoming unhealthy and so forth, your configuration and policy changes need to be dynamic. Those can all be managed in a more central location, out of the runtime and out of the request path, and then those configurations can be pushed to Envoy, and Envoy updates and reacts to those changes in real time. So the dynamic configuration of the proxy makes it very well suited for cloud deployments and dynamic, ephemeral infrastructure. And then the other part is, again, the people: it's the community, it's part of the CNCF, there's not one company that owns it, and it is very extensible.
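As a sketch of that dynamic configuration model: an Envoy bootstrap only needs to statically define how to reach an xDS management server over gRPC; listeners, routes, clusters, and endpoints are then streamed to the proxy and updated in real time, with no hot reload. The node names and management-server address below are placeholders.

```yaml
# Sketch: Envoy bootstrap for dynamic (xDS) configuration over ADS.
# Only the management-server connection is static; everything else
# (listeners, routes, clusters, endpoints) is streamed and can change
# at runtime as services scale, fail, and move.
node:
  id: example-node            # placeholder identifiers
  cluster: example-cluster
dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster
  lds_config:
    ads: {}
  cds_config:
    ads: {}
static_resources:
  clusters:
    - name: xds_cluster        # how to reach the management server
      type: STRICT_DNS
      connect_timeout: 1s
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}   # xDS uses gRPC, which needs HTTP/2
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: xds.example.internal  # placeholder address
                      port_value: 18000
```

Contrast this with proxies that read a static config file at startup: with xDS, the control plane (Istio being one example) sits out of the request path and pushes changes to the fleet of proxies as the environment shifts.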
So I think those are the two really big reasons why Envoy. Awesome. All right, I think that's all the time we have for questions, so I'm gonna pass the ball back to Mario. Awesome. Thank you, Betty. Yep, that's all the time we have for questions. Thank you, everybody, for joining us. Thank you, Christian, for a great presentation, and Betty for handling questions. The webinar recording and slides should be available online later today on the CNCF website. Thank you to everyone who joined us today, and we look forward to seeing you at a future CNCF webinar. Have a fantastic day. Thank you. Thank you, bye.