I think we'll get started. So Dan, thanks for joining us today. I'm not on the website, so Leah, could you tell me how many attendees we have? We typically have around seven to ten eventually join. Yeah, between Dan, Lin, yourself, and me, Hamza and Sushant are also here as well. Sushant from Microsoft, and Hamza, who are you affiliated with? Huawei, very good. So Dan, we record these, so we'll make sure that those who couldn't make it today will be able to listen to the recording, and we might have some follow-up questions from them. This working group is part of the CNCF, and we kind of brought in CNI to begin with as our first order of business. Now we're looking at what areas of cloud native and networking are of interest to the community, and Leah and a few others had brought up Istio a while back, and service meshes in general. So we thought it'd be a great idea to have you guys come on and present to us: what is a service mesh, what is Istio, what are you guys doing? Leah and I, as you guys already know, are familiar with it, but a lot of the community, I don't think, is familiar with it. So we're trying to give you guys the opportunity to present what's going on and what's the future of service meshes. Okey-doke, I am happy to do that. And Louis Ryan is just asking me for info on the meeting, so let me send him an update. Sorry about that. So the correct meeting is, sorry, I have so many windows open, it's difficult. Okay, so the correct meeting is this one. The Zoom number is the same; the meeting ID is the one you use if you dial in. Is that correct? It's showing the same number. Yeah, that's the correct number. And the GitHub page is up to date now. Thank you very much. Okay, we also want to welcome to the call Rui. Rui, where are you signing in from? Sorry, I'm on the East Coast. Oh, very good. Yeah, I didn't find the meeting at first; I went to the CNCF calendar and finally found a Zoom link. Okay. Well, there are fresh details up on that page now, so very good. All right, let me see if I can figure out how, there we go, share my screen. And you tell me, are you seeing what I want you to see? That is, are you seeing an intro to Istio slide? We are. Yeah, sorry. Was that a question? Rui was just saying he saw it as well. Although right now, Dan, actually it went away. We saw it for a little bit and now we don't see it. And now you see it again? Yes, we do. Okay, so what is a service mesh? These are some slides I threw together. By the way, this is a small group; there's no reason to be formal. Please feel free to interrupt with questions at any time. I'm a product manager on the Istio project here at Google. I invited Lin Sun, who is an engineer and master inventor at IBM. That's an official title, an award they give to people who do a lot of inventing, and Lin recently received it; I think that's awesome. I've invited her to the call, and we will probably also get Louis Ryan, one of our senior technical people at Google, to join the call as well. "Product manager" should imply to you that if any tough questions come, I'm going to shunt those off to those guys. And I will start with: what is a service mesh? These are some kind of general intro slides I give. Feel free to let me know if this is rote to everyone on the call and you'd like me to go forward. At the end, I have a feeling we'll probably get to just pure discussion.
But let me tell you what we mean when we say service mesh, just to kind of level the playing field. The first thing about service meshes is that, well, we'll say they automate application network functions. Application network functions we sometimes jokingly refer to as layer-eight network functionality, above even what the application does. Or you could think of it as layer seven and a half if you want to. The goal of a service mesh is to automate a lot of things that people have formerly written code for. A canonical example I always use is retry logic. What do you do when a call fails? This has typically been left to one of two things: either developers write this code themselves (I have certainly, in my own lifetime, done this dozens of times) or it lives in a shared client library. But what happens when a call fails? What do you do? Do you back off? Do you do an exponential backoff? Do you add jitter into that? How many times do you retry? That sort of behavior is the sort of behavior that a service mesh aims to automate so developers no longer need to write that code. And that applies to routing decisions, to circuit breaking, to rate limiting, to a whole host of features; I've got a list later on. But the key part of a service mesh is that it automates those in a transparent and language-independent way. Transparent meaning the developer doesn't have to write any code to use a service mesh. When we first announced Istio, I was at a conference and I overheard some people saying, oh, Google and IBM reinvented the service bus. And the huge difference between a service bus and a service mesh is that a service bus is something you code to. You put messages on it, you use its API, it delivers those messages. A service mesh is transparent: developers place calls to other systems, and the service mesh transparently, and in a language-independent way, takes care of that functionality. And what I mean by language-independent is that you don't need to embed a client library in your source code; you don't need to compile something in. Any time you do that, of course, those things are implemented in a language-dependent fashion. A lot of what we talk about with Istio could be accomplished using existing technologies if you choose a certain language stack. If you're able to tell all your developers, everyone is going to use Java, and here's the library we use for communication, all of this logic could live in that library. The issue is that it would only work for Java. And what we've found is that many companies, either with things they find in open source or building it themselves, are building this sort of functionality, but they're doing it in client libraries. That means they either have to really restrict what their developers use, or they have to deal with variability in their client libraries. This is something we faced internally at Google when we used to try to do this with client libraries: different capabilities in different languages. A service mesh aims, by implementing these features in a sidecar proxy, to remove any language-dependent needs from the developer. The retry policy I mentioned, for example, becomes declarative configuration rather than code, as in the sketch below.
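To make the retry example concrete, here is a minimal sketch of what such a policy can look like as Istio 1.0 routing configuration instead of application code; the service name and the specific values are illustrative, not from the talk:

```yaml
# Hypothetical retry policy for a service named "reviews", expressed
# declaratively so no application code has to implement retries.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    retries:
      attempts: 3        # retry a failed call up to three times
      perTryTimeout: 2s  # give each attempt two seconds before giving up
```

The sidecar enforces this for every caller of the service, in any language, which is exactly the language-independence point above.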
The reason we need service meshes, the reason these are getting a lot of attention right now, is the general move towards distributed applications. Because of containerization, because of what's going on at the CNCF really, with containers, with Kubernetes, people are moving more and more to a microservices-style deployment. It's very agile: they can have small teams delivering functionality and deploying very frequently. That's great for developer productivity. And it's very scalable, and applications are needing to scale more and more as you have more users on applications all the time; the mobile revolution, of course, has contributed to this. And people are more frequently than ever deploying in hybrid environments, where they're using their on-premises data centers, maybe multiple data centers, a cloud provider, or multiple cloud providers, with applications that are physically running in multiple places simultaneously. All of these trends, containerization, microservices, and hybrid and multi-cloud deployments, have meant that applications are more distributed. Now that applications are more distributed, the game has changed in what you need to do to monitor an application, to see what's happening in it, and to control a rollout and safely deploy a new component or an entire new version of an application. And finally, security has changed significantly from the days when an app was a monolith and you had to secure and expose one IP and one port and that was it. Now you have an IP and a port exposed to the world, but you also have tons of communication happening within that application, and all of that needs to be monitored, secured, and managed as well. The goal in separating out that service mesh functionality is to keep your application separate from the infrastructure. There was a time, when I started my career doing lots of communications software, when you were in charge of everything. This was really before the internet and TCP/IP took over the world; it was certainly before everyone was using web APIs and HTTP and JSON. Adding all that networking capability into developer tools has made developers much more productive and let them write much less networking code inside their apps. What we're trying to do now is put even more into the infrastructure, so developers focus on building the applications that are relevant to their business. It also decouples operations from development. Too often people are putting into source code, into development, logic that should actually be changeable at runtime, at operations time. Doing a canary deployment is a perfect example. We have certainly seen people who had logic in their source code that says, hey, do I place this call to the regular service or to the canary version of the service? That code doesn't belong in your application, because then you're changing the application in order to change that behavior. It should live somewhere else, so that even if your developers are doing your ops and running it, you don't have to change the source code of your application, where the business logic should live, in order to change those things. So that's what a service mesh is. Now let me talk a little bit about what Istio is and how we've chosen to implement the service mesh. It is a service mesh, but more than that, it's an open services platform, and I'll get into what I mean by a platform later: the APIs that people can integrate with.
And its goal is to manage all service interactions in an enterprise. We really do mean all. We started with Kubernetes. We think that most of the development happening in the world today is happening in a cloud-native fashion, on container-based workloads. However, we also understand that most of the software that exists today is still running in VMs. And when we talk to potential adopters of Istio, certainly people who are doing development in Kubernetes say, yes, I want some of that. I want to understand my traffic patterns better, and I want to be able to control a rollout and secure it. However, those workloads I'm running in Kubernetes are also communicating with other things, things I developed ten years ago; I need to monitor, secure, and control that traffic as well. So Istio has a strong goal of being able to handle VM-based workloads as well as container-based workloads. There are three main things at the core of what Istio gives you: observability, control, and security. By observability, I mean what's going on in this application. An application, as I said before, is now a complex thing, often many, many services; it could be dozens, it could be hundreds, depending on who you are, how long you've been doing this, and how big your app is. Istio has a general goal of helping you understand your application. The first thing it does, and this may seem like table stakes, but a lot of people just don't have this, is collect metrics; it collects telemetry from every service deployed on the service mesh. And it starts with what we call at Google the golden signals. If you've read the Google SRE book, you know what these are. The first three are traffic for a service, error rates, and latency at various percentiles. Istio grabs all of those and sends them to the system of your choice. That lets you monitor, essentially, service level indicators for every service. Too often, every service someone brings up in an organization decides on its own what its SLIs will be. There's no uniformity there; it's very difficult; you can't build a standard dashboard. So we believe very strongly, and sure, you can monitor app-specific metrics if you want to, that's great, but you should start with a uniform set of golden signals. We also collect logs and traces from your apps. We believe in collecting logs on every call. Sometimes people ask us, so what percentage of the calls do you log? We believe very strongly in logging all of them. We don't keep logs around for very long here at Google, but we believe strongly in logging them, and in having tracing built into your app so that you can run tracing on a representative sample of those. Understanding the state of your service, you know, signals are important for that. Understanding what happened when something goes wrong, you need to understand why; you need to root out latency problems. Being able to have those logs and traces is very important. Finally, clearly mapping service interdependencies. This is something a lot of people don't have, and I think the thing that first drove people to Istio was the ability to easily understand where their traffic is coming from. For any given microservice, what are its traffic levels, and who's sending that traffic? That was a bit of a mystery to people unless they did a bunch of work within Kubernetes. So we help you understand this.
And we do all this at the service level: not how many bits are flowing where, but rather how many calls are coming from service A to service B, or from service C to service B; you view your traffic and your dependencies at the level of a service. The second big feature Istio gives you is control, and basically that is control over how the network operates. What happens when a call fails? What is the retry policy? By taking advantage of what's in Envoy and being able to program Envoy, Istio lets you control things like your retry policy and your circuit breaker. It also gives you rich routing and client-side load balancing, which is a big deal. The ability to run an experiment and say, all traffic from internal sources goes to the new version of a service, or all traffic with an auth token that was generated for one of our users, is a big deal. The ability to do a safe canary rollout and say, let's direct 1% of our traffic to the new version of the service and monitor telemetry. Canarying is a really big deal; people who are sophisticated in microservices would never dream of doing a rollout of a service without having a good canary capability, and yet Kubernetes doesn't give you a good canary. And then, finally, there are things like applying access control and rate limiting to services. The last is security, and Istio offers the ability to add mTLS on your inter-service traffic. mTLS means every service gets an identity, and every call placed from that service will carry cryptographically secure tokens that indicate who's placing that call. Once you have a strong identity on a call, you can then have an authorization policy to say, do I trust this? We hope that in the future people move from IP-based security models to service-identity-based security models. We've done this at Google; we think it both leads to a more secure network and a network that's easier to administer and operate. Our architecture is that we install a couple of components, Pilot and Citadel, that push information out to sidecar proxies. Citadel issues certs so that we can do TLS. Pilot pushes configuration data: it pushes all the routing rules and it interacts with service registries. And of course we do use a sidecar proxy. We've done a lot of work with Matt Klein from Lyft on the Envoy proxy. Our team was very happy to find the Envoy proxy when we started this project, and they were happy to join us and IBM in letting us use their proxy. We do contribute upstream; we have a bunch of people who actually work on Envoy itself. It sits in the data plane. Then we have a component called Mixer, and Mixer is really where, as I said, we think of Istio as a platform; Mixer is where other people can easily integrate. So Pilot pushes this configuration around. It does integrate with service registries besides Kubernetes: if people are doing VM-based workloads and they're using Consul or Eureka, Pilot can integrate and read state from Consul and Eureka so that the service mesh knows how to direct traffic to those as well. It pushes out information about the number of instances that are running, so that it can do client-side load balancing, and it also pushes routing rules. Any time you're doing a canary deployment or want to route certain types of traffic based on the request, it pushes that data down into the client-side proxies, so the client-side proxies know how to route their traffic.
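As a concrete sketch of the kind of routing rule Pilot pushes down for that 1% canary, here is roughly what it looks like in the Istio 1.0 APIs; the service name and version labels are hypothetical:

```yaml
# Hypothetical canary for "myservice": 99% of traffic to v1, 1% to v2.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myservice
spec:
  host: myservice
  subsets:
  - name: v1
    labels:
      version: v1   # matches pods labeled version=v1
  - name: v2
    labels:
      version: v2   # the canary pods
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
  - myservice
  http:
  - route:
    - destination:
        host: myservice
        subset: v1
      weight: 99
    - destination:
        host: myservice
        subset: v2
      weight: 1     # watch the canary's telemetry before shifting more
```

Shifting more traffic to v2 is then just a change to the weights, with no application redeploy.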
Mixer is what makes Istio a platform and not just a service mesh. It is a set of APIs that allow arbitrary telemetry and policy backends to plug in at runtime. Istio is not a storage system. While it collects, as I talked about before, the golden signals, the logs, and the traces, all it's doing is serving as a funnel for these. It puts the proxy in place, it knows when all the calls happen, and it reports those to Mixer. Mixer in turn has an API that allows those data to be sent to arbitrary locations. Out of the box we ship a Prometheus adapter, but lots of vendors have done their own adapters: SolarWinds has an adapter, we've seen Datadog create an adapter, and I know some of the other common APM vendors are working on adapters as well. We have both Zipkin and Jaeger adapters for traces, and there are a couple of logging adapters too. So Istio does not aim to be the storage for any of these, but rather a common API. And the goal is that rather than create client libraries in different languages, rather than have people set up, say, a log scraping and parsing pipeline, which is very common, we place this proxy that all traffic goes through, and it reports all this information in a standard way, so that a vendor can, with an integration into one API, collect data from all the services on a service mesh. And again, that's without involving developers writing and baking in a client library. We do the same thing for policy. Mixer's policy API is used at runtime as a synchronous request to decide if a call can go through. Examples of policy would be: do we accept calls from this IP address, if things are coming externally? It can, of course, be: do we accept calls from that service identity, if the call is coming internally? Other policies are rate limiting policies; see the sketch below. We at Google have an integration with our Apigee API management, which allows us to do things like, hey, this API call has an API key in it; can you check whether that's valid and whether there's enough business quota left to run? So again, the interesting thing for vendors is that with a single API integration, they can now offer their service to every workload on the mesh. I did say policy calls are made synchronously. They are, but we use heavy caching at both the proxy level and the Mixer level to maintain performance. And then we have Citadel. Citadel is a certificate authority that issues certs and handles things like cert rotation, so that all of your calls can optionally be secured. And as I said before, our goal is to let Istio manage your traffic for all workloads, not just Kubernetes-based workloads. We did start with Kubernetes because it was very convenient, and because we can do things like auto-inject the proxy, which makes adoption easy in some cases, but there is a strong goal to support non-Kubernetes-based workloads.
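For flavor, here is a rough sketch of a rate-limiting policy in the Istio 1.0-era Mixer configuration style, using the in-memory quota adapter: a handler, a quota instance, and a rule wiring them together. The names and values are hypothetical, and the exact fields vary by adapter:

```yaml
# Hypothetical rate limit enforced on Mixer's policy (Check) path.
apiVersion: config.istio.io/v1alpha2
kind: memquota            # the in-memory quota adapter (handler)
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 500        # allow 500 requests...
    validDuration: 1s     # ...per second
---
apiVersion: config.istio.io/v1alpha2
kind: quota               # the instance: how a request is counted
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    destination: destination.service | "unknown"  # count per destination
---
apiVersion: config.istio.io/v1alpha2
kind: rule                # wires the instance to the handler
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
```

Because of the caching Dan mentions, most requests are admitted without a synchronous round trip to Mixer.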
So here's where I always ask the audience: how deep do you want me to go? Do you want me to walk through the life of a request in the mesh, or do you want to assume people here kind of understand what that's about, and get to where we are, what features exist, and what's coming? I don't know if Zoom has a poll feature. It probably doesn't, but I'm opening it up to others on the call. Do you guys want to go to the features? Do you want more of the intro part? I think we go to the features. Yeah, okay, I'll share these slides; they do go through what happens when a call is made on the mesh, and you can review them. So let's go to the features, and then we'll open it up for questions, because that's what I really wanted to do and why I made sure we got here. We have reached Istio 1.0; actually, 1.0.1 was released last week. Very happy to announce that, and I'll talk about some of the features and what their state is; they're grouped into these four groups. We are adopting the Kubernetes rubric of evaluating features, which is alpha, beta, and stable. When a feature is in alpha, the feature may change; we're evaluating it, and we're evaluating our implementation of it. When we reach beta, we don't expect the surface of the feature to change anymore, but we're working on hardening. And when we reach stable, we know people are using it in production. So this is where we got to with Istio 1.0, and I hope you can see the color coding there: my stable features are in gray, the green features are in beta, and we do think beta is ready for production use, by the way. And then alpha: start using those, but they may change. You can see that in terms of traffic management, probably most interesting here, we are supporting HTTP/1.1, HTTP/2, gRPC, and plain TCP traffic. One thing that's interesting is you'll see MongoDB has popped up there. I said before that we're very happy with the Envoy community, and one thing that's great is to see database protocols starting to be built into the proxy itself. That opens the possibility of really interesting things. You don't typically think that a proxy sitting between you and a database would be able to do something like tell you how many reads you're doing on the database, and yet when we understand the protocol, we can do things like that. You also don't think about a proxy being able to do things like direct your reads to one instance and your writes to another instance, and yet as Envoy learns more and more about the protocols, it will actually be able to do that. That's a simple thing and not an unusual thing to think about with HTTP, but as the proxy gets more functionality for these protocols, and we think Envoy will, we'll be able to do a lot more. We do support ingress and egress, as for Kubernetes, with our gateway. We can terminate TLS in the gateway; SNI allows that to be for multiple issuers, of course. As you can see, you can do label-based routing and traffic shifting, and there's a bunch of the Envoy features, timeouts and retries, connection pools, outlier detection; all of that is in there too. We are working on custom filters in Envoy, and one of the big outstanding philosophical issues is how much logic we want people to push into the proxy and how much logic should be implemented at Mixer. In terms of observability, we've got Prometheus, as I mentioned before; we have a dashboard in Grafana; and all kinds of things are in process there, as you can see, including SolarWinds, Google (we have a product called Stackdriver), Zipkin, and Jaeger. We're working with the OpenCensus project on observability. And in security, we have a pretty good story now for service-to-service TLS, getting those credentials out to VMs, and allowing you to do incremental mTLS: telling a service, you can accept mTLS, but we have some legacy clients out there, so you don't have to require mTLS. That's important for people to be able to adopt this.
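That incremental mTLS is configured with an authentication policy. A minimal sketch in the Istio 1.0 API, assuming a hypothetical service named myservice:

```yaml
# Permissive mTLS: "myservice" accepts both mTLS and plaintext, so
# mesh clients can use mTLS while legacy clients keep working.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: myservice-permissive
spec:
  targets:
  - name: myservice       # hypothetical service name
  peers:
  - mtls:
      mode: PERMISSIVE    # accept mTLS, but don't require it yet
```

Once all clients are on the mesh, switching the mode to STRICT requires mTLS for that service.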
And we're working on end-user authentication as well: the ability to validate JWTs, and perhaps use an Open Policy Agent policy to describe what you will accept. Some of the things that changed in 1.0: I already talked about permissive mode, or incremental mTLS. That was new; before, mTLS was all or nothing, either your service required it or it didn't, and that made it difficult to adopt Istio gradually. I've talked a little bit about Mixer adapters; we've changed the adapter model, and it made a tremendous improvement. Mixer adapters, which allow a policy agent or a metric or log collecting agent from another system to be integrated, were formerly compiled into Mixer. Now those adapters are out of process, and Mixer implements a gRPC API so that you can deploy new adapters at runtime, without rebuilding and then redeploying Mixer. That's new, and that's very powerful. And finally, we made an improvement to authorization policies: if you are simply using authorization policies like, is service A allowed to call service B, that logic is actually pushed down into the proxy now and doesn't require calling out to Mixer (a sketch of such a policy follows below). Policies that do call Mixer tend to affect you not on your median request, since those are cached pretty heavily, but on, say, your 99th percentile; this improves that, so even at the 99th percentile we won't affect latency on authorization calls. So, and I just checked yesterday, because it's perf season and we've got to know the metrics: 244 people have committed code to the Istio repo. We started this project with IBM, but we've since gotten participation from a lot of people. When people ask how we differ from other service meshes, there are a couple of things. I do talk about our emphasis on the platform, and our desire to have one API that any vendor or backend system can integrate with in order to integrate with service meshes. But I also talk about the community we're building, and the fact that, in addition to IBM, VMware, Cisco, and Red Hat have all now joined us, and much like what happened with orchestration, we think the industry will be well served if everybody consolidates around one. And lots of adapters have been created, including our own. Where is this running in production right now? Well, of course it's open source software, so mostly I don't know; we don't get to find out where people are running it unless they happen to tell us. These are some that have told us and said we can talk about them publicly. The Weather Company was the first large-scale deployment; I think I remember the number, and Lin can correct me: 400,000 requests a second coming from the weather.com site and APIs. Descartes Labs, a Google customer, was the first company we know of that was running it in production; they really needed the observability. But we've seen some big names, like HP and eBay, and some names that maybe you weren't familiar with before but are doing really cool things, like AutoTrader. And again, these are just the ones we know of and can talk about. There are a bunch of others that I haven't been given permission to talk about publicly, and most of the ones running it we probably don't know about at all.
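For the "is service A allowed to call service B" policies just mentioned, Istio 1.0 expresses authorization with RBAC resources, roughly like this sketch; the service and service account names are hypothetical, and it assumes RBAC is enabled for the mesh:

```yaml
# Hypothetical policy: only callers running as service-a's service
# account may issue GETs to service-b. Enforced in the sidecar proxy.
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: service-b-viewer
  namespace: default
spec:
  rules:
  - services: ["service-b.default.svc.cluster.local"]
    methods: ["GET"]
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-service-b-viewer
  namespace: default
spec:
  subjects:
  - user: "cluster.local/ns/default/sa/service-a"  # service-a's identity
  roleRef:
    kind: ServiceRole
    name: service-b-viewer
```

The subject here is the kind of cryptographically verified service identity that mTLS provides, which is why strong identity and authorization go together.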
One of the points of emphasis, and I think this is my last slide, that we've been talking about a lot, is that many companies adopting Istio are not adopting all of the features. mTLS is a perfect example. Not everyone cares about mTLS; some people trust their network. We at Google don't; we think it's a really good idea to secure every service in addition to securing your perimeter. But for a lot of people, that's more than they're willing to take on. And that's an example of à la carte, or incremental, deployment. We see other companies who are using it for traffic management but keeping their existing ingress, because they put a lot of work into their existing ingress. And we have another company I've seen that is using Istio, for now, for ingress only, just to kind of get their feet wet (a sketch of that configuration appears at the end of this section). And I've got a screen cap up here of a feature I think Lin worked on, which is a minimal Istio installation. This, I think, will be a very common one, where you just want to use Pilot. Pilot is a good API for configuring Envoys, and Envoy is starting to get a lot of use around the industry. We're hoping that people actually consolidate around Pilot as the best way to push configuration to your Envoys, and the minimal Istio installation is essentially just giving you Pilot so that you can push that configuration out. If you do want to get involved, we have eight working groups. Our networking group meets every two weeks at 11 a.m. Pacific time; go to istio.io if you want information on that and to join them.
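For reference, the ingress-only adoption pattern mentioned above amounts, in the Istio 1.0 APIs, to a Gateway plus a VirtualService bound to it; a sketch with hypothetical hostnames and service names:

```yaml
# Hypothetical ingress-only use of Istio: expose one HTTP host through
# the default ingress gateway and route it to an in-cluster service.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: web-gateway
spec:
  selector:
    istio: ingressgateway   # Istio's standard ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "app.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web
spec:
  hosts:
  - "app.example.com"
  gateways:
  - web-gateway             # bind to the gateway above, not the mesh
  http:
  - route:
    - destination:
        host: web-frontend  # hypothetical in-cluster service
        port:
          number: 8080
```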
And the other interesting thing I wanted to bring up is one of the concepts that's just getting kicked around right now, and that is: can you accomplish what's going on in Istio without a sidecar? I talked at the beginning about how sidecars are awesome, and sidecars are great because they're language-independent. There is a little bit of a tax in terms of your deployment: you do need to deploy with them, and they do take some CPU and add some latency. Now, Envoy is very efficient, both in terms of the latency it adds and the CPU it needs, and those are both things we know we can improve even from where they are. Right now, under ideal conditions, I think the latest figure I heard was 600 microseconds of latency added at the 95th percentile. So it's getting very efficient. However, there are use cases where microseconds matter, and in many of those use cases, people are using gRPC. So one of the things the gRPC and Istio teams are looking at is, for gRPC, building the logic into the gRPC libraries themselves, giving you the abilities of Istio without putting a proxy in front of it. This is not something that exists today; it's something the teams are kicking around right now. Again, for very, very high-performance use cases, there might be some cases where that's necessary. So those are the slides that I had. With that, why don't we turn this into a discussion? As I said before, Lin is on the call, and I was hoping she would be able to join so that if anyone has questions, they can turn to her instead of stumping me. But do people have any questions? First, Dan, I thank you for your time today and appreciate the overview, and I'll open it up to any questions or discussion anybody wants to have. Going back to the proxyless Istio with gRPC, that's just sort of a thought, or a collection of discussions right now. If that were to play out, what form does it take? Is that service mesh capabilities built into gRPC without necessarily needing the proxy? I guess, sort of all of the capabilities, including those of the proxy. So it would mean gRPC essentially implementing the xDS APIs. We need the ability for Pilot to push the same config that it pushes today to a proxy, telling it how to route calls, directly into the gRPC client libraries. And of course, anyone who's using gRPC is already using client libraries; you need gRPC client libraries to place a gRPC call, so you've already bitten off the question of whether you're going to have a client library. And then it would mean the two runtime calls, essentially Check and Report: gRPC itself would be able to check against Mixer for policy and would be able to report telemetry to Mixer. It is an architecture we've used: we had a system internally at Google called Stubby that essentially did these things. And it really does come down to use cases where microseconds matter. The one thing I want to emphasize is that mostly microseconds don't matter. For most things that most people are implementing, milliseconds matter, and for some people, even tens of milliseconds. Right now, Envoy in ideal conditions adds, as I said, under a millisecond of latency; for almost everybody, in almost every use case, that's great, that's fine. When there are lots of simultaneous connections, it starts adding more latency, getting close to ten milliseconds, and that starts impacting a lot more people, especially if you have lots of internal calls for every external call. Ten milliseconds isn't a lot, but if you've got 100 calls at ten milliseconds each, that does start to add up. So I just want to emphasize that for most cases, the proxy itself is a really good solution. However, people who are in use cases at a traffic level where microseconds matter are likely to be adopting gRPC for those anyway, because it's much, much more efficient than HTTP/1.1 and JSON, and in that case it might make sense. So I think this is an area of R&D right now; this is not an alpha feature, just something our teams are looking into. Yeah, nice. Oh, I see there are questions in the chat; let me look at those. The slides, Nishith: yes, I'm happy to share these slides; I'll provide them to somebody to mail out. The comparison to Linkerd, Conduit, and Consul, and I can see that Lin has already started to answer that. Conduit and Linkerd, they've merged those projects; I think they've said Conduit is Linkerd 2.0, right? That's essentially what they've said, and I think that was good news. The big critique I had heard about Linkerd was around footprint and performance, and they started over with Conduit to make something much more performant. As for Consul, Lin, do you want to answer that? You've got the gist of the answer there, but do you want to give a little more color? Yeah, sure. In fact, we were actually just discussing this with our team yesterday. So you guys know we had this Consul adapter integration with Istio: it would read the service registry from Consul and propagate it to Pilot, so Pilot is aware of the services registered with Consul.
So just recently, I think it was last week, Consul actually released this new thing; they call it Consul and Kubernetes. When you run Consul with Kubernetes, you can have services propagated automatically from Consul to Kubernetes and from Kubernetes to Consul. So our team is just looking at where we are going to position our Consul adapter for Pilot within Istio. Should we be looking at this new feature, which automatically propagates the services from the Consul registry to Kubernetes? We are investigating this, and we do expect to publish recommendations and guidance, representing the Istio community, on how we expect the new Consul-with-Kubernetes feature to be used with Istio, where hopefully you would be able to add Istio on top of it to leverage the features Dan was talking about: telemetry, traffic management, security, and policy enforcement. So that's kind of where we are: investigation mode. Stay tuned. And I'll say one other thing that I said earlier, but I want to emphasize it. The biggest difference is that Istio is really the only service mesh that is coming from a lot of players in the industry. The others each have a single company behind them, and that one company's resources; Istio has eight companies' resources. We are putting significant resources into this; IBM is putting significant resources into this; VMware, Red Hat. That speaks, we hope, to a kind of general agreement that, hey, we'll find the best way to do this as an industry, but also to the amount of resource going into it. This field is changing very rapidly, we're implementing a lot, and there are many, many dozens of people working full-time on Istio right now, and that's a big differentiator. None of those other projects have 50 or 60 people working full-time on them, and Istio does. Yeah, and the other thing I would add is, you guys probably know, Conduit was launched at KubeCon last December as its initial release, and it was perceived as kind of a competitor to Istio back when they had that initial announcement. But the fact that they are now moving Conduit back into Linkerd is kind of a sign that they didn't see much value in having Conduit by itself, outside the Linkerd umbrella. So they moved it back to be more integrated with Linkerd. So I feel like it's not as strong a competition with Istio anymore with that move, and I feel the team kind of recognizes that as part of the move too. I don't know if you guys agree. Yeah, I can add a couple of comments. That could be quite possible. I've spoken with that team quite a bit, and part of the logic there was the notion that when Conduit was announced, part of the perception among existing Linkerd users was: okay, well, great, so we invested in Linkerd and now it's going nowhere; there's a new project that we now need to figure out how to migrate to. Even though that wasn't intended to be the message, it's implicitly implied by the announcement of a new project. And so, in some respects, this is about saving those users by saying, you know, we're really supporting you; there's a net-new architecture 2.0 coming out.
Also, Conduit, in some respects, and I wouldn't say it's sneaky, because they made it very public and they had the CNCF vote on it, but it's a way of getting Conduit and that code base into the CNCF as well, given that Linkerd is there now. And then Conduit itself benefits from the couple or more years of name recognition that Linkerd has. And so the users significantly benefit from not only the two things that Dan mentioned, but also the simplicity of install; the complexity of deployment was, I think, another kind of challenge for Linkerd. So those are some additional perspectives, just adding to what you said about how Linkerd and Conduit are now positioned against, and/or adjacent to, Istio. Going back real quick to this week's announcement about Kubernetes plus Consul: am I misinterpreting it, or does it really affect the integration between Istio and Consul? Or wouldn't you just say, hey, if you're running the Consul-plus-Kubernetes integration, where they're synchronizing back and forth between Kubernetes and Consul, you could just deploy Istio with the Kubernetes integration as it is today and not run the Consul integration, just leave it off? It seems like, in some respects, you wouldn't have to refactor the Consul integration. Yeah, that's exactly what we're sorting out, right? We built the Consul adapter for Istio before this new announcement was available; we built it about a year ago. There was a need, when we built it, to be able to propagate the services in Consul back to Kubernetes, where the Istio control plane runs. Now, with this new announcement, we're trying to figure out: is it still worth the engineering resources to keep maintaining the Pilot Consul adapter, now that it's clear the Consul community has chosen a new route by supporting the Consul and Kubernetes integration? And you guys have probably seen Consul's announcement of Connect support maybe a month or two ago, where they added Envoy support as a sidecar to handle mutual TLS within Consul Connect. So it's kind of clear the Consul community has chosen this route for where they see things going. And on the Istio side, IBM did the initial contribution of the Consul adapter; we haven't been getting a tremendous amount of help from the Consul community. So we're trying to reposition ourselves to see if the new Consul support for Kubernetes means we maybe don't need the Consul Pilot adapter, like you were just saying. We expect to provide guidance on how Istio can bring value to the Consul and Kubernetes integration, because that integration is only about service registry propagation, not about the functions provided as part of a service mesh. Thanks. Yeah, that makes a lot of sense; thanks for the insight. Any other questions? So, one other thing I'll say, and I didn't have any slides on this, but there has been some interesting work done. I know Thomas Graf is in this working group, I think, is that correct? Yeah. Yeah, so anyway, they have done interesting work with CNI and with circumventing part of the networking stack. In fact, the customer HP, who I mentioned, is actually using Istio with, sorry about that, I'm just getting a message here, of course.
That always happens when I start talking about Cilium's work. And as you guys look at what's going to happen in the CNCF, that's another interesting area of development right now. Yeah, eBPF, super, super interesting. I think Thomas is about the only one talking about XDP, though. Not eBPF, but XDP does suffer some challenges around requiring later Linux distributions, or later Linux kernels. But the performance numbers are phenomenal, and there's the lower-level control you can get by writing an eBPF program. I think part of the challenge there is the notion that some of those programs are low-level enough that they're maybe not as easy to pick up as an Istio adapter. That's right. And if we can get that right as an industry: Istio, especially with the simple, lightweight adapter model, is an easy way to do an integration, and will be the right way for most people. However, when people need to do alternative things, for performance reasons or other reasons, there are ways to get deeper in and to integrate deeper in. I think that will be the right model. What Cilium is doing is very interesting. I mentioned before that one of the areas of interest for us is how much is the right amount of logic to push into the proxy itself, and we have people who are looking at WebAssembly and whether or not there are easy ways, or at least ideal ways, to write logic that can be injected all the way down into the proxy. Anyway, what Cilium is doing, and I think they've written publicly about this case at HP, is pretty interesting in that respect. Yeah, I agree. Well, I definitely appreciate your time. We probably have other questions or comments that may bring you back to talk about a few other things with us in the not-too-distant future. But thanks for your time, Dan and Lin. Everyone enjoy your Tuesday. Okay, thank you very much for inviting us. I appreciate the opportunity. And yeah, I look forward to interacting with you guys in the future. Take care, bye-bye. Thanks, bye. Okay, see you then. Bye-bye, Dan. Okay, see ya. Thank you, bye.