Alright, looks like we're ready to get started. Let me quickly introduce myself. My name is Lin Sun. I'm the Director of Open Source at Solo.io. I've been a very long-time contributor, one of the founding members of the Istio community, and I currently serve on the Istio Technical Oversight Committee. And Louis?

Hi, I'm Louis Ryan. I'm the CTO at Solo.io. I've been with the Istio project since the dawn of time, possibly even a little bit longer than that. Really excited to be here today talking about what's going on, and what's been going on, in the project.

Awesome. A quick highlight of 2022, which many of you probably already know: Istio was accepted into the CNCF as an incubating project. We introduced Istio ambient mesh in September last year, and there was a lot of excitement around ambient — the new sidecar-less data plane mode you can adopt without any disruption to your application. We've also done a bunch of feature promotions on the sidecar side, and we will continue to support sidecars: we've promoted external authorization, the Gateway API support, WorkloadGroup, and some other features to either stable or beta. We've also enhanced the security posture of Istio as we prepare for Istio to become a graduated CNCF project, and we've published the security audit results — if you haven't seen that, check it out. We've also added additional platform support; ARM is one of the biggest. How many of you are using the new ARM laptops? Awesome. And how many of you have tried Istio's ARM support on your new laptop? Nice, very cool. We also added dual stack. How many of you are looking at IPv6 for Istio? All right, a few of you. Very cool.

This is a service mesh survey, just to show you the continued momentum around service mesh.
The source is from the CNCF. As you can see, more and more users are looking at adopting service mesh, and the top challenges within service mesh are really around the shortage of expertise and experience to adopt one, plus some of the technical and architectural complexity — which we're hoping ambient will really help solve.

Okay, so what are we going to be working on this year — and what have we already been working on? If you looked at the previous survey, one of the top requirements was: how do I get the value of mTLS, the security features of a service mesh, into production as quickly as possible? One of the big value-adds of ambient, which John and Christian went through a little bit, is that it's a much easier model for you to deploy a solution — like the ztunnels — into your network that will turn on mTLS, so you can get through that process much faster. Our goal with ambient is to get you from zero to mTLS for a whole cluster in a very small number of administrative steps. That's really about accelerating the time to value — tying what we're doing to what people say they need out of a service mesh. And so this is a huge part of 2023.

Christian also talked about TCO — the operational side of a service mesh: how much work and complexity is involved, and what operators have to do to maintain it, install it, and make sure application developers are aware of its presence and able to leverage it. Instead of a giant pile of features that you can use if you inject a sidecar into your application but can't if you don't, what we're trying to create is a much more graduated experience, where the features of a service mesh are just there in the network if you need them. That's why it's called ambient, right? They're just there, and if you write a policy that says "I want authorization policy," then authorization policy is enforced.
If you don't write it, you don't get that. At no point in that process did the person writing the policy ever think about infrastructure, right? Our job is to take care of infrastructure for you, not make you think about it. And this extends from application owners and service owners to the people installing and maintaining Kubernetes clusters and the service mesh itself — we're trying to lower that operational profile.

Obviously, last year was a big year for Istio, with the contribution to the CNCF, and we're now going through the process of trying to get graduated. We have our ticket open with the TOC at the CNCF, and as far as I can tell we've met all of their requirements. It's really been gratifying to see the effect that the donation to the CNCF has had on the community. We get more engagement; you see more vendors and solution providers offering Istio solutions than before, so the community keeps growing. I saw an announcement this morning from Azure — I believe Microsoft is now offering a supported Istio deployment as part of their marketplace for AKS, which is a pretty big step for the community. It's really gratifying to see the effect that being in the CNCF has on the project.

And then lastly, we've had this theme the last two roadmap years, which is: try to be boring and predictable. We are critical infrastructure. Critical infrastructure should not be that exciting to operate — it should do what you expect it to do, and the operational steps to keep it in that state shouldn't be too crazy. That's been a huge focus of the project the last three or four years, and it will continue to be a major focus. Even though we have a lot of exciting features and a lot of really powerful new things coming, we still want to be enterprise grade — to use that kind of boring old phrase — because it matters, right?
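To make that opt-in policy model concrete: nothing is enforced until a policy exists. Here's a rough sketch using the standard Istio AuthorizationPolicy API — the namespace, workload, and service account names are purely illustrative:

```yaml
# Illustrative only: no deny-by-default is imposed on the mesh.
# Until a policy like this is written, traffic flows as before;
# once it exists, only the listed principals may call the workload.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend          # hypothetical name
  namespace: bookinfo           # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: productpage          # hypothetical workload label
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/bookinfo/sa/frontend"]
```

The person writing this thinks only in terms of identities and workloads — not about which proxies, nodes, or tunnels enforce it.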
You're not putting this into production as a hobby; you're actually trying to meet some important goals for your enterprise.

So we have our focus areas. There's going to be a lot of talk about ambient — I've banged on about it a little already — but we want to get it to production readiness. There's a lot of work going in, a lot of people in the community involved, very active community meetings, and the pace of development is very high. But we have to do that carefully: if it's not ready, we won't tell you it's ready; if it is ready, we will — and we'll keep you informed about what's going on.

There's also a lot of innovation going on in the API space. We've been working with the Kubernetes community, particularly on the Gateway API. There's an effort being led within the Kubernetes community called GAMMA — the Gateway API for Mesh — using Kubernetes, in effect, as a standards body to define APIs for mesh use cases that you could expect multi-vendor or multi-platform support for. Standardization is almost always beneficial to you, the end user, as long as the standards are good, right? And so our involvement in this effort is to try to make sure they are good standards — there's no point shipping a standard that doesn't do what you want.

We're going to keep on with our stability and feature promotions — that boring theme; we want to be boring. And then there's an ecosystem of stuff out there: there are always new standards and other major open source technologies that matter to your use cases, so we're constantly looking at the right mix of technologies, initiatives, and projects to engage with, to make sure we're giving you all the support you need to be a successful enterprise user of what is a large ecosystem, right?
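As a sketch of the "zero to mTLS in a small number of administrative steps" goal mentioned earlier, enrolling a cluster in ambient mode looks roughly like this. These commands reflect the ambient alpha at the time of this talk; profile names and labels may change as the feature matures, and the `default` namespace is just an example:

```shell
# Install Istio with the ambient profile (alpha as of this talk)
istioctl install --set profile=ambient -y

# Opt a namespace into ambient: its pods get mTLS via the ztunnel,
# with no sidecar injection and no pod restarts
kubectl label namespace default istio.io/dataplane-mode=ambient
```

That's the whole enrollment path — no per-deployment changes and no application rollouts, which is the point being made about time to value.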
You see the CNCF landscape — we can't integrate with all of it, it's too much — but there are things within it that are very material to what we do in Istio and in service mesh, so we want to make sure we're working with those.

Tentatively speaking — really tentatively — we are looking at driving Istio ambient mesh to production readiness this year. Istio 1.18, which is going to be released next month (we actually have alpha builds already available for you to try), will include ambient, and ambient will be alpha in that release. You might be thinking alpha is a little too early for you to try, but because this is very new, we really want you to be a partner; we really want your feedback to help us shape the future of ambient, so please try it. Istio 1.19 is where we're looking at graduating ambient to beta, and with Istio 1.20, which we plan to release towards the end of the year, we want to drive ambient to production ready. So check out the "Get Started with ambient mesh" guide.

There's actually a ton of work for us to do to drive ambient to production ready; I'm just listing a few items here. Some of the things we're currently working on in the community: multi-cluster support — when we initially launched ambient we focused on a single cluster, but we know a lot of you aren't happy with just one cluster; you need high availability across multiple clusters — so we're looking at that, along with virtual machines, because we believe a lot of workloads still reside on VMs. Performance improvements — we're comparing ambient against sidecars on throughput and latency, and we continue to improve ztunnel and waypoint performance. Multiple Kubernetes platform support — I heard somebody saying this morning that ambient is only for GKE; that's completely not true, and I don't work for Google. We're definitely looking at supporting ambient on OpenShift and on the AWS and Azure Kubernetes platforms. And CNI network compatibility with ambient, such as Calico and Cilium. Those are going to be important for us this year.

Okay, one quick thing, because I talked a little about TCO and then about ztunnel performance. One thing we're finding, having this finely tuned ztunnel tailored very specifically to its role, is that in our testing we're starting to see significant resource footprint improvements for ambient versus the traditional sidecar model. I'm not going to quote hard numbers, but I will say they are large and very encouraging, and you should expect to see things coming out of the community about that in the not too distant future. I'm not going to steal anybody's thunder too much on that one, but it's pretty exciting.

Oops, sorry. So I talked a bit about the Gateway API earlier. The Gateway API is an initiative in Kubernetes to provide a better, more reasoned API experience for configuring typical gateway use cases — middleboxes, ingress, and so on — to really meet the needs of the enterprise. Kubernetes Ingress has a lot of limitations that people have papered over with a sea of annotations, so this is the Kubernetes standard for traffic management, specifically layer 7 traffic management. We've been very active working with the Kubernetes community — folks like Bowei, Keith who's here, and Rob Scott — in trying to define that API and push it forward. We already have implementations of those APIs in Istio for those use cases, and we will track the Gateway API's evolution: if it goes beta, we go beta pretty quickly after that. We've actually been one of the leading deliverers of support for that standard in Kubernetes.

In addition, there's a work stream within Gateway called GAMMA — the Gateway API for Mesh. There's a long definition of it up here, which is on the GAMMA site, but if you don't want to read through all of that: it's about creating an industry standard, using the Kubernetes ecosystem as the standards body, for defining traffic management — specifically layer 7 traffic management. It's a collaborative effort across the industry; other mesh vendors and other mesh solutions are participating, and we're driving it along. So if standards are important in your deployment, you can expect first-class support for them from Istio, in addition to the Istio APIs; we'll make sure you have coverage for whichever APIs you want to use.

While ambient is the future of Istio, we know many of you are using sidecars, so sidecars will continue to be supported for a very, very long time. As a community we're also focusing on the stability of our sidecar architecture. In fact — I don't know if he's here — Costin, thank you for driving Istio safe mode in the community; that's going to give you a safe mode that won't break you going forward. The Telemetry API is one of the features we're looking at promoting to beta and then stable. The Intel team — I don't think you're here — has been driving IPv6 dual stack, which is really useful, and they're going to continue driving feature promotion for that, along with eBPF-based traffic redirection for ambient, which is super important for ambient to support other CNIs. Costin, I think you're here — you mentioned the gRPC control plane with the Gateway API, which is super important, so thank you for driving that in the community as well, along with our existing Wasm plugin resources, which a couple of community members have also expressed interest in.

Just touching a little more on the gRPC part: for super high performance applications, maybe you don't want any proxy in the data plane at all — you want to go end to end. Istio can actually act as a control plane for gRPC directly, so you can use it to distribute policy and controls down into the gRPC runtime and have those applications communicate directly with each other over mTLS. This is for people who are very focused on performance and are willing to re-architect their applications to meet those needs, but there are definitely people out there with those requirements, and we want to make sure we can meet their demand.

Finally, I talked a little earlier about the integration landscape. There's obviously a large community of tools out there: many of you use CI/CD tooling, Helm, Terraform, and all these other systems, and there are also all the data plane integrations we do. A lot of work has gone recently into OpenTelemetry, which has a lot of traction within the broader cloud native ecosystem for telemetry, logs, and tracing, so you'll see that both upstream and downstream. SPIRE and the CA integrations, identity provider integrations, attestation systems — working in complex deployment environments with different identity providers is critical, because if I asked how many of you use more than one IdP in your enterprise, I think probably all of you would put your hand up, whether it's in the cloud or on premises. Obviously we have Kiali, which has been a partner of the Istio project for a very long time and will continue to evolve along with ambient in particular. We continue to integrate with Prometheus, Jaeger, Wasm, and a long, long list of other things, so we'll keep pace with what we see as community demand for working with these other systems.

All right — I think that's our last slide. We want to take a minute to thank you all. I don't think we have much time for questions, do we? Okay. So — excellent.
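As a closing illustration of the GAMMA direction discussed above: where a north-south route attaches to a Gateway, a GAMMA mesh route attaches to a Service. This is a sketch only — the GAMMA pattern was still evolving at the time of this talk, and the service names, ports, and weights here are purely illustrative:

```yaml
# Hypothetical GAMMA-style mesh route: an HTTPRoute whose parentRef is a
# Service rather than a Gateway, splitting east-west traffic by weight.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: reviews-split           # hypothetical name
  namespace: bookinfo           # hypothetical namespace
spec:
  parentRefs:
  - group: ""                   # core group, i.e. a Kubernetes Service
    kind: Service
    name: reviews
  rules:
  - backendRefs:
    - name: reviews-v1
      port: 9080
      weight: 90
    - name: reviews-v2
      port: 9080
      weight: 10
```

Because this is the same resource family used for ingress, the idea is that one standard, multi-vendor API covers both gateway and mesh traffic management.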