My name is Norman Sequeira. I'm a director with the Cloud Solution Architect team at Microsoft. I lead a team of Cloud Solution Architects and engineers who are focused on helping our customers build great solutions in Azure. Today I'd like to take about 40 minutes to walk you through Open Service Mesh: the why, the what, and the how. At the end of it, I hope I'll leave you with some understanding of what Microsoft's Open Service Mesh offering is all about. That said, let's start off with the agenda. Here are some of the topics that we'll cover as part of this session. We will try to understand: what is a service mesh all about? What are the primary components that go into a service mesh? What is the Service Mesh Interface? Then we'll look at Microsoft's offering in this space, which is Open Service Mesh. We will then walk through the steps it takes to set up Open Service Mesh for your Kubernetes cluster, and we'll delve a bit into the roadmap. So first, what is a service mesh? With the proliferation of microservices, something was required to support service-to-service communication in a secure fashion. One key thing about a service mesh is that it started off pretty much as a network infrastructure component, not an application-specific component. The idea was to have a component that sat outside the application and took care of key aspects like service-to-service security, monitoring, and traffic routing. All of these concerns were to be handled outside of the application. So the key takeaway here is: think about a service mesh as a networking or infrastructure component that provides these capabilities and abstracts the application developer from building them as part of their overall solution.
So what does the overall service mesh architecture look like? If you're trying to build something that monitors service-to-service communication, you need some sort of proxy component that sits alongside your application. That's the sidecar pattern implemented by most service meshes. We will see how the service mesh platform uses a sidecar to inspect the traffic flowing between the services. If you look at some of the key capabilities a service mesh will offer you: routing between services, say you want to control who is able to access your services; you need to be able to control ingress and egress; you need to define certain traffic routing and splitting patterns. Say you want to do a 50-50 split between multiple versions of a service you have set up — a service mesh will come in handy for that. So that's what a service mesh is all about, but what are the key considerations you should look at while evaluating and selecting a service mesh platform? The first one really is: do you need a service mesh at all? If the application you're developing is a fairly small-scale one, you should definitely check whether you really need to go down the service mesh route — you may be able to get by without a service mesh implementation. Do you need a service mesh that spans clusters? The service mesh offerings in this space today have different capabilities, and especially if you need something that spans multiple Kubernetes clusters at a time, not all service meshes may offer that capability.
So when you make the decision on which service mesh to go for, that should be one of the key criteria you factor into the selection process. Another key aspect: when you're looking at service-to-service communication and having a service mesh control that, do you need a mesh that works only with Kubernetes, or one that spans other forms of compute like virtual machines? Do you need a service mesh that supports Windows containers? Say your deployment happens to be on Kubernetes clusters based on Windows — which service mesh offering can come in handy for you? Do you need commercial support from the provider itself? You may need to evaluate that criterion in your selection process. One key aspect, obviously, is the overhead of operating a service mesh. Yes, a service mesh does give you all these capabilities, but just by virtue of having an inspection proxy in the middle, you are bound to see a bit of a performance impact — so try to factor that in and understand what the overheads are. Do you want your policies to be enforced transparently, which is what a networking service mesh does, or do you want the developers to build in and work with mesh-like capabilities as part of their application development process itself? In the latter case, your decision process might lead you towards a Dapr-style implementation instead of a service mesh. Let's try and get a broad understanding of the service mesh landscape. One of the most full-featured — complex but extensible — offerings out there that can span multiple clusters would be something like Istio.
If you're looking at something relatively lightweight, there's Linkerd, which has fewer features compared to the other platforms but is something you could look at. If you're looking at a mesh that spans multiple forms of compute, as we discussed — virtual machines as well as Kubernetes clusters — then you might want to look at Consul Connect. And then we will talk about the Open Service Mesh offering. So you do have a plethora of service mesh options, and you may want to pick one based on the capabilities we just discussed. Let's first understand: what is the Service Mesh Interface? One key aspect is that, with the number of service meshes being developed, it was important that there be a standard spec people could stick to. The Service Mesh Interface (SMI) is a specification for a standard interface for service meshes on Kubernetes. SMI can be installed on Kubernetes using Kubernetes custom resource definitions and extension APIs. The idea behind SMI is to provide a basic set of features for the most common use cases — for example, if you're looking at mTLS, or at traffic routing, those features have been defined as part of the Service Mesh Interface itself. Now, why do I need a service mesh interface? Think about it akin to the way we have components like Ingress as part of our Kubernetes definitions. The Ingress itself can be implemented by multiple options — maybe an Application Gateway, or NGINX, or some such proxy, and so on. Similarly here, you could look at having an SMI-based implementation.
So your tooling and ecosystem can just talk to the SMI interface, and the underlying services can be provided by any one of the service meshes that implement it. Today the Service Mesh Interface covers the basic set of use cases, and we do expect the interface itself to encompass more and more capabilities as we go along. But today the most key aspects of a service mesh are covered as part of the SMI spec. Let's understand some of these capabilities in a bit more detail. The key aspects you'd associate with a service mesh: traffic policy and access control — you want to be able to restrict which pods can communicate with each other and which pods are accessible; you want to ensure that service-to-service communication is encrypted and secure; you want to pick up telemetry from your services — metrics, the latency between services. You also want to be able to shift and route traffic between different services; from a progressive delivery perspective, you may be choosing between the canary route, blue-green, or the A/B route, and we'll see how some of these traffic management capabilities are part of the SMI spec. So this is what an implementation looks like in the context of SMI: the apps, the tooling, the ecosystem all talk to the SMI interface, and the SMI interface in turn talks to the SMI providers — in this case, adapters implemented by multiple service mesh platforms today. The key thing here is your tooling can still talk to a common interface, and that interface can be implemented by different service meshes. So you're not tied to any specific service mesh implementation.
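To make this concrete: on a cluster, the SMI spec surfaces as a handful of custom resource definitions that the mesh registers. As a rough sketch (the exact names and API versions vary by SMI release and by mesh), you could list them like this:

```shell
# List the SMI custom resource definitions registered on the cluster.
# Exact CRD names/versions depend on the SMI release the mesh implements.
kubectl get crds | grep smi-spec.io

# Typical entries include:
#   traffictargets.access.smi-spec.io   (access control)
#   httproutegroups.specs.smi-spec.io   (traffic specs)
#   trafficsplits.split.smi-spec.io     (traffic splitting)
```

Your tooling then creates and reads these resources, regardless of which mesh is underneath.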
But you still get all the features, and you can have scripts where your service mesh implementation can be swapped out. Today we have a multitude of service meshes that support the SMI interface, and this number is increasing — the most common ones like Microsoft's Open Service Mesh and Linkerd all support SMI. Let's shift to one implementation where the SMI concept really comes in handy. Folks use Flagger as a progressive delivery operator for Kubernetes. Say you want to implement some sort of canary deployment, where you have a secondary deployment that receives a bit of the traffic and, once you've tested it out, you swap over. Or you want to do A/B testing from a feature-test perspective, or a blue-green deployment, where you have separate parallel deployments, you test everything, and then swap the environments out. Flagger helps you do that, and Flagger, by virtue of integrating with the SMI interface, can leverage any of the service meshes that support SMI to provide that capability. So Flagger is a great example of how implementing the SMI interface can come in handy for such requirements. Now let's try and understand what Open Service Mesh is all about. This is Microsoft's implementation of the service mesh concepts we've discussed so far. Let's look at some of the key attributes of Open Service Mesh. OSM is a lightweight and extensible cloud native service mesh. It is based on the Envoy proxy, the CNCF project. It implements SMI. It was created by Microsoft and was donated to the CNCF a couple of years back. So what are some of the key features offered by Open Service Mesh?
It offers traffic shifting — if you want to split traffic across multiple implementations of a service, you can do that. It secures service-to-service traffic using mTLS, and it also supports external certificate management solutions; we'll talk through some of the certificate management solutions that OSM integrates with. Again, one key aspect we discussed from a service mesh perspective was getting observability and metrics for your services, and we'll see how the Open Service Mesh implementation supports observability, tracing, and metrics. All of this works by injecting an Envoy sidecar into the data plane. So let's look at the Open Service Mesh features, going from left to right in this diagram. The first component is the ingress traffic policies: you want to control what, or who, can access the services that are deployed. This is one of the sample applications we have up on the OSM website, and I urge everybody to go ahead and try it out. This implementation comprises multiple services. To give you a quick walkthrough: there is a bookbuyer service, which is the legitimate service that should be allowed to talk to the bookstore service and buy books. The bookthief is a service that should not be allowed to do that — it should not be allowed to talk to the bookstore service. The bookstore service then talks to the bookwarehouse, which could be a database implementation. And then there is egress out of the mesh.
So that's the flow for one of the sample applications we have on the Open Service Mesh website. What are the OSM features here? The first, as I discussed, is the ingress traffic policies, to control which of your services is accessible over ingress. Point two indicates that there is automatic sidecar injection of the Envoy proxy: the minute you configure a particular namespace as being monitored by OSM, the Envoy proxy is injected automatically. So the bookbuyer, bookthief, bookstore, and bookwarehouse namespaces are all being monitored by OSM, and hence we have the Envoy proxy injected into those pods. And because we injected the proxy there, the proxies are the ones responsible for enforcing whatever traffic policy has been applied, whether that's routing or access control. That's what the automatic sidecar injection does. Point three is the scenario I was talking about: allowing the bookbuyer to talk to the bookstore but not the bookthief — that is service-to-service access control. Point four: if you have two versions of the bookstore service, you may want to do progressive delivery and switch from bookstore V1 to bookstore V2 using any of the methods we discussed earlier; that traffic splitting is done by Open Service Mesh. Then there's control from an egress perspective: when there is outbound communication, you may also want to control who's allowed to talk outside and who's not. And point six is the observability capabilities that we spoke about.
You want to track the metrics reported by these services, trace service-to-service communication, and access the logs. So all the key capabilities we discussed as part of a service mesh are covered by Open Service Mesh. Now let's try and understand how OSM itself is implemented. What happens here is that we have a proxy control plane. The proxy control plane is responsible for talking to the Envoy sidecars that have been injected. This communication happens over a secure channel using mTLS, and it is what flows your config and your policies down from the proxy control plane to the actual proxies sitting alongside the workloads on the nodes. What you also have is a certificate manager, which is what provides mTLS between your services, and as I said, you can have different certificate components plugged in. The endpoints provider helps OSM communicate with different kinds of platforms: OSM is available on AKS, our managed Kubernetes offering, and it can also be installed on any Kubernetes cluster, so the endpoints provider helps manage things based on the endpoints you're finally going to be talking to. The mesh specification takes all of these components you see in the service mesh controller and packages them into a structure that can be relayed to Envoy, so that configuration can be applied on Envoy itself. In terms of mTLS support, we have mutual TLS for pod-to-pod encryption. And version 1.0 was released a little earlier this year.
So we do have support for OSM upstream — if you install OSM yourself on a Kubernetes cluster, it's supported — along with AKS too. The mechanism, as we discussed, is a sidecar, which is Envoy in this case. It works at layer 7, which is how you get HTTP-based access control. You have the access control policies we discussed for blocking service-level communication, and the installation methods change based on where you're installing Open Service Mesh. In the next couple of slides I'll go through, in a bit more detail, how the OSM deployment varies depending on whether you're going for OSM on your own Kubernetes cluster or on AKS. One of the key considerations we discussed at the start of the session was the overhead that the introduction of a sidecar proxy is going to cause: what does it mean from a resource consumption and latency perspective? The numbers I've pulled in here are from a set of load tests done with Istio. The mesh in this case was about 1,000 services, 2,000 sidecars, and about 70,000 mesh-wide requests per second. Based on those tests, this is what the summary looks like: the proxy adds about 2.65 milliseconds of latency, and consumes about 0.35 vCPUs and 40 MB of memory per 1,000 requests per second. So be mindful of this. It may not really matter in most of your scenarios, but you should definitely consider the impact while designing your solutions. Now, there are a number of components that we deploy as part of the Open Service Mesh installation itself.
And what we've done is, for the pods that are installed to support OSM, we have specified certain default limits for CPU and memory. These are documented on the OSM website — I suggest you have a look. I've pulled in the latest default configuration here, and it's pretty much the same as it was some time back; it hasn't changed. This gives you a sense of the resource consumption for the components installed as part of the OSM installation process itself. Now let's focus on managed Open Service Mesh. If you are looking at Kubernetes clusters on Azure, or at Azure Arc-enabled Kubernetes, we have the option of getting a managed OSM in there. The managed Open Service Mesh is fully managed and supported by Microsoft. You can install the managed version of Open Service Mesh via an add-on on AKS, and there's an extension you can use for Azure Arc-enabled Kubernetes. Both implementations — for Azure Kubernetes Service and for Azure Arc-enabled Kubernetes — are GA, and you can look at the docs on the website for more information on these components. Now let's look at the OSM differences when you're on the managed version. The first column here is the managed version, running on AKS or Arc-enabled Kubernetes via the add-on method. The other scenario is where you do a self-install of OSM on any Kubernetes cluster you're running, which could be outside of the Azure-managed Kubernetes clusters. For AKS and Arc, you can install OSM very easily by just enabling an add-on.
I'll show you how that's done on the portal as well as with the CLI. In the self-installed option, you need to install the OSM CLI and then run some OSM commands to install OSM on your Kubernetes cluster. One difference: in the case of AKS and Arc-enabled Kubernetes, the OSM components get installed into the kube-system namespace, whereas with the self-install option they get installed into the osm-system namespace. That shouldn't really impact you, but if you've written some scripting around it, you'll want to query the right namespaces to get your information. The managed OSM versions that work with AKS and Arc-enabled Kubernetes are fully supported by Microsoft, and you can raise an Azure support ticket in case you see any challenges. In the case of the self-installed version, support comes from the community: you go down the GitHub route, raise an issue, and rely on the community to support you in case you have a challenge. There is no OSM dashboard in the managed version, whereas there is one in the self-installed version. In terms of features and capabilities — mTLS, traffic routing, access policies, split policies, observability — all of these are available in both the managed and the self-installed versions. In the managed version, you get a self-signed certificate setup, whereas with the self-installed version you have the option of plugging in different certificate managers. So before we do a walkthrough, a quick check.
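For reference, the self-install path looks roughly like this, assuming you already have the OSM CLI on your machine and your kubeconfig pointing at the target cluster (the mesh and namespace names below are just the defaults):

```shell
# Self-install OSM onto the current Kubernetes cluster.
# Components land in the osm-system namespace (unlike the
# AKS add-on, which places them in kube-system).
osm install --mesh-name osm --osm-namespace osm-system

# Verify the control-plane pods came up
kubectl get pods -n osm-system
```
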
As we spoke about: do you really need a service mesh? We have Dapr in this space too, so let's understand the primary differences — when would you go for Dapr, when would you go for OSM, and can you go for both? If you look at this particular diagram, the capabilities definitely do overlap, primarily around secure service-to-service communication (mTLS), observability, and tracing — those capabilities are part of OSM as well as Dapr. OSM has additional capabilities for traffic routing and splitting; Dapr doesn't offer those natively, though you could work along with an ingress controller to do that. One key aspect to consider here is that Dapr is not a service mesh. OSM is a proper networking service mesh, whereas Dapr is meant to provide building blocks that make it easier for developers to build applications as microservices. So think about Dapr as being more dev-focused, and OSM as being more of an infrastructure-focused networking component. Can both coexist? The answer is absolutely yes. But if you end up using both together, ensure the common capabilities are not turned on twice: in case you're using Dapr along with OSM, use the mTLS encryption capabilities of only one of those components, not both at the same time. So let's start off with a walkthrough of setting up OSM on a Kubernetes cluster. For the sake of this walkthrough, I've used the same demo scenario I spoke to you about.
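As a sketch of that "turn it on only once" guidance: if OSM is handling mTLS, you could leave Dapr's own mTLS off via its system Configuration resource. The resource and namespace names below follow Dapr's common defaults and are illustrative:

```shell
kubectl apply -f - <<'EOF'
# Dapr system configuration with sentry-issued mTLS disabled,
# so that encryption in transit is handled by OSM alone.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
  namespace: dapr-system
spec:
  mtls:
    enabled: false
EOF
```
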
This is a sample application that's available on the Open Service Mesh website, and it's very easy to set up this bookstore sample application with the templates given there. Let me describe the application for you. There is a bookbuyer service, and there's a bookthief service. Both of these services are currently capable of talking to the bookstore service, and the bookstore service in turn talks to the bookwarehouse, which could be your database component. In a Kubernetes cluster you could have all of these services deployed, but from an application perspective it's critical that only the bookbuyer service is allowed to talk to the bookstore, and the bookstore is the only service allowed to talk to the bookwarehouse. You want to prevent the bookthief service from talking to the bookstore service. Let's take that as our objective and see how we can go about implementing it using OSM. Let me walk you through the steps. For this walkthrough, I've gone down the route of using the managed OSM offering: because I had a Kubernetes cluster on Azure — an AKS cluster — in place, I set up the managed OSM offering on the AKS cluster. As discussed earlier, the way to do that on AKS is by just enabling the add-on. The key thing, compared to other service mesh offerings, is that you don't need to set up the entire infrastructure required for the service mesh yourself. Just by specifying that you want to enable the Open Service Mesh add-on, the installation process will go through and deploy the relevant pods — the proxy control plane and so on. So let's have a look at the steps.
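Enabling the add-on from the CLI is essentially a one-liner; the resource group and cluster names below are placeholders:

```shell
# Enable the managed Open Service Mesh add-on on an existing AKS cluster
az aks enable-addons \
  --addons open-service-mesh \
  --resource-group myResourceGroup \
  --name myAKSCluster

# Confirm the add-on shows as enabled
az aks show --resource-group myResourceGroup --name myAKSCluster \
  --query 'addonProfiles.openServiceMesh.enabled'
```
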
It'll take a few minutes when you run this step, but once that's done, if you browse through all of the pods that have been set up, you'll see quite a few of them. As mentioned, the ones installed via the add-on route go into the kube-system namespace, unlike OSM on your own Kubernetes cluster, where they go into the osm-system namespace. Here I've filtered down to the pods that got installed, and we have the OSM injector and OSM controller pods that got created, with the specified default limits for memory, CPU, et cetera applied to them. It'll take a few minutes, but once that's done, your OSM setup should be up. You can then check on the Azure portal, and it will show that the OSM configuration is done. I've gone down the CLI route, but you could very easily have done this via the Azure portal too. So what are the next steps? We need to tell OSM about the namespaces it needs to monitor. For this sample application, the first thing we did was create multiple namespaces: for the bookbuyer, the bookthief, the bookstore, and the bookwarehouse. Once these namespaces were created, we told the OSM controller components to monitor them. You do that by just saying `osm namespace add` and specifying all the namespaces you want to add.
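Sketched out, those two steps — creating the namespaces and onboarding them to the mesh — look something like this (the namespace names match the bookstore sample):

```shell
# Create the namespaces for the bookstore sample services
for ns in bookbuyer bookthief bookstore bookwarehouse; do
  kubectl create namespace "$ns"
done

# Tell OSM to monitor (and sidecar-inject) these namespaces
osm namespace add bookbuyer bookthief bookstore bookwarehouse
```
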
One key thing here: you will get a message saying all of the namespaces were added to OSM, and the same thing is reflected on the Azure portal too. I didn't blow the screenshot up too much, but the minute you do this via the CLI, you will see in the OSM interface on the Azure portal that all of these namespaces are now being monitored by OSM. So what have we done so far? We've enabled the add-on, deployed a sample application across namespaces, and told OSM to monitor those specific namespaces. Now, by default, when you get OSM configured, it does not block traffic — it starts in permissive traffic mode. If you had an application running and you went ahead and installed OSM, that does not mean your current traffic will stop; the application will still be working at the end of the day. That's the default deployment behavior for OSM. Now, when I open up the bookthief website or the bookbuyer website, both of these services are constantly polling and making service calls to the bookstore. With the default OSM deployment, we can see what's happening: the bookthief service is working — you can see its counter continuously incrementing — and the same happens with the bookbuyer service. Both of them are able to talk to the bookstore, and both are talking to bookstore V1; we've only deployed bookstore V1 so far. So all traffic goes through fine, and OSM is not yet doing anything from an access policy perspective. The next thing we'll do is look at it from a traffic access perspective.
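When you're ready to move from "allow everything" to explicit policies, permissive mode is a flag on OSM's MeshConfig resource. A sketch for the AKS add-on, where the MeshConfig lives in kube-system (a self-install would use osm-system instead):

```shell
# Turn off permissive traffic policy mode so that only explicitly
# allowed service-to-service calls succeed
kubectl patch meshconfig osm-mesh-config -n kube-system --type merge \
  -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}'
```
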
So I have this YAML here, and what we're doing is this: we have the bookstore service, and we specify as part of this access policy that only the bookbuyer service can access the bookstore service. It's very easy to apply this policy — I've just taken the sample policy and applied it to the cluster. Once this happens, what we've effectively done is ensure that all calls to the bookstore come only from the bookbuyer service and not from the bookthief service. Consequently, on the websites I have open, the bookbuyer service is still able to poll the bookstore service and get responses back, whereas the bookthief service is not able to call the bookstore service anymore — the number of books stolen stays stuck at that count. So implementing a simple access policy controlling which service can access which service was very easy to do with the YAML itself; a pretty straightforward implementation. We also took a look from an mTLS perspective — one of the key capabilities we've been talking about is service-level encryption and security. Once we have OSM in the picture and mTLS in place, I've picked up a Wireshark capture, and if you look at it you will see that the service-to-service communication is locked down by mTLS. You can run Wireshark against the two IP addresses assigned to the pods and see that the traffic going between those two services is encrypted. So we've looked at traffic access and at mTLS; now let's look at traffic splitting.
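The access policy in question is an SMI TrafficTarget paired with an HTTPRouteGroup. A sketch along the lines of the bookstore sample — the route name and the catch-all pathRegex here are illustrative:

```shell
kubectl apply -f - <<'EOF'
# Allow only bookbuyer (not bookthief) to call bookstore
apiVersion: access.smi-spec.io/v1alpha3
kind: TrafficTarget
metadata:
  name: bookstore
  namespace: bookstore
spec:
  destination:
    kind: ServiceAccount
    name: bookstore
    namespace: bookstore
  sources:
  - kind: ServiceAccount
    name: bookbuyer
    namespace: bookbuyer
  rules:
  - kind: HTTPRouteGroup
    name: bookstore-routes
    matches:
    - all-routes
---
apiVersion: specs.smi-spec.io/v1alpha4
kind: HTTPRouteGroup
metadata:
  name: bookstore-routes
  namespace: bookstore
spec:
  matches:
  - name: all-routes
    pathRegex: ".*"
    methods: ["*"]
EOF
```

Because bookthief's service account appears in no TrafficTarget, its calls are denied once permissive mode is off.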
So what we've done with the same sample application now is deploy two versions of the bookstore application, right? A bookstore v1 and a v2. And as I said, you can obviously make this a much more complex implementation by using Flagger, et cetera, but here with the basic service mesh, what we want is to get our traffic split going, right? This is what we are keen to do here: we want to route, say, X percent of traffic to bookstore v1 and Y percent to bookstore v2. So all you need to do is have a YAML in place where you specify the split routing that you're looking at, right? In this case, the YAML out here specifies the weights for the two versions. Take this particular YAML, apply it to your Kubernetes cluster, and at the end of it you will see the split happening. If you then look at bookstore v1 and v2, you will start seeing calls going to both of them. The other thing that we've done out here, for the managed OSM version, is integrate it with Azure Monitor too. On Azure Monitor, we have something called Azure Monitor Container Insights, which can give you insights about your Kubernetes cluster, and what we've done is integrate OSM monitoring as a part of Azure Monitor Container Insights. It's in preview right now, but the key thing it allows you to do is filter and view an inventory of all the services that are part of your service mesh. You can visualize and monitor requests going across services in the mesh: what is the request latency, what's the error rate, what does the resource utilization look like? And it provides an overall connection summary for the entire OSM infrastructure that's running on AKS, right?
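The weighted split described above is expressed with the SMI `TrafficSplit` resource. A minimal sketch, again modeled on the OSM bookstore sample; the 75/25 weights and service names are illustrative assumptions:

```yaml
# Split traffic addressed to the root "bookstore" service
# between the v1 and v2 backend services
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: bookstore-split
  namespace: bookstore
spec:
  service: bookstore          # the root service clients call
  backends:
  - service: bookstore-v1
    weight: 75                # illustrative: 75% of traffic to v1
  - service: bookstore-v2
    weight: 25                # illustrative: 25% of traffic to v2
```

Once this is applied with `kubectl apply -f`, the sidecars start distributing calls across both backends according to the weights, which is why you see the counters on both bookstore versions incrementing.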
So this Azure Monitor integration is one key aspect that's been done along with the managed OSM deployment on AKS. And because it's a part of Azure Monitor, you can then go ahead and use KQL to start querying the monitor logs. If you want to pull up some metric information, you can do that. You can also implement Jaeger-based tracing, right? And you can also go ahead and integrate with Prometheus and Grafana, right? So if you want to do some metrics scraping for OSM, you can do that with Prometheus. So what we've looked at so far is how you can install the managed OSM offering on AKS. The implementation, as I said, on your own Kubernetes cluster is very similar; only the deployment differs a bit. You will go the Helm chart route, or the OSM CLI installation route, right? But at the end of the day, all the steps that we showed, being able to define a traffic access policy and a traffic split, those concepts stay exactly the same. Let's look at the roadmap down the line. So we did get OSM v1 released a bit earlier in the year. There is v1.1 that's out right now, and there's work happening on v1.2 and 1.3. The roadmap is public, and you can go ahead and look at it at the URL that I've provided here. We do have information displayed in terms of which items are in the backlog, which are targeted for the future, and which bugs we're working on currently. Some of the key upcoming features that we are working on are Windows container support and some of the Azure-specific integrations, right? The same Azure Monitor integration that I showed you, which is in preview, we're looking at getting that into GA, and we're looking at getting a bit more integration going with some of the ingress controller components, right?
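For the self-managed route mentioned above, the two installation options can be sketched roughly as follows. The mesh name, namespace, and Helm repo URL are assumptions based on the OSM project's published defaults; check the OSM docs for your version:

```sh
# Option 1: OSM CLI install of the control plane
osm install --mesh-name osm

# Option 2: Helm chart install (repo URL assumed from the OSM docs)
helm repo add osm https://openservicemesh.github.io/osm
helm install osm osm/osm --namespace osm-system --create-namespace
```

Either way, once the control plane is up, the same `osm namespace add`, `TrafficTarget`, and `TrafficSplit` steps shown earlier apply unchanged; only this installation step differs from the AKS add-on.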
So we do have AGIC on AKS, the Application Gateway Ingress Controller, and we're looking at integrating that as a part of the OSM capabilities itself. Next, yeah. If you look at the same GitHub site, openservicemesh/osm, you can go to the issues and group all the capabilities by milestone, and you will get a good sense of what's coming down the line from a capabilities perspective. Currently, these are some of the key ones that are pulled out for the vFuture milestone, right? The dates for these are not locked down yet, but you will see some of the capabilities that we are looking at being targeted as well. So that brings us to the end of this session. Thanks for taking time out. And yeah, I'm still open for questions, so in case you have any questions, please feel free to post them in the chat. We will take those questions and answer them as best as we can. Again, thank you very much for taking time out for this session, and enjoy the rest of the sessions.