Hello everyone and welcome back to another episode of Learn Live with us. My name is Pierre Roman. I am a Senior Cloud Advocate with Microsoft and I am joined by Steve Buchanan. Hey everyone, Steve Buchanan here. I'm a Principal Program Manager with Microsoft and I'm focused on Azure product improvement, former MVP for 10 years. So happy to be doing this Learn Live today. Yes, and the keyword right now is live, because just in rehearsal we experienced a few little glitches with networking and such. So bear with us. We'll try to get through it without any further issues, but if there are, we'll just deal with it. This is live TV. Live. Of course, you can join us on Microsoft Learn and follow along with the Learn module that we're covering today. We're actually covering today the Introduction to Azure Arc enabled Kubernetes. Steve, are you an expert in Kubernetes? Because I'm an operations person. I know how to manage it and how to deploy it, but not how to actually work with it. I'm not a deep Kubernetes person. Arc, I can talk about. How about Kubernetes? Is that your bag? Yeah. So Kubernetes, I can spell it. So I'm a little bit ahead there, right? I can at least spell it. That's right. I think I can spell Arc too, but I'm working on that one. So as I mentioned, the module is at aka.ms/learnlive-2022-0505-A. Cinco de Mayo, happy Cinco de Mayo to all of you. So if we follow along with this one, we're going to be covering that whole module unit by unit, step by step. We're going to take some liberties because this is live and we're going to add our own flavor. So go into it, please complete it, check your knowledge check at the end, and maybe you get a few points so you can prove to your coworkers that you're better than them. Yeah. And if you sign in with your profile, it tracks your progress, right? If you start a Learn module, you can come back to it later. So it's really good. All right. So what are we going to cover today?
Today, first of all, we're going to describe Kubernetes, Azure Arc and Azure Arc enabled clusters. We're going to connect a Kubernetes cluster to our Azure Arc infrastructure. We're going to manage the Arc enabled Kubernetes cluster by using GitOps, and you're going to run us through this. And we're going to integrate Azure Arc Kubernetes with other services, which is the beauty of Arc, and we'll get through that in a little bit. But before we go, I want to say thank you to Dwight, who is our producer. If you have any issues, just put it in the chat and Dwight will take care of you. If you have any questions, please feel free to ask throughout this session. If we can get to those questions, we will. That's the beauty of live. You have questions, put them in. Dwight will feed them to us and then we'll answer those as best we can. All right, let's jump into the intro. Of course, there we go. So the introduction. Containerization is one of those things that's a big part of what's trending right now. Containers, Kubernetes, Docker, any of those containerized solutions help with scale, management, redundancy, all kinds of different things across public and private cloud. But when you're running it in a private cloud or on-prem, for example, or in somebody else's cloud on a VM, and you want that single pane of glass, I hate to use that term, it's such marketing, but that single place where you can see everything that's happening, the health of it, how it's performing, what you've deployed to it, all in one place, connecting it to Arc is a fantastic solution. So we're gonna get into that. So let's take the scenario that Contoso, our favorite fictitious company, has a headquarters in London and offices in multiple other cities, and they want to deploy an application or manage multiple applications across multiple offices, but from a centralized environment.
And this is where Kubernetes and Arc become useful. So anyway, let's jump right into what it actually is. So Steve, how about you take us through what Kubernetes and Azure Arc are? Yeah, sounds good. So before we go into Azure Arc Kubernetes, we need to really understand what Kubernetes is, right? And really, before you go into Kubernetes, you should have a good understanding of what containers are. We're not gonna dive deep into containers, maybe we'll touch on that briefly, but as far as Kubernetes, you need that when you have a massive amount of containers, and let's actually go to the next slide. So when you have a couple of containers in your organization, or you're testing some stuff out on your machine with containers, right? You don't really need a full orchestration system. You can get away with something like Docker Desktop, or there are other solutions out there. But let's say your organization wants to run containers in production and they're critical workloads, or maybe you have a massive amount of containers at this point, right? We're talking hundreds, maybe even thousands. You need something to help you orchestrate spinning those up, moving those around to different nodes, making sure they're online, helping with the load balancing. And there are a lot of other things that come into play with that as well. And that's where Kubernetes comes into the picture: it's a full orchestration system for your container workloads, for your organization and in production. And so what we're showing here on the slide is just kind of a high level. And I pulled this from one of the Learn modules actually, but it's a high-level graphic of Kubernetes and a node. And by the way, there are Learn modules out there on Kubernetes. So you can go out on Microsoft Learn, learn about Kubernetes if you're totally brand new to that, and then jump over to this Learn module and learn about Kubernetes and Azure Arc.
So there's more than just Microsoft technologies out there in Learn. But in the image here with Kubernetes, you have something called a control plane, and it consists of a bunch of nodes that are running services that are critical to help with the orchestration of your containers. And those nodes are simply just servers, or VMs in the case of virtualization or running on cloud. So you have your control plane, and then you have your node plane where you are running the actual nodes that run the workloads. So they're gonna be running your container runtime and other services that facilitate things like networking and being able to talk back to that control plane, without going too deep into Kubernetes here if you're brand new to it. And then I have a little graphic there that shows a Kubernetes node, and it kind of breaks down what a pod is. So a pod is another concept in Kubernetes, and that's where your workload is actually running. Now inside of that pod, you can have a single container or you could have multiple containers. And that's what I'm showing there: you might be running a front end for a website, and this isn't common, but you could be running a database as a container as well. You could actually be running those in a single pod, or you could be running those across multiple pods. And so those pods are actually gonna run on a node or nodes. And so if one of your nodes, which again is a server, right? If that becomes unhealthy, or you have too much on there and it's overloaded, Kubernetes is watching that from the control plane. And what it will do is actually take your workload, so your pod or pods, and it'll move it over to a healthy node. And it's gonna do all that for you automatically in the background. And so that's part of the power of Kubernetes. Now, this is super high level. We could spend three hours talking about Kubernetes and then we'd all be confused afterwards, but it's a huge topic, right?
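As an aside, to make that multi-container pod idea concrete, here's a minimal sketch of a pod manifest with two containers in one pod. The names and images are just illustrative placeholders, not anything from the slide.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend          # hypothetical pod name
spec:
  containers:
  - name: frontend
    image: nginx:1.25         # front-end container
  - name: sidecar-logger
    image: busybox:1.36       # second container sharing the pod's network and volumes
    command: ["sh", "-c", "tail -f /dev/null"]
```

Both containers share the pod's IP and can share volumes, which is why a pod, not a container, is the unit Kubernetes schedules and moves between nodes.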
And so with Azure Arc enabled Kubernetes and things like AKS, the Azure Kubernetes Service, and we'll get into that, the idea is to try to simplify Kubernetes, make it easier to do. And so let's move to the next slide and let's talk about some of the benefits of Kubernetes. So we talked a little bit about pods, and in Kubernetes you have self-healing pods. So if there are problems, if there are issues, they will try to heal themselves. They'll move to other nodes, they'll restart themselves, and you can put in other logic to try to recover from errors or issues. Auto scaling is another big one, right? So let's say load increases on a workload and you need more pods, it will scale up for you automatically and scale down as needed. You could scale the pods, you could scale the nodes. So there are a lot of capabilities in scaling, rolling updates, rolling them back, auto discovery of new pod deployments. And so you might have things like monitoring, and we're gonna cover monitoring later. As new pods come online, you could have ancillary services kick in, right? And start to monitor, or if you have a service mesh, you can have those pods picked up and things like that. And then also load balancing. So Kubernetes comes with networking, and that's a huge topic in itself. When you spin up pods, you can have cloud load balancers spun up, you can have ingresses spun up or internal IPs, and DNS comes into play as well. So Kubernetes tries to automate as much of that as possible. And on a cloud provider, we try to make it even easier than it is generally on Kubernetes, but it's still a huge topic. So let's actually go to the next slide if we can, Dwight. So Azure Arc, right? Where does this come in? How does this play into the picture? So you might have Kubernetes running on-prem, you might have it running in another cloud. You might even have some Kubernetes clusters running in Azure, right?
Whether you rolled your own, you deployed Kubernetes on some VMs, or maybe you're using the Azure Kubernetes Service. Many organizations actually have Kubernetes and workloads spread across different environments like that. And so there's a need to be able to manage those wherever they are. And that's essentially what Azure Arc gives you, but it doesn't just give you that; it basically extends Azure's capabilities to environments that exist outside of Azure. And it extends those management capabilities that you have on Azure as well. So things like Azure Monitor, Azure Policy, as well as some of the other things like tagging, being able to put objects in resource groups. And so we get those capabilities when we extend Azure Arc to workloads that exist outside of Azure. For example, Kubernetes. Let's go to the next slide here. And so Azure Arc can extend to on-premises, whether that's in your data center or maybe it's a colo; to non-Microsoft clouds, or non-Azure clouds if you want to term it that way, for example AWS and GCP, it will work with the other clouds as well; and then also Microsoft hybrid. So if you're running Azure Stack Hub, if you're running HCI, or you're running some Edge, Azure Arc can extend there and has deep integration to help you bring some of those management capabilities and some of the Azure functionality that you may not have yet in an Azure Stack Hub or Edge, bring those to those environments, if you will. Also on this slide, you'll notice it says create and manage. You can use Azure Arc to create certain resources and even workloads. Like we're gonna show you how to deploy an application on an external Kubernetes cluster using GitOps and Azure Arc. So that'll be coming up later. Let's move to the next slide here. And so this was just kind of going back to what Pierre referenced: businesses and their technology landscape, they're becoming more complex, right?
And more and more companies are running apps across environments or running workloads across environments. If you have the need to run something on-premises and you can't move it to cloud for a number of different reasons, you have to leave that there, maybe for compliance. It doesn't mean that you can't leverage cloud technologies on that workload, even if you have to leave it on-premises. We can use Azure Arc to extend to it and bring those capabilities to those workloads. It's also, Steve, it's also a great tool to simplify your management across multiple platforms. Because if you were doing, let's say, policies or monitoring or anything that's operational once your cluster is deployed and your application's been set in, if you're doing it on AWS, you'll have the AWS type of tooling, and then you have the GCP type of tooling, then you have the Azure tooling. And if you're doing it on-prem, you have a different set of tooling. So it becomes a little bit complex, as you mentioned, to bring all of that together: to apply uniform policies across all of your containers, to apply monitoring across all of your Kubernetes clusters wherever they may be. And I think that, in my opinion, is the strength of Arc: to connect all those parts, bring them into a single place, and apply a single set of tools to it. Yeah, yeah, definitely, exactly. And if you think about it from a skills perspective, whether you're maybe an IT director or a CIO, or even if you're a sysadmin or DevOps person, if you're used to using the Azure tool set, the Azure management features and capabilities, and you like Azure, but you know you're gonna run some workloads in other places, you can utilize Azure Arc and leverage the skills that you already have. Or if you have a team that's skilled up on Azure already, you can use Azure Arc and still run your workloads where you need to, but your team is already skilled up on Azure, so it reduces the complexity.
Yeah, and to answer, I'm trying to read the name properly, Audrey Bandarev. I'm really hoping I didn't mess that up, Audrey. So Azure Arc is like a facilitator. It doesn't do the management itself, but it connects all of your parts, whether it's a VM or a Kubernetes cluster, in this particular case, which is the subject of this Learn Live session. So it connects that cluster to Azure so that you can apply other services, such as Azure Monitor, Azure Policy, Insights, collect your logs and metrics; all of these types of operations are possible because Azure Arc has made that connection and is managing that connection for you. So Azure Arc is not the part that manages Kubernetes, but it's the part that facilitates other tools to manage your Kubernetes cluster. I hope I answered that question for you. Sorry, sorry, I cut you off there, Steve, go ahead. No, no, that's good. And Amy asked whether this applies to Azure Kubernetes Service as well. And the answer is yes, you can actually bring AKS clusters into Azure Arc. You wouldn't necessarily need to, because they're already on Azure so you already have those capabilities, but let's say you want to be able to view all your Kubernetes clusters in one interface. There's definitely a use case for that. So that's a great question. Let's move to the next slide. So this is just at a high level: Azure Arc is kind of overarching and it can manage many types of workloads, right? So it goes beyond the scope of just Kubernetes. And I think it's important to make sure that everyone understands that. So you can use it to manage your servers, regardless of where those servers are, things like SQL Server, even IoT Edge, and there's some data services capability, and there are more capabilities being added to Azure Arc all the time. So it's just important to understand that you can do this. And maybe you're gonna start with Kubernetes.
Maybe you're starting with servers and then you're gonna start to extend Azure Arc to Kubernetes, right? So there are a lot of capabilities here. Let's move to the next slide. So something that's important to remember, that I wanted to call out and make a note of: the format of the Kubernetes manifest files that is officially supported by Arc is YAML. So keep that in mind. And when you're working with Kubernetes, you generally use YAML; you'll be writing YAML files. It's highly recommended to go out there and start to learn YAML. YAML is very powerful and you can do a lot of stuff with it. I just have a problem with it, where a misplaced space anywhere in your file will actually break your configuration. But that's just me. I recently saw a T-shirt on Twitter where the acronym YAML stood for "yelling at my laptop". So keep that in mind as you start to go down that road of your journey of exploring YAML. But YAML is used with Kubernetes and it's supported by Arc, so it's highly recommended to start to ramp up on that. So let's go to the next slide here. And so some of the key benefits of Azure Arc, we've already talked about some of these, but the ability to organize your resources in one place, putting them in resource groups, having tags. So you can do things like cost centers, you can show where objects are located, stuff like that, what teams they belong to, et cetera. A single place to organize your assets. Once your objects or your resources are in Arc, you can use Resource Graph to pull reports or search data. You'll be able to access them through Log Analytics, run queries and whatnot, and get a consolidated view of your Azure and Azure Arc resources, so your Kubernetes clusters no matter where they are. And then delegation of permissions, right? So once you pull resources into Azure Arc, now you have the power of Azure's RBAC capabilities as well. So let's go to the next slide.
That RBAC is key, in my opinion, especially on the security side of things. Hopefully there are no managers listening to this particular session, but you could give your manager read-only access so that they can see the status and everything that's going on, but they can't actually change anything. All right, next slide, next slide. So also, once you have objects in Azure Arc, you can enable them for Log Analytics, like Azure Monitor for containers, and we're gonna cover that later. And so data about your objects is being pulled into Log Analytics and you'll be able to run queries on that data. So it's very, very powerful. And then of course the monitoring piece, and we'll talk about that later. So we're not gonna deep dive here on that. Let's go to the next slide. We're at the knowledge check already. All right. So what is the format of the Kubernetes manifest files that are supported by Arc? And if you are following along with us in the Learn Live, you can answer it there, or you can put your vote in at https://aka.ms/polls, and then we're gonna keep track of how many people are actually answering and what the answers are. But in this case, I think we only mentioned one type of file that's supported by Arc for the manifest files. And I think it's YAML, the one we hate. It would be that one, right? It would be that one. I don't know if Dwight has the results of the poll ready. If you do, just bring them on. But if you don't, we're gonna move on to the next question. Next question is, which technology makes it possible to deploy Azure Kubernetes Service, AKS, in an on-premises data center? This is a good one. Which one do you think, Steve? So I'm gonna go ahead and give you a quick answer. I'm gonna go with HCI, Azure Stack HCI. Yeah, because we really haven't talked a lot about the other ones.
So I'm not quite sure why we put them in the options. And you are correct, my friend. X gets the square, as my friend Amy would say. All right, so let's talk about the architecture components and characteristics of Azure Arc. We've talked about what you can do with it, we've talked about the benefits, but let's get a little deeper into what Azure Arc actually is. Azure Arc is an agent. I like to refer to it, if you're a Lord of the... Are you, Steve? Are you a Lord of the Rings fan? I am. It's been a while since I watched those movies, but yes. So I refer to the Arc agent as the One Ring. The One Ring to rule them all. Nice. And what I mean by that is, once you've installed the Azure Arc agent on a machine, you deploy other services to it, which you could do theoretically without Arc: you could manually figure out which agent you need to install and how to configure it for Monitor, for Policy, for all kinds of other services. And you could do it manually, but then every time an update comes to an agent, you'd have to manually go back to all those servers and update the agent properly. The Arc agent will do that for you. So it's the one agent that sits there and listens to Azure Arc for configuration changes. So, oh, I'm onboarding into monitoring? Let me grab that agent, let me install it properly, and now we're gonna start feeding back your data. So I call it the One Ring, the one agent to rule them all.
And when we're looking at that particular agent, the way it's done is through what we call the Azure Arc Connected Machine agent, and it has parameters in terms of subscription, location, resource group, and whether or not you're running through a proxy, because if you're in a secured environment, it's up to you whether your Kubernetes cluster, or your VMs, or whatever it may be that you're managing with Arc, goes out through a proxy. So it has that information, and it has an Azure service principal, because you can delegate, or it needs certain rights, to be able to actually connect your machine and manage it: download configuration, upload data and metadata. It handles the managed identity of your machine, because that's one of the things Arc does: once the agent's installed, it creates this digital identity of that machine, or that Kubernetes cluster, regardless of where it may be. And then that surfaces in the Azure portal or in any other Azure tool. So if you're using the Azure CLI, or PowerShell, or the Cloud Shell, you can actually get info on those connected devices, because that managed identity has been created by the agent. It also uses a guest configuration agent to provide policy and configuration functionality, all of that to and from that VM. And, as I mentioned earlier, it has an extension manager to figure out which agents are needed, which need to be updated, and to manage all of that for you. And it sends that to Azure, always over port 443, so HTTPS. One of the beauties is that it's always outbound from your side. So your machine, whether it's on-prem or in AWS or anywhere else, once you put that agent on it, it will actually push that data out. So you don't have to tell your security folks to open up a hole in the firewall for Azure to come in and pull that information out.
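As an aside, a non-interactive onboarding with the Connected Machine agent and a service principal can be sketched roughly like this. Every ID, name, and the proxy URL below is a placeholder, and exact flags are worth verifying against the current agent documentation.

```shell
# Optional: route agent traffic through a corporate proxy (hypothetical URL).
azcmagent config set proxy.url "http://proxy.contoso.local:3128"

# Connect the machine to Azure Arc using a service principal;
# all IDs and names below are placeholders.
azcmagent connect \
  --service-principal-id "<appId>" \
  --service-principal-secret "<secret>" \
  --tenant-id "<tenantId>" \
  --subscription-id "<subscriptionId>" \
  --resource-group "demo1" \
  --location "eastus"
```

Because the service principal authenticates the connection, this variant can be pushed out at scale by whatever deployment tooling you already use.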
Azure always sits there and waits for the agent to connect with it in order to update or to transfer that data. That's about it for the connected agent. What do you need in order to actually connect a Kubernetes cluster, whether it's AKS running on Azure Stack HCI or something else? You need a connection to Azure. So if you're in an air-gapped network, sorry about your luck, you need to actually have a connection. You need to deploy agents as pods inside a cluster namespace, and that one is actually the default namespace. It needs to maintain an authenticated connection to Azure and synchronize the cluster metadata. And we do that by creating a service principal. When you actually connect the first time, you just pass in the service principal's credentials, so you don't have to actually store credentials anywhere. It's just the first time that it's created, and then it connects. It will gather that metadata, including cluster version, nodes, Azure Arc version, all of that good stuff. And then it will start orchestrating those interactions. One quick thing to point out: you mentioned if you're in an air-gapped environment, it won't work. Absolutely true. I was speaking at a conference this week and this question came up, so I think many people will have it: can you use Azure Arc over ExpressRoute, or say someone has a site-to-site VPN to Azure? And the answer is yes, you can. Absolutely. And so I think that's important to know for those customers out there that maybe have workloads on-premises and they're private, maybe you have that environment pretty locked down, but then you have that ExpressRoute. Yeah, you could totally use Azure Arc in that scenario. Yeah, you're absolutely correct.
And so the way I would tell customers: if it's a development or testing phase, and you don't have sensitive information inside, just use the open internet. It doesn't cost you anything. You get to test and prepare. Now, if you go to an actual production environment, ExpressRoute is more secure and more robust because it's an MPLS segment that's terminated at both ends, one in your data center and one in our data center. That way you're bypassing the public internet and making it a little bit more robust. So that's ExpressRoute. And site-to-site VPN is kind of the in-between: not as expensive as ExpressRoute, but not quite as reliable either. So it's that middle space. But you're correct, you can use any of those technologies to connect to Azure, but you must have a connection. Yes. Perfect. Anyway, so we looked at the architecture of Azure Arc enabled Kubernetes. It's really not that complicated. Once you have your Kubernetes clusters running, if it's running on Linux, that's fine, you just have to install the proper agent. If it's running on Windows, that's fine, you just have to run the proper agent. And once those agents are installed, or the script to onboard it will install those agents and make that connection for you. So the benefits of using Arc enabled Kubernetes, regardless of where that Kubernetes cluster may sit: enhanced support for automated updates to cluster configuration, automatic deployment of configuration residing in a Git repository, whether it's on GitHub or any other Git repo, you can do this. The one, for me being the operations guy, is enforcement of policies. I wanna make sure that, for example, you don't have any Kubernetes workloads that are answering on anything other than HTTPS. So you can have a policy that will do that. You can have a policy that says not to use anything in the clear, or to have all of your data encrypted, and so on. So as an ops guy, the enforcement of runtime policies is super, super important.
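As a quick aside, that HTTPS-only policy idea can be sketched with the built-in Azure Policy definitions for Kubernetes. The display name and scope below are illustrative assumptions and worth double-checking against your own subscription before relying on them.

```shell
# Look up a built-in policy that requires HTTPS-only ingress on Kubernetes
# (the display name here is an assumption; confirm it in your subscription).
defId=$(az policy definition list \
  --query "[?displayName=='Kubernetes clusters should be accessible only over HTTPS'].name" \
  --output tsv)

# Assign it to an Arc-enabled cluster (placeholder subscription/cluster names).
az policy assignment create \
  --name enforce-https-ingress \
  --policy "$defId" \
  --scope "/subscriptions/<subId>/resourceGroups/demo1/providers/Microsoft.Kubernetes/connectedClusters/my-cluster"
```

That's the kind of runtime policy enforcement being described here, applied from Azure to a cluster running anywhere.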
And centralized monitoring. So all of my workloads, regardless of what those workloads are, report into Azure Monitor and Azure Monitor for containers. So we not only have visibility on the machine running the Kubernetes cluster, or the nodes, but we also have visibility inside the containers to see what the health of those containers is, all in the same tooling. So for me, those are the big benefits of using Azure Arc enabled Kubernetes. So let me jump into a very quick demo of how we onboard a server with the Connected Machine agent. And Dwight, if you could start the video; the reason I made this into a video is because it takes a little bit of time, once you actually deploy the agent, for the agent to start reporting into Azure, and I didn't wanna sit here and wait for it. So Dwight, if you want to run that. So we start from the portal, and basically, in the resource group that I'm in, I just add a new resource, go to IT & Management Tools, select Azure Arc. And at that point, I can pause it here, I can add the Azure Arc agent. It had three, it went a little fast, but it had three boxes. One was deploy a single server. One was deploy multiple servers where, in that multiple-server option, using the service principal, you actually put in the service principal ID. And then, so you could apply this through whatever you use, whether you're using SCCM on Windows, or Puppet or Chef, or any of those operations management platforms, you could actually deploy that to all of your machines based on that. I selected the add a single server to Azure Arc option and clicked next. And then it starts asking me questions, such as, where do I want it? So I want it in my resource group, demo one. It is running on a Windows machine, and it also asked me whether I've got a public endpoint or a proxy. And then the tagging. Tagging for me is very important because you can use tagging to affect stuff and to group machines.
So for example, you've got a bunch of machines that are tagged as demo. One of the tags that I typically use, in terms of the lifecycle, is either persistent or non-persistent. And the category is demo or personal or production. And then based on that, you can actually create automation routines to shut it down when it's not needed, so you don't have to pay for the resources; do updates on a regular basis; or do rolling updates to different data centers so that if an update breaks something, you don't break all of your deployments, you only deploy one at a time. So those tags are very important. Once we get there, we have the script that it gives us, where you could either download it and save it to your machine, or you can copy it and run it directly in your environment. So you download it, it installs the agents, and then it connects your environment with the Azure Arc environment. So you open the PowerShell script; in this case, it's PowerShell because I did a single server and I identified it as, excuse me, as a Windows machine. So I run PowerShell in an elevated status as administrator, I go to my downloads because that's where I put it, and I run the actual onboarding script. It'll go, and I sped up the video here because it typically takes a few minutes: it downloads the proper agents and installs them. Once they're installed, it makes that connection to Azure and then asks me to authenticate. If I had picked the second option to put in the service principal, it would have authenticated for me and it would be done. So for this one, I just opened the browser on another machine, went there, put in the code, authenticated, and now it's made that connection. The digital identity of that server is being created in Arc, and then it replies back to our environment. So if I go back to my demo resource group and refresh my environment, you'll see that I have that server one that is now there.
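As an aside, once machines are onboarded and tagged like this, you can drive that kind of automation from CLI queries. This is a sketch; the tag name and value are just examples matching the demo, not anything required.

```shell
# List Arc-enabled servers (type Microsoft.HybridCompute/machines)
# carrying a hypothetical category=demo tag.
az resource list \
  --tag category=demo \
  --query "[?type=='Microsoft.HybridCompute/machines'].{name:name, location:location}" \
  --output table
```

A scheduled automation job could consume a list like this to deallocate or patch only the machines whose tags say it's safe to do so.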
Now, if you notice, I've got my server one, but I also have an Ubuntu machine and I have a server two, and you can see that they're co-located in that demo one resource group. A resource group is nothing but a logical container where all of the resources within it share the same life cycle; that's the way I look at a resource group. And you can identify which machines are actual Azure VMs and which are not by the icon that is next to them. So it's a visual cue, but it's also in the type, where it says an Azure Arc server. So, Dwight, I wanna jump back to my shared desktop. Okay, so now we're live and I'm in my demo one environment, and I can see in my type all of my Kubernetes, which we have not onboarded yet; that's just because I've already created one for the demo that we're gonna see a little later. But I have my virtual machine and I have my Azure Arc machine. So we've onboarded those machines and they're ready to go. Now, this is not yet onboarding a Kubernetes cluster into Azure Arc. We've actually onboarded the hardware, or the VM, or the host, regardless of what kind of host it is, into Arc. But for that host, we wanna monitor, let's say, the CPU, and we wanna monitor the storage of the host on which your Kubernetes cluster is running. It's always a great idea to monitor a machine or a workload from end to end. So we're doing it this way. All right, so now we've onboarded those machines, and I can click into one of them and it'll give me the information as to what kind of machine it is, the operating system version, all of that metadata, and all of the tags that I have. And as you can see here, I can enroll that machine into update management and into Log Analytics, so to collect logs and metrics, monitoring, insights, policies, change and inventory tracking. Change tracking is very important, because when something goes wrong and you ask somebody, hey, what happened? What's the answer, Steve? I don't know.
I didn't touch it. I have no idea what happened. So with those types of services, such as change tracking and inventory, you can actually figure out through auditing what happened. You could see that Steve modified the registry and everything blew up, right? Or you edited the YAML file and put a space into it and broke it. Yep. Yeah. All right, Dwight, let's go back to the slides. Make sure I'm at the right spot. We've done our demo; now let's test our knowledge on this particular part. So which namespace hosts the Azure Arc enabled Kubernetes agents? And I didn't mention that. This is a good one. Is it Azure Arc? Actually, I misspoke, because I did say it was the default one. The default namespace is there, but the agent itself creates its own namespace, azure-arc, where its pods live, and inserts those into Kubernetes. My mistake. Number two, which of the following is not an advantage of Azure Arc enabled Kubernetes? What do you think, Steve? This is a tough one, and I actually don't know the answer to this, so I'm going with D. Actually, you are correct. I wish that would happen every time I take a certification test, like I would magically get the answers right even when I don't know it. That would be great. Because you're right: we talked about facilitating deployment of workloads through GitHub or other repos, we did talk about policies, and we did talk about monitoring; those are all advantages of Azure Arc connecting anything. But the one thing we all have to figure out is how to actually install and set up your Kubernetes cluster, wherever it may be. That's a completely separate and disconnected process. Once it's up, then you can connect it to your environment. All right. 
So let's go to the next one and actually connect that Kubernetes cluster to Arc. Yeah, and I'll jump in here. I'll talk briefly about what this looks like and then we'll dive into a demo. The demo is going to be live, so we'll hope that the demo works. We'll see how that goes. So to connect a Kubernetes cluster to Azure Arc, you need some things, right? You need to deploy the Azure Arc agents, and in the instance of Kubernetes, the extension is called connectedk8s, K8s for Kubernetes. These agents are deployed as pods and they run as pods in your external Kubernetes cluster. Now, as you're going through the Learn module, as you're going through Microsoft documentation, you'll see your external Kubernetes clusters referred to as projected Kubernetes clusters. The reason why is because we have an external Kubernetes cluster and we're projecting that into Azure Arc, right? As Pierre mentioned, these pods run on your external, I'm going to call it by the right term, your projected Kubernetes cluster, in the azure-arc namespace. So you could run your kubectl get namespace syntax, right, on your projected Kubernetes cluster, and you'll see that namespace show up. So the diagram here is just showing this at a high level, and I don't think I can move my mouse so that folks will see it, but if you look on the far right, you'll see your projected Kubernetes cluster, and then you'll see your firewall as a customer, right, in between your on-prem and Azure, and you can see how the traffic flows. You can also see that there's a heartbeat that flows back to Azure Arc to tell Azure Arc that your Kubernetes clusters are still alive, and that's where it's sending data, like the metadata about your Kubernetes cluster, up to Arc, so that you can see information about your Kubernetes clusters in Azure in Arc. Let's go to the next slide here. Too many screens, too many shares. Yeah. 
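The namespace check mentioned above looks like this on the projected cluster, assuming your kubeconfig is already pointed at it:

```shell
# The Arc agents land in a namespace literally named "azure-arc".
kubectl get namespace azure-arc
```

If the namespace comes back, the connect step at least got far enough to deploy the agents.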
So this is just to give you a view of those pods that are running on the projected Kubernetes cluster in that azure-arc namespace. We're not going to dive into this, but this is what you would see if you ran a kubectl get pods in the namespace. And so let's move to the next slide. Also, these are the ports the agents, AKA those pods, are gonna communicate over back to Azure Arc: 443 and 9418. So as you're looking at the requirements and asking, do I need to poke a new hole through the firewall, this is what you need to look at. So let's move to the next slide. And here are the prereqs that you need before you connect K8s to Azure Arc. You need kubectl, that's the Kubernetes command line tool; you need that installed. If you're gonna connect your Kubernetes cluster from your local machine, you need kubectl installed. That leads to the second bubble: you need your kubeconfig file configured to connect to your external Kubernetes cluster. So if you're running this locally, you need to have kubectl installed and you need to make sure you can connect to your external Kubernetes cluster from your local machine. But if you're running this, let's say, from another cloud provider, you might opt to use their cloud shell instead. So you just need to make sure that you can connect to that Kubernetes cluster from that other cloud provider's cloud shell, for example. You need to have Helm 3, we're looking at the third bubble on the right there, Helm 3 or above, because we're doing some things with Helm. And then you need the Azure command line interface, version 2.15 or above. You need to have a service principal; we typically create that ahead of time. And then you need the Azure Arc extensions installed. 
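As a minimal local sketch of checking one of those prereqs: the module asks for Azure CLI 2.15 or above, and a `sort -V` comparison is one way to verify that before onboarding. The `current` value is hard-coded here for illustration; in practice you would read it from `az version`.

```shell
# Verify the installed Azure CLI meets the 2.15 minimum (illustrative values).
required="2.15.0"
current="2.37.0"   # in practice: az version --query '"azure-cli"' --output tsv
lowest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
  echo "azure-cli OK"
else
  echo "azure-cli too old, need >= $required"
fi
```

The trick is that `sort -V` orders version strings numerically, so if the required version sorts first, the installed version is new enough.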
So that's connectedk8s and k8s-configuration; the k8s-configuration extension is actually only needed if you're gonna deploy applications via GitOps. Okay, we'll go a little bit more into that as well. So let's go to the next slide here. And this is just a screenshot of what this looks like, the actual syntax, and I'll kind of break this down. So you're using the connectedk8s extension that is part of the Azure command line interface, and then you specify the name of the Kubernetes cluster that you wanna add to Azure Arc. You specify the resource group that this projected Kubernetes cluster is going to land in. You specify the location, that's optional, and you can specify tags. It makes a lot of sense, if you're gonna onboard external Kubernetes clusters into Azure Arc, it's kind of nice to know where they are, right? So you might add a city, you might add a data center, or you might add that, hey, this has come in from AWS or whatever cloud, right? There's a lot of things you can do with tags, and you can add those right away when you onboard these clusters. So let's go to the next slide, and I think we're in the demo. We are. So I am going to attempt to share my screen here, and let's see here. Hopefully the network cooperates a bit more than it did in rehearsal. Cool, so it's giving me options for the correct screens. So I think I'll see that when it pops up here, and then we'll get going with the demo. There we go. Hey, look at that, and it's showing the right thing. Awesome, I love it when technology works and I love it when bandwidth is good. I feel like that should be on a t-shirt or something. Okay, so let's actually jump over into Azure. So we're in Azure, we're in one of my subscriptions, and we are looking at Azure Arc. We don't have any Kubernetes clusters in there, right? It's completely blank, but this is where they're going to land. 
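The syntax on that slide amounts to something like the following; the cluster name, resource group, and tag values here are hypothetical placeholders:

```shell
# Onboard an external cluster into Azure Arc (requires az login and a
# kubeconfig pointed at the cluster). Names and tags below are placeholders.
az extension add --name connectedk8s
az connectedk8s connect \
  --name my-external-cluster \
  --resource-group rg-arc-demo \
  --location eastus \
  --tags datacenter=gcp city=london
```

Tags are passed as space-separated `key=value` pairs, so location and origin metadata get stamped on the cluster object the moment it lands in Arc.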
So I'm just going to partly walk through the process here, and then I'm actually going to jump over to Google Cloud Platform. And I have a script that I got from the Azure Arc Jumpstart, a really great resource, we'll talk about that. That script has part of the code that you will see in here, but it has some other things too, and I'll point those out. So let's go. So you could come into the Azure portal, you could click on add a Kubernetes cluster, and then, well, you need to read the information here, just kind of make sure you have all the prereqs done and taken care of, and then you would go to your cluster details, right? So you'd pick the right subscription, you could create a resource group here or you could pick an existing one, if that's where your cluster is going to land, and then you just give it a name. Let's just pick a generic resource group just so we can move to the next step. If your cluster is behind a proxy server, then when you click next, there are fields that you would populate, and you would need to know this information, right? So that is an option: if you are behind a proxy server, you can still onboard to Azure Arc. Now with Azure Arc, we give you some general tags that a lot of customers use, things like the data center or the city that it's located in, et cetera. You also can add your own tags here, right? So you could add those right here, and then you get to this next screen, and this is where you get the actual syntax for the script. Now this is a .sh script, so you would take that and run that against your Kubernetes cluster. Now let me point some things out here. There's a great document here, so you could go view this and it'll point you to a quickstart on how to connect Kubernetes clusters up to Azure Arc. You need to log into the subscription, and we'll talk a little bit more about that when I get into the other script. 
But if you ran this as is, you would log in using your credentials. What I'm gonna show you is how to use a service principal to log in and attach your Kubernetes cluster. I'll go ahead. I know on mine, when I did the demo, I used my own credentials to onboard. But my credentials have contributor access to my subscription. When you're using a service principal, you can actually use RBAC to only give it the rights that it needs to do the job that it needs to do. So even if that service principal gets compromised for whatever reason, it can't do anything, like it can't, let's say, delete stuff or access stuff that's outside the purview of the connected Kubernetes area. Yes, definitely. Good points. And then you have the actual syntax for the connectedk8s extension here. This is what's actually connecting your Kubernetes cluster to Azure Arc. So we're specifying the name, the resource group, the location, just like I talked about in the slide before the demo. Now for connectedk8s, or az connectedk8s, there is a document out there on Microsoft docs, and you can go look at all the options that are available. So if you wanted to do scripting, or you have other needs that go beyond what's here or what I'm gonna show you, that's totally available. So then you could download this or copy it into a file. The other thing I wanna point out: you can get the PowerShell code too, if you just switch between Bash and PowerShell. So it gives you the option to download a .ps1. These days I'm working all in Bash, so that's what we're gonna be using today. This down here is just reminding you, you gotta run this against your external Kubernetes cluster, and then make sure you have connectivity, make sure you have Helm, and you can check the namespace. So if we click next, this gives us a verification. 
I'm not actually gonna use this, but if I was, what would happen is after I onboarded my Kubernetes cluster, this would update, because it would see that Kubernetes cluster in Azure, okay? So I'm gonna close this here, and I'm gonna show you one more thing, and then we're gonna jump into our Google Cloud. There is an option here under Azure Arc, service principals, and you could actually come in here and add your service principal. It makes it really, really easy, and it's awesome that this was added. So you would add the name and you would set the scope: do you wanna scope it to a resource group, let's say where you're gonna put your Kubernetes clusters, or at the subscription level? And when does this expire? You can add a description, and then your roles. So notice we have Azure Connected Machine Onboarding. If we're gonna use this service principal for servers, we could give it that role. If we're gonna do Kubernetes clusters only, we would specify that role. If we're gonna do both, we would check both boxes. And then what happens is, when you click next, it creates the service principal in your Azure AD and it gives you the details on this copy details screen. This is the only time it will show you the secret, so make sure you copy it and put it somewhere safe. If you lose it, you're up a creek. Yes. So let's jump over to my other browser here. So I'm in the Google Cloud Platform and I have that script from the Azure Arc Jumpstart, okay? The URL for that is azurearcjumpstart.io. It's maintained by Lior; he's with Microsoft, and his team does an awesome job putting Arc resources out there for folks that wanna get started with the service. I highly recommend you check it out. You can find this script on their GitHub too, and so you could do what I'm doing. In here, you can see several things. So I'm setting variables at the beginning, like what my service principal ID is, what the secret is. 
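The CLI equivalent of that service principal screen is roughly the following; the name and scope are hypothetical, and the built-in role shown limits the principal to Kubernetes onboarding only:

```shell
# Create a service principal that can only onboard Kubernetes clusters,
# scoped to a single resource group (placeholder IDs).
az ad sp create-for-rbac \
  --name "sp-arc-k8s-onboarding" \
  --role "Kubernetes Cluster - Azure Arc Onboarding" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/rg-arc-demo"
# The output includes the secret -- as in the portal, save it now or recreate it later.
```

Swapping in the "Azure Connected Machine Onboarding" role instead would produce the server-onboarding variant discussed above.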
And by the way, we've limited the roles. We've done that intentionally to make sure that that service principal can't do anything else in our subscription, just create objects in Azure Arc, that's it. The other thing is, there was a question that came in: can Azure Arc manage OpenShift? Absolutely, yes. You can actually use Azure Arc to manage OpenShift Kubernetes clusters. You can also use it to manage Kubernetes clusters like Rancher's K3s. So yeah, you can extend it to other Kubernetes distributions. Let's see here, so back to the demo. So I'm specifying the location, I'm specifying a resource group here, and I'm actually gonna create the resource group as part of this script. I'm specifying my cluster name, I'm installing Helm here, and then here I'm installing the Azure command line interface and the Azure Arc extensions that are needed. And so you can see the CLI code is here, you can see the Azure Arc extensions code is here, and then I am logging in from this Cloud Shell. Let me actually just run this now, because it does take a little bit, and then I'll continue walking you through the code. And I should still be connected to my Kubernetes cluster here. So this is gonna log me into Azure using that service principal that I created. And I did that ahead of this event, right? I already made sure that stuff was there. It's using the secret, it's pointing at the right tenant so it knows where we're going. And then it's creating that resource group for me, because we need somewhere to land the Kubernetes cluster. And then down here is where we're running our connectedk8s extension. So again, we're pulling from the variables that I set in the beginning of the script, and it's just pulling all those things in. And then I'm setting a tag: what project does this belong to? It's an Arc GKE demo. Now I added this to the script; this is just some syntax to run it. 
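The Jumpstart-style sequence being described boils down to roughly this; every variable value below is a placeholder, not the demo's actual values:

```shell
# Log in non-interactively with the service principal, create the landing
# resource group, then project the cluster into Azure Arc.
appId="<service-principal-id>"
password="<service-principal-secret>"
tenantId="<tenant-id>"
resourceGroup="rg-arc-demo"
location="eastus"
clusterName="arc-gke-demo"

az login --service-principal --username "$appId" --password "$password" --tenant "$tenantId"
az group create --name "$resourceGroup" --location "$location"
az connectedk8s connect \
  --name "$clusterName" \
  --resource-group "$resourceGroup" \
  --tags "project=arc-gke-demo"
```

Because the login is non-interactive, this same script runs equally well from a laptop, another cloud's shell, or a pipeline.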
And also, when this is done, you can run this kubectl syntax here just to verify that the Azure Arc pods, which are the agents, are actually running and are healthy on your cluster. So this is running through, and let's pop back and forth. Actually, it's logging in, it just logged in. It just created the resource group, so we'll go look for that resource group. And now it's actually connecting to Azure Arc. Now before we go over there, just to show you, this is our Kubernetes cluster running on GKE over in Google Cloud Platform, and that's what we're adding to Azure Arc. So let's go back over here. It takes a little while, so be patient; this would probably be a good time for you to go... Get a cup of coffee. Yeah, grab something to drink, come back 10 minutes later. Just go take a break, right? It's good to get a break. So you can see that it already added that cluster object into my Azure Arc. This wasn't here before, right? You can see that it created that resource group. And if we click here, we're probably not gonna see the properties yet. It's still running, so it takes a little while. Once this is done, it will actually populate where the Kubernetes cluster is and what platform it's running on. It's actually really, really cool. And it should finish here pretty quick. Other things I wanna show you here, let's actually click into this. It's important to note that if you've worked with AKS in the portal, you'll have seen that there are Kubernetes resources, right, in the portal. Really amazing, because you can go look at your pods, look at your workloads, look at your services, your ingress, kind of see if the networking is working, your storage. And you can do that with projected Kubernetes clusters as well. Now, you have to add the Kubernetes cluster here, and then after you add it, you have to give Azure permissions on that projected Kubernetes cluster so that it will project those resources into Azure Arc. 
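That verification step is just a pod listing in the agents' namespace, for example:

```shell
# Confirm the Arc agent pods are up; all should eventually report Running.
kubectl get pods -n azure-arc
# Optionally block until the agent deployments are ready:
kubectl wait --for=condition=Available deployment --all -n azure-arc --timeout=300s
```

The `kubectl wait` form is handy in scripts, since it turns "come back in ten minutes" into a command that returns once the agents are actually healthy.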
But it's pretty awesome. We're not gonna show that today because we don't have time for it. But the other thing I wanna call out is you have these settings down here for your Azure Arc projected Kubernetes clusters. GitOps: we're gonna show this later; this is where you would actually add a GitOps configuration to deploy an application to your Kubernetes cluster. You have Open Service Mesh, which is a service mesh that's built and maintained by Microsoft that you can run in your Kubernetes cluster. Here are your policies, right? You can add other extensions. And then here's your monitoring. Now, keep in mind with monitoring, and I don't remember if we'll have time to show that as a demo here today, but you also have to go through an onboarding process for the monitoring. Yeah, we'll cover that. And there's a script out there on the Azure Arc Jumpstart to simplify that onboarding. So now you can see that our Kubernetes version was populated, right? So this came in; let's check our script. It's probably done or it's getting close. You can see that the version popped in; that wasn't there before. And if we come back over here, we'll see if this is populated now. And it is. So it just takes a little bit, but it comes in. The really cool thing is that you can see the distribution. So let's say I added a Kubernetes cluster from AWS, it would show that right here. If I added this from on-prem or somewhere else, it's gonna show something different. The infrastructure is GCP, so it picked that up, and it shows you the version of the Arc agent that's running and our Kubernetes version. So this helps just to kind of keep track of where our clusters are running, as they're running in different environments. So that brings me to the end of this demo. I just wanted to show you the process, show you the script there, and then also show you what this looks like when it's connected to Azure Arc. 
So let's jump back to the slides here. We have about 22 minutes to keep going, so let's jump right into the next section, which is: now that we've onboarded that Kubernetes cluster, and it's now an Arc-enabled Kubernetes cluster, because we love to add more complexity to the names, what do you do with it? In our case, I mentioned before that I love Azure Policy, as an ops person, in order to be able to define enforcement and compliance of either built-in or custom policies. So we have a bunch of them that are built in, but you can very easily build your own policies. And those policies are maintained, actually. Let me see if I can get to the next slide. Yeah, so they're checked, wait a minute. Yes, okay. So it's an implementation of Gatekeeper, part of OPA, which is the Open, I forget the, I know the acronym, but I forget the name, actually. I forget that all the time as well. Yeah, so on your Kubernetes cluster it uses an implementation of Gatekeeper, which is basically an admission controller, if you will. So anything that is passed to your Kubernetes control plane in terms of creating resources, changing configuration, or doing anything like that gets caught by Gatekeeper, gets analyzed, and gets checked against the policy that it downloaded, and then it either allows or denies it. Thank you, Open Policy Agent, Gatekeeper. Thank you, OPA. It checks the request, either allows or denies it, and then it logs it. Remember when we were talking about auditing, and "what happened? I don't know, didn't touch it"? Well, now we know you touched it. So policies for Kubernetes are very, very important. And I'm going quickly here because we're a little behind schedule and I wanna make sure that we cover everything. So how do you actually implement Azure policies for Kubernetes? 
And of course, everything we're gonna show you today, I'm gonna do in the portal, but you can absolutely do this in the Azure CLI, in PowerShell, or whatever your preferred management method is, command line or graphical. So in order to implement this, you need a few things. You need to have your Kubernetes clusters connected, which Steve just walked us through. So, in my case, because I'm running on-prem, my machine is connected to Azure Arc, and my Kubernetes cluster that's sitting on that machine is also connected to Azure Arc. In Steve's case, for his demo, that Kubernetes cluster running in GCP is now connected to our Azure Arc. So you need to have that. You have to understand the policy language for Kubernetes, and I'm not gonna delve into that, because we could spend the next six hours going through how to define policies and the JSON files, what is allowed, what's not allowed, and what the remediation process is. Because policies can be in audit mode, where they will just tell you that you're out of compliance, and they can be in, what's the word I'm looking for? I'm having a brain fart here. Corrective mode, if you will. Is it remediation? Remediation, thank you. Remediation mode, where if it finds that something's out of compliance, it'll actually kick it back into compliance. If you're applying a policy for the first time, I would highly suggest you don't put it in remediation mode. If it's the first time, run your policy, see what's not compliant, and look to see whether there's a reason it's not compliant; maybe it's by design or on purpose. Figure that out before you turn it on, because you might end up breaking something the first time. Once you've done all that, then you can put it into remediation mode, so that if somebody changes that configuration further down, it'll be kicked back into compliance. Just notes from the field, if you will. 
Anyway, and then you have to wait for validation. There are a number of built-in policies, and this little table here is just a smidgen of the policies that we have. So: do not allow privileged containers in Kubernetes, enforce HTTPS ingress in Kubernetes, enforce internal load balancers, all these types of policies that you can define. Those are actually defined for you; that's what we call built-in. And since they're built in, the only thing you need to do is actually apply the policy to your Kubernetes. So let's go right into my environment. Actually, I do have a video on how we onboard that, no, that's for monitoring, that's for later. So I've already gone to my resource group demo one and onboarded my Kubernetes cluster, and I've done it the same way that Steve did his. So let's just go apply a policy to my Kubernetes cluster. So I have my home server here; I'm going to just click on it. I get all of the information, as I mentioned: the distribution, because I'm running this on a Windows server, is minikube, and the infrastructure is generic, because it's just a box sitting under my desk right now. It's literally sitting under my desk. So when we say that you can manage machines anywhere... Don't kick it. No, no, I made sure my feet are far away from it. That's what we mean when we say that you can manage a Kubernetes cluster anywhere, regardless of where it may be. Mine is behind a residential DSL connection to the internet and I can still manage it. We've got the Kubernetes resources view, which I have not configured yet because I didn't put the admin token, the service account token, in there, but let's go straight to the policies. I wanted to mention this because the Learn module is in the process of being updated, because since it was written, they've updated the agent. So when you deploy connectedk8s, it actually deploys the policy agent at the same time now. The Learn module actually takes you through the steps to deploy it manually. 
You no longer have to do this. Once you've connected it using the connectedk8s agent, or process, it actually puts that policy agent in for you. So in my case, it's already enabled, and I'm not going to disable it. Now I can go to Azure Policy itself. So, Azure Policy for my environment, for all the definition types, which could be initiatives or policies, and all compliance states for my environment. And I can see that I've got some that are out of compliance, but let's assign a new policy to my environment. So my scope, of course, is demo one; that's where my environment is. Exclusions: I'm not going to exclude anything, but if you had development Kubernetes clusters, you might want to exclude them from the policy. Not necessarily the greatest thing to do from an ops perspective, because then you end up with "it worked on my machine." So develop in the same environment that you would be in in production. That's a good idea. And then we just go to the policy definition. I'm just going to search on Kubernetes, and then we have a very large number of built-in policies that you can use. So for the sake of argument, I'm just going to pick the first one, which is "Kubernetes service private cluster should be enabled." All right, let's select that. Put in a description; that's normally a good idea, because a month from now you're going to look at your policies and say, why did we apply this one again? Maybe there's a compliance reason, maybe it's a legal compliance, maybe it's a regulatory compliance that you need those for. Make sure to put that information in there. Yeah, and if I can add something here. So we have some documentation and a framework called the AKS baseline, and the AKS baseline goes through some of those best practices, like: you should deploy private Kubernetes clusters when you're deploying them in production. 
So you can go through the baseline, and you can actually look at the Azure policies for Kubernetes, and the policies can help you meet the baseline and make sure you have secure Kubernetes clusters that adhere to best practices. Yeah. So while you were talking, Steve, because I agree with you, there's lots of good information on best practices there, just to go faster, I hit the create button and it created the assignment. So it took the policy that we had defined, the built-in one, it took the scope that we gave it, and said, okay, anything in that scope that is Kubernetes, apply this particular policy. And right from the start, I can see that the policy that we just applied, which is the private cluster one, is not started, meaning it's been added to my scope but it hasn't yet been evaluated on any of the machines that are sitting in my environment. This other one is one that I applied yesterday as a test, because I wanted to make sure that we could get valid information to show you: "Kubernetes clusters should be accessible only over HTTPS." So I deployed that policy, it started, it evaluated. So the agent that's running in the azure-arc namespace on my cluster itself, in the OPA Gatekeeper process, picked up that policy, checked the configuration of my clusters, and sent back my status. And on this one, you can see that home server is actually compliant. You can get details, which is basically gonna tell you that, yep, it's configured properly. But if I had 50 different machines, it would give you your compliance by resource, so if you were 50% or 60% compliant, you could then act on those 50% or 10% or whatever it may be where you have different compliance results. So lesson learned here: if you want 100% compliance, only have one cluster. I'm in a demo environment and I only have so many PCs under my desk. 
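For reference, the same assignment made here in the portal can be scripted from the CLI. This is a sketch: the definition name is a placeholder you would look up first, and the scope matches the demo's resource group:

```shell
# Find the built-in definition, then assign it at resource-group scope.
az policy definition list \
  --query "[?contains(displayName, 'HTTPS')].{name:name, displayName:displayName}" \
  --output table
az policy assignment create \
  --name "k8s-https-only" \
  --display-name "Kubernetes clusters should be accessible only over HTTPS" \
  --policy "<policy-definition-name>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/demo1"
```

As noted above, evaluation isn't instant: the policy agent on the cluster has to pick up the new assignment before the compliance state moves past "not started."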
All right, so we have deployed and assigned a policy to our Kubernetes cluster. And as you can see, once it's onboarded, once we've done the steps that Steve walked us through to onboard that machine, at that point it's no longer about Kubernetes, it's about which service we can apply. So let's look at the knowledge check. And of course, you can vote and follow along. So which Kubernetes component is used to implement the Azure Policy for Kubernetes agent? I mentioned that it was Gatekeeper, but Gatekeeper is what? Do you have any guesses? A. No, it's not a service, it's an admission controller. As I mentioned, it's looking at all of the requests for a configuration change or deployment, and as an admission controller it decides whether or not it will allow them through. Next one: which of the following types of Kubernetes lacks support from Azure Policy? Hmm, let me go with D. Yes, that's correct. OpenShift is the one that, at this point, lacks support from Azure Policy. We are building that muscle, that's a very Microsoft thing to say today, we're building the muscle, where we're going to support more and more distributions or platforms, and OpenShift is on the roadmap, but we don't have any information as to when that's gonna happen. Now that we've applied policy, Steve, quickly, you're gonna run us through GitOps. Yeah, so GitOps is a huge topic; we'll run through this really quickly. I recommend you go and do some reading on this afterwards, in the Learn module and beyond. So GitOps is an operating model pattern for cloud native applications and Kubernetes clusters. It basically stores your application and declarative infrastructure code in Git, and that becomes the source of truth. Let's go to the next slide. And so GitOps has some principles that need to be followed, and these are those principles, pretty straightforward. There's a central organization behind GitOps that decides on these and updates them. 
So any GitOps tool or any service, whether it's Azure or other non-Azure services, if they're saying they do GitOps, look for these principles. So let's move to the next slide here. So some of the benefits of Azure Arc leveraging GitOps: being able to implement your configurations, so basically being able to implement GitOps on your Kubernetes clusters through Azure Arc. And let's go to the next slide here. And so really it touches on two areas: being able to apply the configuration of your Kubernetes clusters, and being able to deploy apps to your Kubernetes clusters. So your configuration would be things like ingress controllers, making sure certain namespaces are there, or if you had secrets that need to be on the Kubernetes clusters, or config maps, things like that. Deployment of apps is just like it says, right? It's deploying your application to Kubernetes clusters, and that's deploying pods, and whether you're using Kustomize or Helm charts, GitOps can handle those things. And you can bring that into Azure Arc, or rather, it's integrated into Azure Arc. So let's go to the next slide. So I wasn't trying to give this away on the previous slides, but there are actually many tools out there for GitOps, and these are called GitOps operators, okay? The one that's used with Azure Arc as of right now is called Flux. And we're not gonna dive deeper into this, but you can use Argo CD, which is probably the number two, with your Kubernetes clusters, even if you're using Azure Arc. So I don't want the misconception to be out there that if you're using Azure Arc enabled Kubernetes, you can't use a different GitOps operator. But it does use Flux, and Flux is built right into Azure Arc enabled Kubernetes, right in the Azure portal. Let's go to the next slide and let's just build this out. I don't think we have much time to really go through this, but Azure Arc enabled Kubernetes leverages Flux as its GitOps operator. It was built by Weaveworks. 
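The portal flow for this maps onto the k8s-configuration CLI extension mentioned in the prereqs. A hedged sketch, with placeholder cluster, resource group, and repository names:

```shell
# Attach a Flux (GitOps) configuration to an Arc-enabled cluster, so the
# cluster pulls and applies manifests from the Git repo's ./apps path.
az extension add --name k8s-configuration
az k8s-configuration flux create \
  --cluster-name arc-gke-demo \
  --cluster-type connectedClusters \
  --resource-group rg-arc-demo \
  --name cluster-config \
  --url https://github.com/<org>/<repo> \
  --branch main \
  --kustomization name=apps path=./apps prune=true
```

The `--cluster-type connectedClusters` flag is what targets an Arc-projected cluster rather than a native AKS one, and `prune=true` lets Flux remove resources that disappear from the repo, keeping Git as the source of truth.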
Basically, in the portal, you put in your configuration, you point it to your Git repository, and you click create, and Azure Arc will deploy the Flux pods on your Kubernetes cluster, along with other pods that are needed, to facilitate pulling from your Git repository and deploying the application on your Kubernetes cluster. Let's go to the next slide here. I'm moving through it quickly because we're almost out of time. I actually already covered this; let's go to the next slide here. We're at the end of this particular unit. So, very quickly: by using Helm charts and the Flux operator, you can basically use any Git repo in order to automate the deployment of your workloads onto an Arc-enabled Kubernetes cluster. I'm going to skip the knowledge check, I encourage you to do it at home, and go right into the next unit, because we're gonna get cut off here on Learn TV pretty soon. Azure Monitor for containers is fairly straightforward. It installs an OMS agent in the same namespace we've been talking about, and then it starts monitoring the health of your cluster and all of the pods, and it surfaces that information up into the Azure portal. It's deployed the same way we deployed policy: you just go to the portal, go to monitoring and enable Insights, and then it deploys the agent and starts collecting that info. And I'm being told by Dwight that we need to wrap this up. So, this is not our best ending, but to learn more, go to aka.ms/learnlive-2022-0505-A-summary. There are a whole bunch of resources there that you can use. And thank you very much for joining us today. We took a little longer to get there, but we got it done. So thank you very much and have a great day. Cheers.