All right. Good morning, good afternoon, or good evening, depending on where you are in the world. Thanks for joining us today. I'm Vinicius Apollinario, a Senior Cloud Advocate at Microsoft, covering a bunch of topics including containers and Kubernetes, and I'm joined today by Thomas. Thomas, do you want to introduce yourself? Yeah, absolutely. First of all, it's obviously a pleasure to be here again with you to talk about Azure Kubernetes Service on Azure Stack HCI. My name is Thomas Maurer. I'm a Senior Program Manager for Azure Hybrid, covering end-to-end scenarios around Azure Arc, Azure Stack, AKS on Azure Stack HCI, and, we should probably also say, Windows Server. Looking forward to this one, by the way, the last of the Azure Hybrid Cloud Study Hall series. Really excited to close out our series of hybrid modules and learning about hybrid cloud on Azure. Yeah, I'm glad I was able to join you for the last one. I feel like we've done a lot of these joint sessions where we talk about containers and Kubernetes, so we're here again to talk about a topic we like so much, right? With that, let's jump into the content itself. If you want to follow along, just open the URL you see on your screen; we also have a QR code if you prefer to follow along on your phone. Let me move the slides. Again, you can follow along; I'm going to show part demo, part the module itself. We have some knowledge checks, so prepare for an interactive session. With that, let's dive into the learning objectives of this module, what you're going to take away from it. Of course, we're going to describe AKS, or Azure Kubernetes Service, and Kubernetes itself, so if you're not familiar with those, we'll cover that. We're going to explain Azure Kubernetes Service on Azure Stack HCI, which is our on-premises offering for AKS.
AKS and Kubernetes clusters: we're going to dive into some details about Kubernetes and its clusters, what a management cluster is and what a target cluster for running your applications is. Connect to Azure Arc: how you can use the Azure control plane to manage Kubernetes clusters that are not running on Azure, and then leverage other Azure services to manage those clusters. Manage pod placement on multiple Kubernetes clusters: did you know you can run Linux and Windows applications on a Kubernetes cluster side by side? Yes, you can, and we're going to cover that. Manage pod storage on a Kubernetes cluster: containers are ephemeral, so we should not be storing any data inside a container. Did you know that? I'm going to cover that and show you how to handle it. Implement containerized Windows workloads: Windows containers are very specific in terms of how they are placed in Kubernetes, and specifically for Active Directory integration there are some things you should know before you deploy containerized Windows workloads; we're going to cover that as well. And last but not least, how to troubleshoot your AKS on Azure Stack HCI environment. With that, let's dive into the introduction of the module. We definitely have a ton of stuff to cover, and by the way, I also want to point out that when it comes to Windows containers, we definitely have one of the right people on this stream to talk about it. Something else I want to mention: you can obviously follow us with the module and go through it, but also feel free to ask your questions in the chat. If you are watching from different locations, feel free to get your questions in, and we will try to follow along. I can't promise we'll answer all of them, because usually there are a lot of questions coming in, but we'll try our best.
For example, we already have a question asking whether you actually need to provision Azure Arc. We're going to cover that: the first, management cluster has to be connected to Azure, but the second one, where your applications are running, doesn't need to be connected to Azure. We're going to cover all of that, but yes, we are already taking your questions. Let's move to the module. Here you go. Again, this is the Microsoft Learn module for AKS on Azure Stack HCI. This is the very first unit, the introduction, and throughout the module we talk about a scenario specific to Contoso Corporation. Contoso in this case is looking for ways to optimize density, workload placement, and a bunch of other things. We're not going to cover the Contoso-specific parts in this presentation itself, but it's all part of the module as you read through it. The most important things I want to highlight here are the learning objectives I already described, and also the prerequisites for this module. It's important for you to have an understanding of Windows Server and of how virtualization works, because everything in AKS on Azure Stack HCI runs as a VM, and of the architecture, core capabilities, and primary use cases of Azure Stack HCI. Remember that these are two separate products that we brought together to deliver Kubernetes on-premises for you: AKS is one product, Azure Stack HCI is another, and we brought them together. Azure Stack HCI has a bunch of documentation of its own, available on Microsoft Docs as well as Microsoft Learn, and we encourage you to take a look at Azure Stack HCI before you dive into AKS on Azure Stack HCI. And of course, the concepts of containerization and container orchestration, which we're not going to cover in too much detail in the module.
So just to level set and make sure everybody understands: the main issue containers solve for you is that everything an application needs ships along with the container image, so the container runs the same wherever it runs. It solves that old problem of applications working in the development environment and not working properly in the production environment. Now, because of the architecture of containers, they start faster, they consume fewer resources, and there are a bunch of other benefits. But because of those things, if you schedule multiple containers at the same time, you need a way to provide high availability, access control, load balancing, and many other things. That's where container orchestration comes in, and that's where Kubernetes comes in. This is all just level setting. I'm going to dive into the next unit, which covers what Kubernetes is and what AKS, the Azure Kubernetes Service, is. We do a great job when it comes to acronyms, right? We have tons of them, and sometimes I think we are probably paid by the acronym. I have a suspicion that that's true. Yeah, so apologies upfront for going back and forth between these acronyms as we try to speak them and remember what they all stand for. Anyway, let's go back to: what is Kubernetes? Kubernetes is, as the description here explains, an extensible, Linux-based, open-source platform for orchestrating containerized workloads. What does that mean? It means that Kubernetes solves the problem of having applications that rely on multiple containers to operate. If you have an application composed of microservices that run your front end and your back end and then connect to your database, you can actually run the database in containers too; it's very specific and not recommended for all cases, but you could do it. So how do you operate multiple containers as one single instance?
That's where Kubernetes comes in. It takes care of the nodes running your containers, of the containers themselves, of load balancing, high availability, and the scheduling of pods. A pod is a concept inside Kubernetes that basically describes how a container runs, because in Kubernetes you don't want to talk to the container itself; as a layer of abstraction over the container, we have pods. A pod represents a container, basically, in Kubernetes terms, and it basically says: when you schedule this pod, that container should be running. Now, in Kubernetes you don't schedule a pod directly, you schedule a deployment, and the reason you do that is that you also want another level of abstraction over your pod. If you want to replicate your pod and have two, three, multiple instances of it, you describe that in a blueprint, a YAML file, that basically describes a deployment. The deployment then says: this is the container image to use, this is how many replicas I want, this is how much memory and CPU I want to provide to each pod. With that blueprint, you apply the deployment, the deployment creates the pods, and each pod brings up a new container running on your container host, which in Kubernetes is called a node, right? What else? There are other components of Kubernetes, for example a service. A service in Kubernetes is basically a way for you to abstract the IP address and the networking of a container. We're not talking to a container directly, we're talking to a service, so again a level of abstraction, which basically says: this is the IP address, and I can put, for example, a load balancer in front. I have one IP address as the entry point, and behind that service are the containers that are part of my deployment.
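To make that concrete, here is a minimal sketch of such a YAML blueprint; the names, image, replica count, and resource sizes are illustrative, not from the module. It describes a deployment of three replicas and a load-balanced service sitting in front of them:

```yaml
# Hypothetical example: a deployment (the blueprint for the pods)
# plus a service that gives them one stable, load-balanced IP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3                 # how many pod instances to keep running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        resources:
          requests:
            cpu: 100m         # CPU and memory for each pod
            memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: LoadBalancer          # one entry-point IP in front of all replicas
  selector:
    app: web-frontend
  ports:
  - port: 80
    targetPort: 80
```

You would apply this blueprint with `kubectl apply -f deployment.yaml`, and Kubernetes takes care of creating the pods and the service.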
And if a pod goes down, I might have other replicas, or Kubernetes will just spin up a new pod for me, and I don't need to care about the IP address of the pod itself; I care about the IP address of the service. So there are multiple components throughout Kubernetes that you describe in this YAML format, and they are all levels of abstraction for you to interact with the service, with the pod, and so on. You have config maps for storing strings, for example connection strings for your SQL database or whatever database you use, and secrets for storing things that are secret, for example usernames and passwords, certificates, and so on. One more component that is important to understand as we talk about Kubernetes is the namespace. A namespace in Kubernetes is a way for you to say: I have this application, and I want to limit who can access its configuration; not access to the application itself, but to the configuration of that application. So if I have a dev team and an operations team that are responsible for application A but not for application B, I can put all the configuration for application A in namespace A and the configuration for application B in namespace B, right? Everything else, Kubernetes is going to take care of for me. For example, whether a pod needs to be scheduled on a Windows or a Linux node, Kubernetes takes care of that. What is the best node to put that pod on? Kubernetes takes care of that, looking at the configuration you have specified, as well as the resource management of your cluster and the scaling of pods if that's the case. So there are a bunch of benefits to using Kubernetes, and Kubernetes today is what we call the de facto standard for container orchestration in the industry. Thomas, anything you want to add here?
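As a small illustration of those last two points, the namespace per team and the Windows-versus-Linux placement can both be expressed in YAML; the names and image below are hypothetical:

```yaml
# Illustrative only: a namespace for one team's configuration,
# and a pod in that namespace pinned to Windows nodes.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: Pod
metadata:
  name: dotnet-app
  namespace: team-a         # lives in (and is scoped to) team-a
spec:
  nodeSelector:
    kubernetes.io/os: windows   # schedule only on Windows worker nodes
  containers:
  - name: app
    image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
```

With `kubernetes.io/os: linux` instead, the same pod spec would land on a Linux node; if you omit the selector, the scheduler picks a suitable node on its own.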
No, well, first of all, you did a fantastic job; that was a fantastic intro to Kubernetes and the different concepts. And what is also important: if you're listening to this, you've obviously already thought about containerization and you have some understanding of containers, but what we see a lot now is that when people start modernizing their applications, they start to use containers. And as Vinicius said, you then need some sort of orchestrator to make sure these containers are spun up correctly and you get all the benefits of pods and namespaces and so on that Vinicius just talked about. So we really see a lot of movement of people using Kubernetes to modernize their infrastructure, right? Yeah, absolutely. All right, so we get to the knowledge check of this module. Let me go back to the slides real quick. We talked about the overview of Kubernetes and AKS. Oh, by the way, I don't think I actually talked about AKS itself, but that's going to be covered in the next unit. So let's go to the knowledge check. By the way, if you want to vote, you can go to aka.ms/polls, and we can see people voting. We're going to give you a few seconds to vote. Please don't cheat; you can actually do the module online and vote later, so don't do that. Let's take a look at the question. While evaluating the suitability of Kubernetes for the containerized workloads you intend to deploy to Contoso's Azure Stack HCI environment, you realize that you will need to restrict users who use the same Kubernetes cluster from creating, viewing, or using containerized resources. Which Kubernetes feature should you use to restrict users here? We'll give you a second to vote.
If only someone had talked about this just a couple of minutes ago. Specifically talked about this. Yep. So make sure you check out aka.ms/polls; we'll give you some more time. You can also use the QR code there to vote, so we can see a little bit where it's going. The module gives you three options: A, Helm charts; B, deployments; and C, namespaces. Yeah. Okay, I think we gave enough time, so I'm going to go ahead and show the answer. The answer is, of course, namespaces. We covered that during the previous section: namespaces give you a way to restrict who can access what's inside your Kubernetes configuration. So that's the correct option here. Perfect. And we also see a couple of questions coming in, and a couple of votes on C; everyone who voted, absolutely great, thank you for that. And there's a question coming in. We talked a little bit about nodes, we mentioned nodes in the concepts of Kubernetes, and the question is: do I need two nodes to run two different websites? Yeah. So in terms of high availability, one of the things you want to think about is: if the node, the server (which can be either a physical machine or a VM) that runs your pod goes down, what happens to your application? The right consultant answer here is: it depends. Do you need two nodes? No, you don't need two nodes just to run a website. However, you probably want them so your website is never out of reach for your users, right? By having two nodes, you can have at least two replicas across two dedicated nodes, so if one node goes down, the other one can still accept requests from your users in the browser, right?
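As an illustration of that answer, a deployment fragment like the following (hypothetical, not from the module) asks for two replicas and hints the scheduler to spread them across different nodes, so losing one node still leaves a replica serving traffic:

```yaml
# Fragment of a deployment spec: two replicas, spread across nodes.
spec:
  replicas: 2
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname   # spread by node
        whenUnsatisfiable: ScheduleAnyway     # best effort on small clusters
        labelSelector:
          matchLabels:
            app: web-frontend                 # illustrative label
```

On a single-node cluster both replicas would land on the same node, which is why the answer depends on how much availability you actually need.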
Which is just a best practice for running any kind of compute resource: having more than one node available. Yeah, I think so. If this were an exam, I would say the right answer is no, you don't need two nodes; you can run two websites on one. But as you said, in many cases you probably want to scale out and make sure that if one node goes down, and something can always happen to a node, the application still runs on the other one. By the way, I think that's a good question, so if I have some time during the demo, we can have a look at how that could look on Azure Kubernetes Service on Azure Stack HCI, and dive in and talk a little bit about the high-availability options we have there. Yep, absolutely. So with that said, do you want to take over and talk about the Azure Kubernetes Service on Azure Stack HCI? Yes, absolutely. Let me move to the browser and show the page here. Yes, that's perfect. So the question really is: if we want to run Kubernetes, what is Azure Kubernetes Service, and what is Azure Kubernetes Service on Azure Stack HCI? The Azure Kubernetes Service is a managed Kubernetes environment where we take care, in Azure, of making sure you can run that cluster, and you don't need to deal with the installation of Kubernetes. You don't need to think about how to manage the control plane, how to do upgrades, and things like that; that is all done for you as a service in Azure. Now, there is some stuff you can obviously still do; you're still in charge, right? You can still decide to upgrade when you want to upgrade and so on, but you don't really need to know the procedure. So it becomes pretty simple to run a Kubernetes cluster when you use AKS, which again stands for Azure Kubernetes Service. Now, customers obviously told us that this is great, right?
So we can run this in Azure, but in many cases there are customers who really want a hybrid environment. They need to run their services or applications on-premises, for example in their own data centers, or in their branch offices, retail stores, factories, you name it. The reasons could be data sovereignty, or network connectivity: maybe you have high latency or unreliable connectivity, or you just don't want to rely on the internet. Think about a factory where you produce something 24/7; you don't want your factory to depend on apps running somewhere far away over an internet connection. You might want to have something in the factory itself. So that's what we're offering with AKS on Azure Stack HCI, and by the way, I need to tell you that it's not just running on Azure Stack HCI, which is our hyperconverged infrastructure solution; it also runs on Windows Server. If you have Windows Server today, you can go out and install it, either on a single Windows Server for evaluation purposes or on a Windows Server cluster, and basically have your Kubernetes cluster running on Windows Server. And you're still able, by the way, to run Linux containers, and we will have a look at what all of that looks like. It's still a normal Kubernetes cluster, and again, you get the benefits of running this managed Kubernetes, which we call AKS in Azure, but you can now run it on-premises wherever you need it. And there is an architecture for how this actually works. When you go, and we will have a look in the next unit at how the installation process works, it's super easy to actually get going.
You have a nice wizard to do that, but what you're going to deploy first is the AKS, the Azure Kubernetes Service, on Azure Stack HCI or Windows Server on your infrastructure, and that will create a management cluster. This is then the service itself; this is where everything is going to happen in terms of the management you do. And then from there, you can create your other Kubernetes clusters for your applications. You can have one, two, three, all sorts of different Kubernetes clusters, and in each of these clusters you can again make use of different pods and namespaces and add different worker nodes. For example, and Vinicius mentioned this, we can have Linux and Windows worker nodes in a cluster, so we can have Linux and Windows applications running there. When we look at the benefits, I already mentioned a couple of them, right? But one of the things we need to highlight is, first of all, it's super easy to set up. If you have ever spent time setting up Kubernetes clusters by hand, you get some sort of feeling for how challenging that is. Now with AKS on Azure Stack HCI and Windows Server, we have this wizard and these management options for you to go through and do it really, really quickly. We make it really easy for our customers. That's one thing. The second thing is that managing these clusters becomes easy as well, right? You can manage them with a UI, but you can also obviously use all the existing management tools such as kubectl and so on, so real Kubernetes management as well. So we give you a real set of tooling there, and we even take care of updates, right?
We ship monthly updates for our AKS service on Azure Stack HCI and Windows Server; these just pop up in the management overview, and you can apply them to your AKS running on Azure Stack HCI. These updates can even bring new features, like new versions of Kubernetes, which you can then use to update your Kubernetes clusters. And it's again super simple, similar to the experience you know from running AKS in the cloud. Now, there is more to it. As you said, one of the big benefits of running AKS on Azure Stack HCI is that you obviously get Linux worker nodes, so you can run Linux containers, but you also get Windows worker nodes, which means you can run Windows containers there as well. So if you are modernizing your .NET applications and containerizing them, you can do that too. This is a huge benefit; I see a lot of customers working with this and modernizing their applications. And last but not least, this is a hybrid service, right? We already got the question from someone, and we will talk about whether everything needs to be Arc-integrated. The Azure Arc integration is really the connection from these Kubernetes clusters to Azure, so we can manage them directly from Azure; we will have a look at this a little later, but that's also a big benefit, that you get a single control plane to, for example, use Azure monitoring, or use GitOps with Flux and so on to manage your applications. So there's a ton of different benefits. Vinicius, do you want to add a couple of things as well? Yeah, I think to me the main benefit of AKS on Azure Stack HCI is that I don't have to deal with the Kubernetes-native tools for a bunch of the cluster-management tasks, right?
So of course, when it comes to deploying your application and configuring how it's going to work, that's something you still do with, as Thomas mentioned, kubectl, or Kube-Cuttle, or Kube-Control, whatever you want to call it, which is the CLI tool for managing your Kubernetes workloads and the configuration of the cluster. However, with AKS HCI we have pre-baked some of that configuration, and all the necessary steps for an action where you would have to run five different commands with kubectl we actually embed into one single command, because it's easier for us to give you that configuration with a single native PowerShell command from AKS HCI, right? So to me that's one of the main benefits: the agility I gain for managing my cluster, because it's now a managed service, even though I'm running everything on-premises and meeting all of my requirements. So to me, that's the main thing, but we do have some questions coming in. For example: would the control plane be visible, and does it fall under the customer's responsibility? Let me go back to the architecture to explain this. Do you want to take this one, Thomas, or should I? Go ahead, go ahead, since you're already on it. Yeah, so Thomas talked about the architecture here. You have a management cluster, and then you have a target cluster, which is where your applications are going to be deployed, right? This is a Kubernetes deployment that runs your Kubernetes applications. In AKS, the cloud one in Azure, you basically don't see this; you see the control plane of your environment, but you don't get to see the management cluster. In AKS HCI, everything is visible and it's up to you to manage, but you don't, for example, SSH into these nodes; you can't, but you also don't need to, and you don't RDP into the Windows nodes.
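To give a feel for that single-command experience, here is a sketch assuming the AksHci PowerShell module that ships with AKS on Azure Stack HCI; the cluster name, pool name, and node counts are illustrative:

```powershell
# Sketch: create a full workload (target) cluster with one command,
# instead of hand-building a Kubernetes control plane and nodes.
New-AksHciCluster -name demo-cluster `
                  -nodePoolName linuxpool `
                  -nodeCount 2 `
                  -osType Linux

# Fetch the kubeconfig so standard Kubernetes tooling works against it.
Get-AksHciCredential -name demo-cluster
kubectl get nodes
```

From here on, application deployment is plain Kubernetes: the same `kubectl apply` workflow you would use against AKS in the cloud.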
Everything is done through the API for managing your AKS HCI cluster, which is the PowerShell commands I mentioned before. So it is your responsibility: you have to deal with these VMs and resources, and you have to troubleshoot if that's the case, and we're going to talk about that, but it's also very easy to do because it's a managed service that you have on-premises. So yes, it's your responsibility, but easier than if you decided to do this by hand with a vanilla Kubernetes deployment. Let's take, by the way, since we're on it, there are tons of questions, but let's take another one. Before we go to the knowledge check, there is one from Andri: is Azure Stack HCI SDN integrated into AKS on Azure Stack HCI already? For those who don't know, SDN means software-defined networking. If we quickly show my screen here, I pulled this up because the question has perfect timing. In May, we announced the preview of SDN integration in AKS HCI, so you get the software-defined networking features. Again, this is currently in preview, so with the latest public preview you can try it out. And this helps in a couple of different ways. For those who are not already thinking about this, it's really about not having to do physical network designs with VLANs and so on. You still need the underlying network, but on top of it you can create virtual networks, similar to what you can do in Azure, so you get a consistent experience with the Azure Kubernetes Service running in Azure as well. And you get some other benefits, which are outlined here, for example the hybrid integration with the virtual gateway appliance, built-in security, and again the ability to modernize your .NET applications here as well. It's all outlined in that blog post; it's currently in preview, and we just announced it at the end of May, I think during the Build timeframe.
So definitely have a look at that and try it out. Cool. Let's switch from my screen so we can do the knowledge check. The next question is: you're preparing for a deployment of containerized workloads by using Kubernetes on the existing Azure Stack HCI cluster that you set up for Contoso. You need to minimize the overhead associated with maintaining operating system images for Kubernetes cluster nodes. What should you do first? I'm going to give it a second so you can vote. And by the way, while we wait for you to vote, there's something I need to mention about this module overall. Being a managed service, AKS on Azure Stack HCI gets updates constantly and frequently; there's at least a monthly update, I believe, for the images, with security patches and everything, so you can update your nodes. By the way, the upgrade process is also much easier with AKS on Azure Stack HCI, because we give you the image; you just run through the flow for upgrading or updating your nodes, and we built the back-end process for you. With that said, of course this module becomes outdated as soon as a new feature is delivered to you via those updates, right? Some things in here have been changed, are no longer necessary, or have changed completely. We're going to cover those as we go through the module, but just so you know, the module was made at a point in time, and AKS on Azure Stack HCI has continued to receive updates, so some things are not exactly as the module describes. So Thomas, do you want to take on which one is the right one here? We have A, deploy the AKS cluster; B, register Azure Stack HCI with Azure; and C, onboard the Azure Stack HCI to Azure Arc. Now, you mentioned that we don't strictly need Azure Arc; we can run without that. We can register with Azure, but that's also not the first step in all cases.
So I would personally go with A, deploy the AKS cluster. It's not like you knew the answer beforehand. No, of course not. But I had a look, and it was actually a tough one, right? I didn't really like the question, because it's trying to trick people: what do I really need to do first? We got a couple of people voting for A, a couple voting for B, and a couple voting for C, so I don't feel bad. This was a bit of a trick question, and I don't like those, but I didn't write it, and I believe Vinicius didn't write it either; otherwise it's all his fault. Exactly. But no, really, you can just deploy an AKS cluster. All right, so do you want to go over how to deploy an AKS cluster to Azure Stack HCI? Yep, makes sense. Let's have a quick look at this next unit. I will show you some live demos, by the way, and Vinicius too, where we have already set up the Azure Kubernetes Service. Right, so we will now take this unit and go through what you actually need to do, and then we will show you how to create the Kubernetes cluster, how to manage it, and so on. It's actually fairly simple. Obviously you need some underlying infrastructure, meaning you need to have either Azure Stack HCI or Windows Server available; the module doesn't really mention Windows Server, but it's there too. Then you basically download the AKS on Azure Stack HCI software; well, you don't even need to do that manually. You install Windows Admin Center, and in Windows Admin Center you have an extension for the Azure Kubernetes Service, and this will basically do all the installation for you.
So here the module is still a little bit behind, where the Azure Kubernetes Service extension still needed to be added in a special way. With the latest version, you can just forget that step; it's really not something you need to do anymore. It's already there if you download the latest version of Windows Admin Center, and this is what it will look like. So you can simply skip this step: if you open your Windows Admin Center installation, you already see Azure Kubernetes Service here. Exactly. The only thing I would add is that you will have a little button that says it's installed, and it will also tell you when there is an update available, right? Since we have updates to the Azure Kubernetes Service and we add new features, there are regular updates to that extension as well. Now what you need to go through is the wizard. If you go to your Azure Stack HCI cluster and manage it in Windows Admin Center, or you go to your Windows Server cluster, or even a standalone Windows Server if you want to try it out, you will see the Azure Kubernetes Service extension on the left side. If you haven't set up AKS on Azure Stack HCI already, this is what it will look like. It will tell you the prerequisites: what you need to have, how much storage space you need, what else you need to prepare, and you obviously need to register your Windows Admin Center with Azure to go through all of that as well. Then, as the first step, it will run some system checks: it will make sure that you actually have enough storage, that the nodes are configured correctly, whether there are any updates left or restarts pending; you get all of that. And if something is not configured the right way,
You can just hit the Apply button there and then go through that as well. Next up, you do connectivity. So the way Windows Admin Center works, it does remote PowerShell connections in the background and it runs PowerShell commands against the Azure Stack HCI nodes or the Windows Server nodes. And to do that remotely, we need to enable CredSSP. We can do that by simply hitting the Enable button, and that's it, if you haven't set it up already. And then if you go further, now you're basically setting up your management cluster, which is basically what the AKS service is gonna be. You saw that specific cluster on the architecture graphic. So you need to give that a name and you need to provide a cluster shared volume for it. This is a shared volume, which is, again, highly available. And this is where all the images get downloaded, where everything gets deployed on, and it's gonna be used by the AKS service. You can also then do some additional network settings. You need to say which virtual switches should be used, you need to say which IP addresses are going to be used for the Azure Kubernetes Service, and so on. And then there is a step where you do the Azure registration. This is for billing reasons, right? We obviously charge you for using AKS, so you need to register it with an Azure subscription. And at the end of it, you basically go and review all the settings you entered, you hit apply, and then it takes a couple of minutes to actually deploy AKS on Azure Stack HCI or Windows Server on-prem, right? So pretty cool thing, pretty straightforward to do. It's more about thinking which IP addresses do I use, what subnet do I use, and all of that, versus actually knowing commands to install it. Now that said, all of that can also be done using PowerShell.
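To give a rough idea of that PowerShell route, here is a hedged sketch using the AksHci module. All names, paths, and IP ranges are placeholders for your environment, and parameter sets vary slightly between releases, so treat this as an outline rather than a copy-paste recipe:

```powershell
# Run on one of the Azure Stack HCI / Windows Server nodes.
# Assumes the AksHci PowerShell module is already installed;
# names, paths, and IP ranges below are placeholders.
Initialize-AksHciNode

# DHCP-based network with a VIP pool for load balancer addresses
$vnet = New-AksHciNetworkSetting -name "aksvnet" -vswitchName "External" `
    -vipPoolStart "10.0.0.20" -vipPoolEnd "10.0.0.50"

# Where images, working files, and config land (a CSV on a cluster,
# or any data volume on a standalone server)
Set-AksHciConfig -imageDir "C:\ClusterStorage\Volume1\Images" `
    -workingDir "C:\ClusterStorage\Volume1\WorkDir" `
    -cloudConfigLocation "C:\ClusterStorage\Volume1\Config" `
    -vnet $vnet

# Azure registration for billing (one-time, per deployment)
Set-AksHciRegistration -subscriptionId "<subscription-id>" `
    -resourceGroupName "<resource-group>"

# Deploys the management cluster
Install-AksHci
```

The steps mirror the wizard exactly: prerequisites, network settings, Azure registration, then the actual install.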
So if you want to script the installation, you can do that too, and everything is done in basically a similar way. Yeah, I would say for learning purposes, go with Windows Admin Center for the first time so you understand what's going on and what the process is, and then PowerShell is just so much faster and you can automate stuff after that. Another comment: these screenshots are, again, from the point in time when the module was made. A lot has changed, including some configurations. If you don't understand what a configuration is, just go to the documentation page; everything is documented there, explaining what to do. The majority of the issues that we see with customers are related to IP configuration and IP ranges: your load balancer IP range versus your existing network range versus the Kubernetes configuration. So IP configuration tends to be the number one issue that we see customers facing when deploying AKS on Azure Stack HCI. So I highly encourage you to take a look at the documentation and learn a little bit more. So let's go with the knowledge check, or are there questions here, actually? Do you see any questions? There is just a comment, I see one, like multiple comments on the one terabyte CSV. Yes, so obviously it's not gonna be fully used in that sense, but it's gonna be where all the images get downloaded, and then also your worker nodes get created on that as well, and so on. So we obviously want to make sure that you have enough space there. If you're, by the way, running on a single Windows Server, for those of you who are thinking now, how does that work there? I don't have a CSV. CSV, by the way, stands for cluster shared volume. Then you can just use a data disk. You can just have a volume, a data disk, there to try it out. It's just using a drive letter for that. Yeah, there was also a question about using an Azure VM.
I think it was little panda cub who asked the question, can you use an Azure VM to go and test this? Yes, you can. And in fact, we have documented these steps to do that. So if you go to the AKS on HCI documentation page, in the how-to section, the very first link there, there is an option for running on Azure, and we have a deployment template that deploys an Azure VM and all the necessary components on Azure to make that work: nested virtualization, all the IP configuration for it to work nested on Azure. So you can deploy AKS on Azure Stack HCI, which is actually on a Windows Server, on a single Azure VM. Well, naturally, keep in mind it's a large VM, but you can. Yep, absolutely a great way if you don't have hardware available and you wanna test it out. But again, as you said, it's a large VM. All right, so the question for the knowledge check: you've documented the AKS installation procedure for Contoso's operational team that will be performing similar installations on many Azure Stack HCI clusters. You must list the prerequisites for installing the AKS extension for Windows Admin Center. Which steps should you include in your list? And from what Thomas mentioned, I think you know what the answer is, but you know that this question is also outdated, right? So the first one is enabling CredSSP, which Thomas mentioned; that's for a different step, right? Modify Windows Admin Center feed manager settings, and register Windows Admin Center with Azure. So I'm gonna go ahead and show the right answer for the wrong question, which is modify the Windows Admin Center feed manager settings, right? So, as Thomas mentioned, you can skip this step today in the latest version of AKS on Azure Stack HCI because it's already there, it's already baked into Windows Admin Center. So every installation of Windows Admin Center already has the add-on that is needed here. So you don't need to change the feed and manually download the extension.
But with that said, that was the requirement in the first versions of AKS on Azure Stack HCI. Awesome. So now we have AKS on Azure Stack HCI and Windows Server installed, basically ready for us to use, but we don't have a Kubernetes cluster right now to actually deploy our application, right? We just installed the service itself. So Vinicius, can you tell us now, okay, what do I do now to actually create a Kubernetes cluster? Yes, yes. So again, just to recap, the management cluster is okay. Now we're gonna create what we call a target cluster, which is where your applications are gonna reside. And again, just like the previous module, there's a bunch of screenshots here. So what I'm gonna do instead is switch to my demo environment, where I actually have exactly what I just mentioned. This is a Windows Server VM running on Azure, and I have everything pre-configured here. And you can see that I did the previous step, which is configuring the management cluster, so the Azure Kubernetes service. Oh, by the way, you see the difference between the screenshot from the module and this one. This one shows more nicely, on the left-hand side here, and it talks about the management cluster, which is my host VM in this case, because remember it's AKS on Windows Server, but it will look exactly the same as if it was on Azure Stack HCI. The difference here is you'll notice at the bottom that I don't have any Kubernetes clusters available to deploy my application to, right? So what is gonna happen here is I can go and click add cluster and then go through the process of adding my Kubernetes cluster that is actually going to host my applications. So it talks about the prerequisites. Again, 10 gigs of space for the cluster, the volume for the images, at least 16 gigs for the running VMs, and AKS must be set up already. So let's go to the basics. I'm gonna connect to Azure Arc. I can disable, by the way, this is what I mentioned before.
You can disable this for the target cluster, and if you wanna enable it later, you can do that. So I'm gonna click enable. I'm gonna go to my Azure subscription, I'm gonna look at the resource groups, and then the region is East US, so that's correct. The cluster details, so this is the management cluster, I mean, the host, the username, and of course I have to provide my password here. And we're gonna validate the credentials against the host. It's gonna find my management cluster. This is the thrill of running a live demo. Things can work or they might just be stuck at this step. No, it just takes a moment until it can do everything. It will just be validating. Obviously, the more stuff you have, the longer it will take, but you see, there we go. All right, so I'm giving it the new cluster name. I'm gonna call it VinnyAP cluster. It's validating the name. It's asking me what version of Kubernetes I wanna use, and the latest one available on AKS-HCI is 1.22.6. I can go back all the way to 1.20.13. It depends on the features that you're expecting to see on Kubernetes, and then whether your application has any dependencies on those versions. So that's the reason we have multiple options here. And then the primary node pool. And node pools, we're gonna talk about them later, but this one is basically the control plane for your cluster, right? So you have the management cluster as one thing; this new cluster also needs a control plane, what's called in Kubernetes a master node. So I'm gonna give the load balancer VM a size, the control plane a size as well, and how many instances I want. So if I have a large workload running, supporting millions or whatever number of customers concurrently accessing your application, you might wanna add more nodes to your control plane so it can take the load of your environment. Usually what you wanna do is have high availability for the control plane as well.
So if something happens with that VM, you have another one, but for the purpose of the demonstration, we're gonna keep it as one. Here I have the node pools, and you can see that I don't have any node pools to run the applications themselves. So I'm gonna add a node pool and I'm gonna give it a name. I'm gonna call it WS pool, for Windows Server. I'm gonna say it's Windows, and then the default node size. And this is basically saying that all nodes created in this node pool, as you scale up or down, are going to be created equally, right? So what is the standard size you wanna use for the VMs in this node pool, and you can have a different node pool with different standards for your configuration. So I'm just gonna use the default, which is four gigs and four CPUs. Node count, I'm gonna use just one as well so I can create this quickly. The maximum pods per node depends on how many pods you wanna put on a node. If you have a very good strategy for your application, usually your developers can give you a hint on how many pods you can use, but it's just a good practice to make sure you're following whatever your developers are saying in terms of the performance of your application. We're gonna cover taints and tolerations later. So I need a Linux node pool as well, by the way, because I have configured Arc. So I'm gonna create an LS pool, same thing, I'm gonna say I just want one node, I'm gonna add it, and now I can go to the authentication. The Linux node here is needed simply because the Azure Arc configuration runs as an agent, which is in fact a pod inside of Kubernetes, and that's the reason why you need a Linux pool for having this cluster connected to Azure Arc. I'm gonna disable this, but if I wanna use RBAC and make sure that my Active Directory users can access this cluster, I can do that by enabling the authentication here. The networking configuration, multiple options. The default today is to use Calico.
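Everything configured in the wizard up to this point (cluster name, Kubernetes version, control plane count, Windows and Linux node pools) can also be scripted with the AksHci PowerShell module. A hedged sketch, with cluster and pool names as placeholders and parameter names as they appear in recent module versions:

```powershell
# Create the workload (target) cluster with one control plane node
# and a one-node Linux pool; names and version are placeholders.
New-AksHciCluster -name "vinnyap-cluster" `
    -kubernetesVersion "v1.22.6" `
    -controlPlaneNodeCount 1 `
    -nodePoolName "lspool" -nodeCount 1 -osType Linux

# Add a Windows node pool alongside the Linux one
New-AksHciNodePool -clusterName "vinnyap-cluster" `
    -name "wspool" -osType Windows -count 1
```

As in the wizard, the Linux pool is what hosts the Azure Arc agents, so keep at least one Linux pool if you plan to Arc-connect the cluster.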
I think one of the things I need to do here is just click on this one. This is the configuration that came from the configuration of the management cluster, so you can see I have a range of IP addresses that are going to be used for the load balancer. Everything's configured via DHCP. Security configuration, I'm gonna use Calico as well, and I can configure the secure network policy later, and that's pretty much it. I'm gonna review, I'm gonna click create, and that starts the process of loading images, configuring the cluster, and everything else. So this takes a few minutes. We're gonna leave it here, maybe come back later, but that's the process of how you create a new cluster using Windows Admin Center. Of course everything's available in PowerShell. Yeah, that's, again, super easy to do and to set up. I wanna quickly mention two awesome things and then also go into one of the questions which I just saw. First of all, usually the workers can be scaled horizontally; we also just released, by the way, a way to do that vertically. So you can change the size of your workers later on as well. That, I think, is in since the latest release in May. You just mentioned DHCP; I also wanna clarify, you can also do a static setup. For those who are looking at this thinking, I don't have anything implemented with DHCP, you can also do a static setup. And then I have one question here from Andri: deploying the management cluster will not be charged, right? So that's a pricing question, and I wanna quickly share my screen here and actually talk about this as well. So let me quickly put my screen on. So we have a document describing pricing, which you can obviously also find on the official pricing page, and you can see here what we're actually billing. So how do we actually charge for it?
And it's per vCPU for the worker nodes running in the workload clusters, meaning you're absolutely right: for the management cluster itself, you don't get charged. Only for your workload clusters, like the ones Vinicius just created, will you be charged. There's a price per vCPU per day; you basically get charged for that. But again, you get a lot of different things here. And then there's some additional stuff depending on whether you're gonna use Azure Arc, which offers a couple of benefits, but we will talk about that in just a bit. The question always comes up, okay, what about hyper-threading? And so if hyper-threading is enabled, what happens is, well, we split these actually in half. So you can still use hyper-threading and not pay more because you use hyper-threading, right? So I think that is a very good resource. We have this page; if you wanna learn more about how pricing works, I highly recommend checking this out to understand more about it. Yeah, that's great. By the way, just for the sake of showing some behind-the-scenes stuff, let me go back to my screen. So this is the same environment, right? So instead of looking at the Azure Kubernetes Service, I'm looking at the VMs, the virtual machines running. And you can see that I have a cluster management control plane, I have a control plane for the VinnyAP cluster, and I have a load balancer for the VinnyAP cluster. None of these VMs are being charged, right? The only VMs that are going to be charged are the worker nodes that Thomas mentioned. So the Linux node pool with one node and the Windows node pool with one worker node. Those are the only VMs that are going to be charged, and it's by vCPUs on those VMs. So just to clarify further, showing the background of my deployment here. All right, knowledge check time. So let's take a look at the question here.
So you're documenting the procedure of creating a Kubernetes cluster for Contoso's IT operations team that will perform similar installations on many Azure Stack HCI clusters. You need to describe the step to configure the size of the VMs used to host the control plane components of the Kubernetes cluster. Which of the following is that step? So remember that we talked about the configuration, and we know that there are multiple steps. Which step is the one for configuring the size of the control plane? We're going to give it a second so people can vote. Yeah, so you can imagine that this step was, I think, pretty much at the beginning, just to help a little bit here, right? So it is obviously something you need to do very early on. It's one of the first things you need to decide. So again, feel free to go out and take the poll. So we have A, node pools, B, basics, and C, integration. So again, given that it was really early in the beginning, it's a pretty basic setting, I would say. I see what you did there. Thank you. It took me a moment. All right. Perfect. I can see some people already answered the question right, so congratulations. It's, as Thomas was saying, the very first step, so it's the basics one. So with that, let's move to how to connect your Azure Kubernetes Service on Azure Stack HCI to Azure Arc. Thomas, do you want to take that? Yes, absolutely. So I love that step, obviously, because this now gives you really the power of Azure on-prem, right? We give you the Azure Kubernetes Service in your data center. You can use that and run it, and that's already a great benefit. But obviously there is more you can do. So first of all, what is Azure Arc? Azure Arc really allows us to extend the Azure control plane and Azure management capabilities, as well as Azure services, to basically any infrastructure.
Meaning that you can connect things which are running on-premises or at other cloud providers, connect these up and manage them through Azure Arc, or deploy Azure services outside of Azure. That's really the basics. Again, we had sessions explaining Azure Arc; feel free to check out the whole study series to learn more about this. But what we now can do is use Azure Arc to connect our AKS on Azure Stack HCI cluster and bring that into the Azure control plane. And then we will see it in the Azure portal, as well as with the CLIs and PowerShell, and we can then do some of the cool stuff we can do with Azure. So we're gonna have a look at this directly. So if we go through the module again, I could now basically go through this, but let's do a quick demo, and let me quickly share my screen here to actually see what is going on. So here I'm still in Windows Admin Center, right? So what I did, I already connected it. I also created a Kubernetes cluster, and you can see here, let me quickly go over here, you can see that this cluster is now connected using Azure Arc, right? So I have a link going directly to the Azure portal if I click on this. So what I'm gonna do is open up the portal, and I'm not using the link right now because I wanna show you how Azure Arc works in general. So if you go to the Azure portal and you search for Azure Arc, you will see everything I just explained to you, like adding existing infrastructure and managing it directly from Azure, or deploying Azure services outside. And on the left side, you can now see I have certain things you can do. You can connect your Azure Stack HCI clusters to manage and monitor them. You can connect servers, Linux and Windows, SQL. You can even connect your VMware vCenter systems. And now, since yesterday, also System Center Virtual Machine Manager servers. And then do VM life cycle management.
The same as you can do for Azure Stack HCI, you can then go into the Azure portal, create VMs, delete VMs, make them bigger, make them smaller, whatever you want to do. And so we are obviously gonna look at Kubernetes. And in the Kubernetes world, we can connect not just AKS on Azure Stack HCI and Windows Server; we can also connect other Kubernetes clusters. So for example, if you run OpenShift or anything else, you can connect that as well, right? We have a list of validated Kubernetes distributions, which you can have a look at, but there are a lot of them out there. So let's go to that cluster I just showed you. Now, this is the same cluster I deployed locally here on my Azure Stack HCI or Windows Server system. And you can see here, it looks like an Azure resource, right? It's connected with Arc. You can see it's part of a resource group, it's part of a subscription. What you can see here is some specific information: you can see that it actually runs on Azure Stack HCI, you can see the Kubernetes version I'm running. And on the left side, you can see the typical Azure things. Now, with it being an Azure resource, I get the benefits of role-based access control, for example. So I can use Azure AD to make sure, hey, who can administer my Azure Stack HCI or my AKS cluster, right? So now you can do a lot of the management through that as well. You also get the benefit of audit and activity logs. And what I really like, and what a lot of customers like, is the integration of Microsoft Defender for Cloud and for Containers. So you can basically connect that, and it will give you security recommendations and security alerts if there should be something happening. There's a lot more. What I wanna show you quickly, like some of the benefits, you can also enable, for example, logs. So what you can do here, simply run a query.
So we take the KubeEvents table, run that query, and then you can see here, you get those logs directly in the Azure portal without even connecting directly to the Kubernetes cluster. And then you can even do monitoring. So if I go to insights here, I get the power of Azure Monitor for containers. Similar to what you get for AKS running in Azure, you get that also for Azure Arc-enabled Kubernetes. You can see here, if I look at the cluster, I can see the node utilization and so on, get all that information. But then, for example, you can also monitor your containers directly as well. So you can see here, I even get some warnings and stuff like that. So I really get some good information on what is going on here. Now, there is more. Again, there's a ton of stuff we could talk about, but we also need to be conscious of time. I can also use, for example, Azure Policy to make sure that my Kubernetes clusters are configured in the right way. So that, for example, I don't have any ports open that I don't want to be open, and stuff like that. So if I'm in charge of compliance, I can do that across my Azure environment, but now with Azure Arc, I can also extend that to Kubernetes clusters running outside. We get a lot of extension stuff, which, again, maybe we can talk about a little later. But one thing I wanna show you is the GitOps integration, or Flux integration. So if you look at this, what I did here is, for those who are not familiar with GitOps or Flux: if I have an application, I store the application and the configuration of that application in a Git repo. This can be a public or private repo. In my case, it's a public one, but obviously, if it's a private application, you wanna have it in a private repo. And then you tell the cluster, with that Flux configuration, to basically go and get that application or that configuration from that Git repo, right? And then this will be deployed on the cluster.
And in my case, you can see here, I did say go every three seconds and check for something new, right? That's probably not what you wanna do in production. But in terms of the demo, you will thank me that it's not two hours, otherwise we would be waiting a very, very long time. So let's have a look at the application. Again, this is the app I designed here, and it says Hello Azure Arc. You can also see that I took all my web design skills to build that, so it looks very nice. I also made a nice choice of colors and so on. But let's look at the Git repo for a moment. So this is the Git repo where this application is stored. That was the one I was showing in the portal before. And then under releases, on the prod path, there is actually the YAML file, which is basically the configuration of that application, right? And we did a couple of things. So you can go out and do the replica count and stuff like that. But one thing we did is we took a variable here for the message, and that was the part which says Hello Arc, right? So let's change that. And I'm gonna do now something which, please don't do at home and also don't do at work: I'm gonna make a change and directly commit it to my main branch. So I'm gonna do Hello Learn Live. I'm gonna change that. At least I'm going to do a commit message here. But again, you obviously would create that in a different branch, then merge it and go through approval steps and all the good stuff you get by using Git. But again, I'm going all in here with a commit straight to my main branch. And so I applied this. And what happens now with that change is, as I said, the agent on that cluster will go and check the Git repo for changes every three seconds. And if I now talk long enough and I go and do a refresh, so quickly refresh here. It's refreshing. Okay, I just got it in the moment where it was refreshing. So we can see here now it says Learn Live. And so I did that change.
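The manifest the Flux agent pulls in this demo would be roughly of this shape. This is a hypothetical reconstruction, not the actual repo contents: the deployment name, image, and the MESSAGE variable are all illustrative.

```yaml
# Hypothetical shape of the demo's manifest; names, image, and the
# MESSAGE environment variable are placeholders, not the real repo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-arc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-arc
  template:
    metadata:
      labels:
        app: hello-arc
    spec:
      containers:
      - name: hello-arc
        image: <registry>/hello-arc:latest
        env:
        - name: MESSAGE
          value: "Hello Learn Live"   # the string changed in the demo commit
```

The commit to main edits only that value; the Flux agent notices the new revision on its next sync and rolls the change out to the cluster without anyone touching kubectl.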
Now, what are the benefits of this? Obviously it kind of feels like an infrastructure-as-code thing. I get the advantage of the Git commit, so I can actually look and see what the changes were. I can work with different branches, I can add approval steps and all that good stuff. And I can also obviously revert back if I wanted to. The other thing is, you might say, well, Thomas, it would be way quicker if I directly deployed the application. And you could be right. If I sit next to the Kubernetes cluster, I can directly deploy that super fast. But think about the scenario where, first of all, you have maybe hundreds, if not thousands, of Kubernetes clusters running the same application, or where, for example, you're working from home and you don't have direct access to the Kubernetes cluster. You can now do the changes and everything securely using your Git repo, using Azure and so on, without having a direct connection like a VPN or something to the Kubernetes cluster, to the AKS cluster running on-premises in your local data center or at your edge location, your retail store, and still do that change, right? Again, we have customers running hundreds, if not more, of these AKS clusters in, for example, their stores and factories. So if they wanna go out, they can really, really quickly deploy a new version of their application. So I think that is pretty cool, what you can do using Azure Arc. By the way, that also works with Azure Kubernetes Service in Azure. We just released Flux v2 as generally available, so you can really make use of that in that case. So there's a lot more obviously we could talk about, and we'll quickly look at the questions as well. I think one of the questions that came up that was interesting is: where does Arc reside? Is it in the cloud? Is it on HCI? Anywhere? So this is actually a good question, and I really wanna highlight what just happened, right?
Azure Arc really connects this Kubernetes cluster to the Azure control plane. However, all the data of what you have on-prem, your apps, your data, the secure stuff you want to run on-prem, all that stays on-prem, right? It all stays on the Kubernetes cluster itself. But some metadata, obviously, like names and stuff like that, goes into the control plane. So again, if you have data sovereignty challenges where data needs to reside in your location, for example, that data will not be synced up to Azure, right? So it's really connected through the Azure API; Azure Arc really is a connection to the real Azure control plane. Yeah, it's an agent-based installation reporting metadata back to the Azure control plane, right? Yes, yes, absolutely. And quickly, if you wanna add a Kubernetes cluster, we have a simple way here. And again, this also works if you have other Kubernetes clusters; you can just go through that wizard and it will help you to onboard that cluster. So you can see here what you need: a new or existing Kubernetes cluster. You can see the versions it includes, for example, also OpenShift and others, and you can learn which other Kubernetes distributions work. You need to have these outgoing ports open and allow outgoing traffic to the Azure APIs. And again, you can find all of that in the documentation. And then you can go through the onboarding steps. Now, there's one thing that is very important about AKS on Azure Stack HCI on that page, if you can go back there real quickly, which is the set up your local machine part. That part is not needed for AKS-HCI, because we actually have that command baked into the AKS-HCI PowerShell commands, right? Exactly. Yeah, go ahead. Yeah, and you're absolutely right. If you're using AKS, you just need to basically enable the networking pieces, right?
That's what you need to make sure that your AKS cluster can actually communicate with the Azure management APIs. Again, as stated here, if you click on that outbound URL link, you will see that. But all of the other things are already done. Azure Arc is really baked into AKS on Azure Stack HCI and Windows Server. Yeah. All right, in the interest of time, let's move to the knowledge check. So let's go to my screen. And the question now is, oops. Okay, so remember you can vote at the aka.ms link. To promote operational excellence in Contoso, you decide to script the process of connecting Kubernetes clusters on Azure Stack HCI to Azure Arc for Kubernetes. You use a newly created subscription and want to ensure that onboarding will complete successfully. What should you do after you successfully authenticate and access the target subscription? So I have to apologize, I was so excited I didn't cover that part, but the way Azure works is we have all these different resource providers, right? And so if you're onboarding a new service, you may basically need to register the resource providers in the subscription if you haven't done that already. So that is a one-time configuration, right? Exactly, exactly. Now, I'm not even 100% sure, by the way, if you go and run through the Windows Admin Center wizard, whether that's not already done for you. No, it's not. I remember looking at the documentation. So the documentation gives you the commands that you have to run, because this is a subscription-side configuration, not an AKS-HCI-side one. So yeah, you need to do that in your subscription, and you do this once. For every other cluster that you decide to enable Azure Arc on, you don't need to do that again. So I think it's pretty obvious here, we gave it away: register Azure resource providers. This is the first thing you need to do before you can successfully onboard to Azure Arc the AKS-HCI cluster and, in fact, any Kubernetes cluster.
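A hedged sketch of what that one-time registration might look like in PowerShell, assuming the Az module and an authenticated session. The provider list shown is the one commonly associated with Arc-enabled Kubernetes; check the documentation for the current required set, and the cluster name is a placeholder:

```powershell
# One-time, per-subscription: register the resource providers that
# Azure Arc for Kubernetes depends on (run once, not per cluster).
Register-AzResourceProvider -ProviderNamespace Microsoft.Kubernetes
Register-AzResourceProvider -ProviderNamespace Microsoft.KubernetesConfiguration
Register-AzResourceProvider -ProviderNamespace Microsoft.ExtendedLocation

# For AKS on Azure Stack HCI, the Arc onboarding itself is baked into
# the AksHci module, so per-cluster it is a single call.
Enable-AksHciArcConnection -name "vinnyap-cluster"
```

Registration can take a few minutes to complete; once the providers show as Registered, every subsequent cluster in that subscription can be Arc-connected without repeating it.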
All right, I'm gonna move to the next section, which is managing pods on multiple Kubernetes clusters. So let's take a look at the module itself. Let me open that. All right, so we mentioned node pools already, so let me briefly explain what node pools are. Node pools are a way for you to set which set of machines or VMs are going to be part of a single configuration, right? In AKS, that basically means that you are going to have either Windows or Linux machines as part of that node pool, or that group of nodes on Kubernetes. And then you can apply different configurations to those node pools. You can even give access to other resources per node pool, by configuring managed identities, for example, for a specific node pool, so only nodes that are part of that node pool can access the other resources that you gave them access to. And there's a bunch of other configurations that are part of node pools. But in essence, node pools are a way for you to standardize the configuration of specific nodes that are going to meet specific requirements. Usually we have node pools assigned for either Windows or Linux nodes, and then you specify node pools for specific applications because they have a specific size, and then you can have that application running on those nodes and scale those nodes up or down and so on. Now, what happens is, from the application side, as I mentioned, you can have either Linux or Windows node pools. But then, from the Kubernetes perspective, Kubernetes will try to schedule pods on the best node possible for your application, but it doesn't necessarily take into consideration that a Windows application is running in that container, right? So one of the things you wanna do, specifically for Windows and Linux, is specify which nodes you want this application to run on. So for example, one of the things you can do is use node selectors.
And node selectors are part of your YAML file, that configuration file I mentioned that is a blueprint for what should be configured. And then you have here, for example, an example for a pod, and that pod, like I said, usually you're gonna have a deployment, but that doesn't change the fact that down here you have a node selector, and then you select, for example, kubernetes.io/os, which is a label, metadata on the node indicating whether the OS in your environment should be Linux or should be Windows. You can change that configuration in the specification for your deployment or for your pod. So this is one way: you can either say that the OS metadata on the node should say Windows, or it should say Linux. So that's one option. You can actually use the node selector for other stuff as well; there's a bunch of options in the metadata on a node in Kubernetes, and you can choose other things to say, I want my application or my pod to run only on nodes that have this specific metadata, right? Another option you have is taints and tolerations. So basically taints and tolerations are used in conjunction, and, I apologize for this, but I always get confused on which is what, but basically taints and tolerations provide an alternative for pod placement, and let me just read this real quick. Taints are part of the node configuration and tolerations are part of the pod specification. So I apologize, that's the thing that I always get confused about and I need to read to remember which is which. So the taint is basically part of the node configuration, where you say this is what the node should be, and the toleration is in the pod configuration specification. So here's an example. I have a toleration here, remember this is the pod specification, and I'm saying that the toleration here is for a key, os, that equals Windows, and we are going to use the effect NoSchedule, right?
Now, again, this is the pod configuration, and you also have the node configuration, which is the taint, right? So I'm gonna take this node and say this node will have a taint, os equals Windows, associated with the NoSchedule effect. So in this case, whenever I use this taint, I'm sorry, this toleration for the pod, the pod will tolerate the taint on the node and can be scheduled there. So in this case, we are placing Windows applications on these specific types of nodes. Now, one thing to keep in mind here, and this note is very important, which is the reason why I'm gonna read it, is that node selectors, which I explained before, enforce placement of pods on a specific set of nodes, right? So you enforce that configuration. In no case is a pod going to be scheduled on a node that doesn't match the node selector, whether that's Linux or Windows, right? Taints, or sorry, tolerations allow pods to run on a designated set of tainted nodes, but do not prevent placement on nodes without taints. Does that make sense? So one is an enforcement, and the other one is a preferred way for you to schedule a pod, right? I apologize. And again, there's a lot of other metadata that you can use to go and place your application on specific nodes, or place pods closer to each other, or on specific nodes. And there are multiple ways for you to run this. The way you're gonna use depends on your application. In most cases, you're... Oh, did we just lose Vinicius? Not 100% sure, but I guess there is some connection issue, because it's pretty awkward how he actually stays there. So yes, we quickly lost Vinicius. So we will quickly go to the actual question here. And so we're gonna ask, and I'm gonna quickly add the question up here on the stream. So when he comes back, we will have to talk a little bit more about it until then. So the question is, Contoso is planning to deploy Windows and Linux containerized workloads into AKS on Azure Stack HCI.
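As a sketch of the pairing Vinicius described, the taint lives on the node and the matching toleration in the pod spec (the key `os` and value `Windows` are illustrative, as is the node name):

```yaml
# Node side, applied with kubectl:
#   kubectl taint nodes <windows-node-name> os=Windows:NoSchedule
#
# Pod side: the toleration that matches that taint.
apiVersion: v1
kind: Pod
metadata:
  name: windows-app    # hypothetical name
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019  # illustrative image
  tolerations:
  - key: "os"
    operator: "Equal"
    value: "Windows"
    effect: "NoSchedule"
```

Note the asymmetry from the module: the taint keeps non-tolerating pods off the Windows nodes, but the toleration alone does not pin this pod to them, which is why it is often combined with a node selector.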
You need to document the procedure that would ensure that Windows-based pods are deployed to the Kubernetes cluster nodes running Windows. So you decide to use taints and tolerations for this purpose. So to which cluster component should you apply taints? So I think Vinicius made it very, very clear. He explained that, he went through this. So let's quickly go through the options. You have A, pods, B, nodes, and C, deployment. So I'll give some time for you to answer. I see already some of the answers coming in. Let's also check out, if you check out aka.ms slash polls, you can see what the question is, but you can actually go and vote there as well. And I see people are actually selecting the right choice. So yes, with that, it's B. So for that, we have nodes, which is what you're actually gonna apply this to. So let me quickly switch back to the Learn module itself. And the next unit really is about managing pods and storage on Kubernetes clusters, right? So again, we have a lot more to cover, so I'm gonna go really quick, but this is actually something I wanna show you, because by default, you're obviously gonna run a lot of stateless containerized applications, but what if you have stateful workloads where you actually want to store data, where you need persistent volumes, for example, to actually store that data. So you don't wanna store data within a container, because again, the container could fail, and then it just gets recreated and everything you did is basically gone. So what we can do on AKS on Azure Stack HCI and Windows Server, we can actually implement persistent storage, and the module goes through how this is done, how you actually go and create the default storage class and so on. But the real magic, what you need to do, is you actually need to create a YAML file for a persistent volume claim, right? So you're gonna write this down, you're gonna say how big that is.
You write that in the specific YAML file, you store that, and then you can use kubectl to create the corresponding resource and actually go out and deploy that. And then in the pod manifest, when you're gonna deploy that, you can reference that persistent volume claim. And so then you can actually get access to this persistent storage, right? The module obviously goes through this, and also how you would actually remove it. Now, when it comes to removing, one thing that's important: you cannot remove a persistent volume claim if it's still assigned to any pods, right? Or deployments, so you can see that here. I think that is a good thing to know. So let's quickly jump back, give me a second. Quickly gonna jump to the knowledge check. So with that, let's go and jump here and then move this. Bear with me here for the flow. So let me get question eight up. So for that: because Contoso developers are working on containerized stateful workloads, you want to test the implementation of persistent pod storage by using your deployment of AKS on Azure Stack HCI. What do you have to define first? So I think I went through this pretty clearly. You have A, create a persistent volume claim, B, create a storage class, or C, create a persistent volume. So please feel free to answer the question; again, we'll give you a little bit of time to look at it, but I think I highlighted it a couple of times. And I see Vinicius is back, awesome. Yes, can you hear me? Yes, we can hear you just fine. Great. Perfect. So we have time, so. Yes, so people are voting, and obviously, as I can see from the people who are voting there, most of them selected the right answer. What you need to create is a persistent volume claim. Again, if you want to read more, there's more in the module as well.
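A minimal sketch of that pair, assuming the deployment's default storage class is used (the names, image, and size are placeholders): first the claim, then a pod that mounts it.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc          # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi        # how big the volume should be
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx          # illustrative image
    volumeMounts:
    - mountPath: /data    # where the persistent storage appears in the container
      name: demo-volume
  volumes:
  - name: demo-volume
    persistentVolumeClaim:
      claimName: demo-pvc # references the claim defined above
```

Both would be created with `kubectl apply -f <file>.yaml`; data written under `/data` then survives the container being recreated.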
So there's one thing I really want to talk about, or rather, one thing I want to let you talk about, which is the implementation of containerized Windows workloads. Yes. So this goes back to what we mentioned at the beginning, which is that if you are modernizing existing applications, one of the things you want to keep in mind is that not only are some architectural changes going to happen to your application, because you want to take, for example, the connection string for a database out of the application and put that as a string into your Kubernetes configuration, so it's not part of the monolithic application, but also there are some things that you might not be able to change. For example, what does the application authenticate against? And in the case of Windows applications, most of the time that is Active Directory, right? So is it possible to domain join a container? No, it's not possible to domain join a container. So how can an application running in a container authenticate against Active Directory? The answer to that is gMSA, which you see on the screen there, the group Managed Service Account. That basically is an account that is going to be used by the host on behalf of the container. So the host is going to perform the authentication on behalf of the container. The way that works, and these are the very, very high-level steps because we're running out of time, is your container is going to pass the authentication to your host. Your host is then going to talk to your Active Directory on behalf of the application inside of the container and pass on the authentication token to the container itself. There are a few ways to deploy this; the module describes the old way of configuring this, in which you would have to domain join your container hosts to your Active Directory domain. But now, because with AKS you should not be touching the nodes, you can use that option without domain joining your nodes.
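At the pod level, upstream Kubernetes wires gMSA support through the pod's Windows security options. A sketch, assuming a gMSA credential spec resource named `contoso-credspec` has already been created in the cluster as the module describes (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gmsa-app    # hypothetical name
spec:
  securityContext:
    windowsOptions:
      # References a previously created GMSACredentialSpec resource;
      # the name here is an assumption for illustration.
      gmsaCredentialSpecName: contoso-credspec
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
  nodeSelector:
    kubernetes.io/os: windows
```

With this in place, the Windows host handles the Active Directory handshake on the container's behalf, as described above.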
The question is, if you're not using the computer account of the node, the Windows node running your containers, then what account are you using? Well, that's now stored in a Kubernetes secret on AKS on Azure Stack HCI. And then the computer, your node, is going to use that secret to go and talk to the Active Directory domain and authenticate the application on behalf of the container. So again, very high level. I recommend you take a look at this module to understand the architecture of how that works. But this is how you can containerize an existing application with Active Directory dependencies in Windows containers. Perfect. So with this, let's skip that knowledge check for time's sake and just go to the next unit here. And again, because we're at the end of our time, I'm not gonna go through all of it, but this unit of the module goes through troubleshooting Azure Kubernetes Service on Azure Stack HCI and Windows Server. Now, one thing which is covered here, and I think that's very important: if you wanna go in and have a look at the Windows and Linux worker nodes, to actually see what is going on on those, you can do that, right? And so you can run, for example, the kubectl get nodes command, and then you will see all the different nodes which are part of that, and then you can actually go and access these. Now, how do you access these? It doesn't matter if it's a Windows or Linux machine, you can do an SSH connection, and you can find the SSH key at that path here on the cluster shared volume, and you can actually go through and log in with that key. You can also reset the keys with a PowerShell command if you want to. Again, that is something which you should probably be aware of.
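A sketch of those troubleshooting steps; the key path and user name below are examples only, check your deployment's documentation for the actual values:

```shell
# List the worker nodes, their roles, OS images, and addresses
kubectl get nodes -o wide

# SSH into a node using the key stored on the cluster shared volume.
# Both the key path and the user name here are illustrative assumptions.
ssh -i /path/to/akshci_ssh_key clouduser@<node-ip>
```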
There's a little bit more when it comes to authentication issues and things like that, which we will not dive into for now, but let's actually go and talk about the summary of what we just did today in this session. So in this Learn module, we really went through describing Kubernetes and the Azure Kubernetes Service, explained a little bit what that is. We then went through and had a look at what the advantages of Azure Kubernetes Service on Azure Stack HCI actually are, and on Windows Server as well, so you can run the AKS service on-premises. We went through how you can deploy AKS and Kubernetes clusters. We went through how you can connect them and then actually manage them using Azure Arc, and what the advantage is there. With Vinicius, we had a look at managing pod placement on multiple Kubernetes clusters. We had a quick look at the storage pieces, to see how we get persistent storage, and then how we can actually implement containerized Windows workloads. And we had a very brief look, and there's also only a small part in the module about this, at how to troubleshoot Azure Kubernetes Service on Azure Stack HCI or Windows Server. So, Vinicius. Yeah, I think we're out of time. So if you wanna learn more, just go to the module, check out all the documentation there. I also recommend checking the documentation on the docs page. And with that said, thanks a lot for joining us and have a good day. Thank you very much.