Hey, I'm here with Matt McSpirit, and we're going to talk about AKS on Azure Stack HCI, and how you can bring the AKS service into your data center. So stay tuned. So hey, Matt, how are you doing? I'm doing well. Thanks. How are you? Doing great. I'm happy to talk to you today about AKS on Azure Stack HCI. Matt, I always like to introduce the guests a little bit, or the speakers. I know that you have worked a lot in the past with Azure hybrid, especially also on Azure Stack Hub. Can you explain a little bit first about what you're doing, and then probably also tell us a little bit more about our hybrid story? Yeah, for sure. So hi, everyone. I'm Matt McSpirit, for those that don't know me, and my role currently at Microsoft is working with our customers and our partners around some of our early adopter technologies. So those of you familiar, we've just launched Azure Stack HCI, the new Azure Stack HCI 20H2. I was part of that early adopter program, the early access program, where we worked with a number of customers to test and provide feedback about the technology and make it the best it could be ahead of the first release. We're doing a similar thing for the new Azure Kubernetes Service on Azure Stack HCI. So that's part of my core focus now: working with organizations who are interested in running Kubernetes on-premises as part of their hybrid strategy, helping them onboard and understand the technology, and then helping us understand how it can move forward and really meet their needs in the future as well. So it's a great area to be in. Hybrid, as you know, Thomas, is an incredibly exciting and important area and a huge area of focus for Microsoft. When we think about hybrid from the Microsoft perspective, as you mentioned, Azure Stack has been a core pillar of that for some time. My focus in a previous role was around Azure Stack Hub, working with organizations on bringing a select set of Azure services to their data centers.
Now, Azure Stack, as many of you will know, is a family of technologies. So there's Azure Stack Hub; there's Azure Stack HCI, as I just touched on, running on a hyperconverged configuration on industry-standard hardware and bringing Azure services on top through the power of Azure Arc, as we'll touch on. There's also Azure Stack Edge as part of our portfolio, which focuses predominantly on machine learning and analytics on Microsoft first-party hardware that you run, again, in your location. But all the while, with each of these different Azure Stack technologies, we're bringing our innovations from Azure for you to run in your environments to meet your specific scenario needs. So whether that's compliance or regulations you've got to meet with data on-premises, whether it's to do with network latency, or any number of reasons that are relevant to your organization, the Azure Stack family brings those Azure technologies down to your environment. And it doesn't stop there. There's also IoT, bringing Azure services and integration down to the smallest of devices, whether that's something like Azure Sphere, or drones running on Azure-based technology and IoT integrations. It's an incredible set of innovations. Then finally, there's the new Azure Arc, which I say new, we announced it probably over a year ago, but it's still new to a lot of people. With Azure Arc, not only are we providing a layer of control in Azure that enables you to surface your non-Azure resources into Azure for management, for policy, for governance, but we also start to allow you to bring Azure services down to environments that aren't Azure. For instance, as shown on the slide here, we could push data services to an AWS cloud, or we could bring servers that are running in Google Cloud into management in Azure.
We've got a whole host of innovation taking place that's powered by Azure Arc, and it links in beautifully with the work we're doing around Kubernetes and Azure Stack HCI as well. Innovation anywhere with Azure is a great reflection of everything we're doing in the hybrid space. It's phenomenal. As you know, Thomas, as well. Yeah. Now, I think one of the things I always need to explain to a lot of people is, they ask me, okay, now you have Azure Arc, so does that replace Azure Stack, and all these kinds of questions. I think what is very important is that we don't just have a single product which is our hybrid solution, right? We don't say, okay, if you do hybrid, that's the solution, right? Exactly. "I'd like to buy one Azure Hybrid, please." Not quite that, yeah. Exactly. It really depends on what the customer actually needs. Speaking of customers, one of the big demands we are seeing is obviously application modernization, using, for example, containers, and so Kubernetes plays a big role in that, and we have some great services in Azure for Kubernetes, right? That's right. Yeah. Kubernetes on Azure has been in place for some time now, and in a variety of different forms. So for instance, if you've got a lot of experience with VMs on Azure, as many of our customers do, you can actually deploy Kubernetes yourself using our AKS Engine on Azure, which will stand up a Kubernetes infrastructure in IaaS VMs, giving you full control and exposure to all of the plumbing of Kubernetes to play with it and configure it as much as you want. But you know what, that's not everybody's cup of tea. Not everybody wants to get into the plumbing of Kubernetes. There are a lot of Azure services where the ask is: Microsoft, you handle the complexity of this particular service, just let me consume it as a tenant, if you will, or as a user. And Kubernetes is no different.
So we've developed in Azure the AKS service, the Azure Kubernetes Service. With Kubernetes on Azure and the innovations we're building into that Azure service, we're ensuring we're bringing in best practices from our customer organizations, from our support organizations, from innovation, from the industry. We're embedding all of that knowledge into the Kubernetes infrastructure, so we take care of it and you don't have to think about that as much. Security is another big part of this, and you know as well as anybody that Azure and security go hand in hand. It's incredibly important to us and to our customers. And with Kubernetes, we don't stop at the Azure infrastructure layer. We go into the application layers, the services, and we harden by design: not only how we build the products, but how we run them in the cloud as well. Azure support is incredibly powerful and useful to so many organizations who encounter issues and need support and help, and that doesn't change when we bring Kubernetes into the mix. We've got specialists and experts there to help you on your Kubernetes journey to get the best from the platform. And then, as I touched on previously, and something I'm sure many of your viewers are familiar with, there's the management in Azure: what you can do with policy and governance and all the different controls that are now being exposed, again through Azure Arc, to non-native Azure resources. Kubernetes fits into that plan, as we'll see a bit later on as well. So you get that enterprise control. But all of this is innovation that up until recently has been exclusively living in Azure from a Microsoft perspective. So if you had any needs to do things in your own data center, then you were really looking at building Kubernetes yourself, which, you know, is not that easy. Yeah, yeah, absolutely. So I mean, I can tell you, I know customers really love our AKS service running in Azure, right?
However, some of them are telling me, and obviously they come and say, hey, Thomas, it's great, we love it. We use it for what we can, but sometimes we have some data sovereignty challenges. We have some network latency challenges, or we don't have good internet connectivity in some locations of our companies, some branch offices or even data center locations or factories. And then they ask me, I want to use that AKS service in my own data center. That is what you're here to talk about today. Exactly, that's a perfect segue. Yeah, exactly, a perfect lead-in. So that is where the Azure Kubernetes Service on Azure Stack HCI comes in, which may be one of the longest product names that we've ever released, but we shorten it down to AKS on Azure Stack HCI, or even AKS-HCI if you prefer. Either way, this is a new technology currently in preview, so available in public preview. You can download this yourself and you can run it in your own environment. And what this brings is all of that cool stuff that I just spoke about around security, the best practices, the support, the control plane, the efficient management. It packages it all up in such a way that enables you to easily, and in a very automated way, and those are critical elements, roll this out in your own environment, running on a Hyper-V infrastructure: either running on the new Azure Stack HCI 20H2, or running on a Windows Server 2019-based Hyper-V infrastructure. And so what are some of the benefits there? Well, firstly, we bring all of that knowledge, that great innovation, from the Azure Kubernetes Service down. There's no point reinventing the wheel here. We've got a fantastic service in Azure. Let's use it, and let's bring it to an on-premises environment so you can run it at the edge or in your data center or wherever is relevant to you.
It's hybrid by design, not just based on the fact that Azure Stack HCI, the underlying platform, is hybrid and integrated with Azure, but also because the Kubernetes layer has come from Azure. So it's been designed with hybrid in mind, and we'll delve into some more of these in a short while as well. It's not just about Windows-based applications. As with Azure in general, there's a significant proportion of workloads out there on Azure that are not based on Windows, and Microsoft is fully embracing non-Microsoft platforms, workloads and more, and AKS on Azure Stack HCI is no exception to that. So if you're thinking about this and thinking, well, we've got Hyper-V, but that's just for our Windows workloads: absolutely not the case. Linux runs great on Hyper-V, and as a result of that, AKS runs great on Azure Stack HCI and Hyper-V as well. And then finally, that knowledge and that innovation we've built from hardening in Azure: again, we're not gonna throw that away, because that's incredibly important, and you can learn a lot and benefit from our innovations in that space and bring that to your own environment in a very secure, hardened solution. And so we're gonna delve into each of these in a bit more detail. I'm gonna show some demos of some of this stuff as well and show you how easy it is to roll it out, because one thing I think we've discussed in the past, Thomas, is around IT pros and Kubernetes. Because let's face it, it's containers, containerizing applications and workloads; it probably lends itself a little bit more to the developer audience, historically at least. But with Azure Kubernetes Service running on-premises on Azure Stack HCI, that's running on your infrastructure. You know, and you therefore need to be involved in this.
You need to be understanding and embracing, but you've also got the tools at your disposal to help control and manage and provide the best Kubernetes platform for your development teams building applications. So for IT pros, it's still incredibly relevant, if not more relevant going forward, as Kubernetes becomes more prevalent and starts to encroach on your data center and edge locations. I guess you've seen that as well. Oh, absolutely. It's so funny that you bring this up, because really, two days ago I had exactly that conversation. Someone asked me, why is it that people think Kubernetes is just for developers, right? Is that true? And I was like, no, of course not. Someone obviously needs to deploy that, and needs to manage it, and needs to keep control of it and take advantage of the security built into it. Developers are probably gonna use it and leverage it to deploy their containerized applications, but they should not be the ones managing that platform, right? And I think it's kind of like the stages of virtualization, where we did the same thing with VMs. I know people are gonna hate me for saying that, but it's very similar to what we do now with containers, right? Yeah, you're right. And so, as you mentioned, the IT pro is who needs to actually go out and deploy and manage that. I mean, this slide, you showed me and promised me some really cool things here, but there is always like, okay, now you promised me that I can run the Kubernetes environment on-premises, and usually that takes a lot of effort to actually go out and deploy and manage. So is that different? Well, yeah, I mean, it's on my slide in the middle bullet point: easy deployment. You've got a simple deployment, and the nice thing, and you'll see this shortly, is I'm gonna show you both of these deployment types, because there are two.
For those of you who love PowerShell, which should be everybody, right? Everybody loves PowerShell. You can deploy this whole thing through PowerShell. And once it's deployed, then you can work with the development teams for them to consume this infrastructure. But if you prefer to use the Windows Admin Center, which is very popular with IT pros, it's a great, rich graphical experience. If you prefer to deploy through that GUI, use Admin Center. So I'll show you both, and we'll compare and contrast the different options there. One is slightly faster. I'm not gonna give any hints as to which one you think might be faster to deploy, but there you go. And as we talked about before, it's that Kubernetes platform. Microsoft is adding a lot of value, a lot of innovation, to Kubernetes, but it's also providing that back to the community, so others can benefit from it, because it's ultimately an open source solution. But then we also build innovation that's specific to our platform, so that Kubernetes can integrate with the Hyper-V layer, the cluster APIs, the storage, the network. And so there's some stuff that we've had to build on top to really gel these things together. But again, we release a lot of this out to the community so others can benefit from the learnings and from the technology. And speaking of benefiting from learnings and leveraging existing stuff: if you're out there thinking, well, I know Hyper-V, I know Azure Stack HCI as well, I'm familiar with Azure, I'm familiar with the Admin Center, this should be right up your street. Because this is introducing Kubernetes, which, yes, may be new to you, and it may require some additional learning; some ways that you can manage and deploy applications might be new to you as an IT pro. But the rest of it, deploying the Kubernetes infrastructure using tools like PowerShell, Admin Center, and the Azure portal, that should all be familiar.
So leveraging those existing skills is something that we've very much focused on through the development of this. But what's actually getting deployed? I don't wanna hide anything here; I wanna make sure you know everything that gets laid out. I've got one kind of high-level architecture here, and then I'll go a little bit deeper into some of the more specifics. Right at the bottom, we talked about Azure Stack HCI briefly before. So Azure Stack HCI 20H2 is the new release that became generally available in December of 2020. And that is our hyperconverged infrastructure solution to run on industry-standard hardware from our great partners, a big, wide ecosystem of great partners that have hardware for HCI. And that is the layer that you ultimately deploy AKS on HCI on top of, through these automated methods that I'll show you shortly. And what that essentially lays down, or what it enables you to do, is deploy Linux and Windows container hosts; it's just abbreviated on the slide there. We, Microsoft, provide the images for those container hosts. You don't need to build those yourself, which is a big win. So we push those down into your environment from Azure; there's an element of connectivity required to get deployed. We, as I briefly explained a few moments ago, provide that storage, networking and cluster integration. But again, we provide that to the community. And then the rest of Kubernetes is compliant open source components. Meaning, if you did decide, you know what, I'm gonna deploy some applications on AKS on HCI and I need to move them to another Kubernetes distribution somewhere else, maybe in Azure, maybe in another cloud, that's fine. What we're building here is still just Kubernetes. So you've got that portability. You're not locked into a Microsoft Kubernetes or a vendor X Kubernetes, which is really important for applications and workloads and portability out there.
And then, optionally on top, you've got Azure Arc, as we'll see later on, to manage those Kubernetes clusters in Azure. And I say optionally: if you prefer to use kubectl, or you've got existing investments in Kubernetes cluster management, monitoring and so on, you don't have to use Azure Arc. But as we'll see from the demo, it does provide a significant value-add to running and managing your Kubernetes infrastructure. Okay, that is pretty cool. I like what we see here: we provide the infrastructure with Azure Stack HCI, we have those multiple layers of abstraction, and our management tools, which actually help you to set up the Kubernetes environment, right? I think that is something which, again, as you said, there's a steep learning curve when you're just getting started with it, and we are basically helping with that. And we also don't do just a Microsoft version of it. As you said, it's consistent; it works with other Kubernetes. There's a consistency between them. So there's no risk of locking you in or doing something completely closed. It's really the open source Kubernetes we all know and love. Exactly. The big innovation that we're bringing, and the value of AKS on HCI, goes back to a couple of things we mentioned earlier on around that streamlining and deployment, because otherwise it's really complex to deploy, DIY, do it yourself. And then we bring in our knowledge and our experience in hardening and securing. We've got the Azure support that's backing you there. We've got that control plane. So we're taking the base and the core of Kubernetes, and we're streamlining and adding to it in so many different ways. And that's our unique value-add. But from the application perspective, you've still got that portability of choice of where you wanna move that application to, and that consistency also with AKS in the cloud. And to go one click deeper, this is quite a high-level, zoomed-out graphic.
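For anyone who wants to try the optional Arc onboarding Matt describes, it can also be scripted. This is a minimal sketch using the Azure CLI's connectedk8s extension; the resource group, cluster name, and region below are placeholders for illustration, not values from the demo, and exact steps may vary by release.

```powershell
# Assumes: Azure CLI installed, signed in (az login), and a working
# kubeconfig pointing at the AKS on Azure Stack HCI workload cluster.

# Add the Arc-enabled Kubernetes CLI extension (one-time).
az extension add --name connectedk8s

# Register the resource provider (one-time per subscription).
az provider register --namespace Microsoft.Kubernetes

# Connect the on-premises cluster to Azure Arc.
# Names and region here are illustrative placeholders.
az connectedk8s connect `
    --name aks-hci-demo `
    --resource-group rg-hybrid `
    --location eastus
```

Once connected, the cluster shows up as a resource in the Azure portal, where policy, tagging, and monitoring can be applied alongside native Azure resources.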
So I'll explain what's on there for those of you who wanna know a little bit more. When you deploy AKS on Azure Stack HCI, and we'll see this through the demo as well, at a high level it's a two-step process. So we're not diving straight into deploying clusters that you can run your apps on. The first stage is actually on the left-hand side of this graphic, where you've got your Azure Stack HCI cluster, and you deploy what we term the management cluster, or alternatively, the platform services. So you run step one of the installation process, and that lays down a couple of workloads, a couple of virtual machines, that essentially provide the management control for the rest of your Kubernetes clusters, your target clusters or your worker clusters, if you prefer. So step one is deploying those platform services, which I'll cover in a few moments. And then step two is you go on to deploy your actual clusters that will run your workloads, your applications, your other services that you're building to be containerized. So in that right-hand side of the graphic where you see the Kubernetes cluster box, there's a control plane for each particular cluster that gets deployed, so a load balancer and a control plane VM, or VMs, and then you've got your actual worker nodes, which, as the name suggests, do the work. They run your applications, and they are also VMs. So those of you familiar with Hyper-V, with Azure Stack HCI, and with running virtualized infrastructure: these are still running in VMs, but they are container host VMs, at least in the worker node case, and they're running the applications and services that you're deploying. So that's essentially the two-stage process that gets laid down as part of deploying AKS on Azure Stack HCI. Okay, that looks pretty cool.
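Once a workload cluster like the one in this diagram is up, the standard way to see its control plane and worker nodes from the Kubernetes side is kubectl. A quick sketch, assuming you've already downloaded the cluster's kubeconfig to the local folder; node names and counts will differ per deployment:

```powershell
# Point kubectl at the kubeconfig retrieved for the workload cluster
# and list the nodes: you should see the control plane VM(s) and the
# worker node VMs described above.
kubectl --kubeconfig .\kubeconfig get nodes -o wide

# List the system pods running across the cluster.
kubectl --kubeconfig .\kubeconfig get pods --all-namespaces
```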
So what you promised me is that you're actually gonna show how we're gonna do this and how easy it actually is, because I still can't believe that it's that easy. Honestly, I promise you it's that easy, but it does take a few minutes. So let's go to my demo slide here; I've actually prepared the Admin Center, the GUI version, and the PowerShell. We're gonna run through both, they're only a few minutes long each, but I'll talk you through some of the key considerations of both. So let's kick off with the Admin Center one; let's start with the GUI, and then the icing on the cake is the PowerShell one, because you'll see how easy it is. Does that sound good? Sounds good. Okay, so the first thing to bear in mind is AKS on HCI is still in preview. So this is the preview page; we'll put it in the resources, download the bits. And what you'll find when you've got the bits is there'll be an Admin Center extension in there, and some PowerShell modules. For this GUI version, we only need the Admin Center extension, and we drop it in a folder; I've just got C:\AKS in this case. So we'll drop it in that folder, and from there, within the Windows Admin Center, if I close my window here, I'm in the Admin Center. And if I look in my settings, for those of you for whom extensions are new, extensions are ways of adding functionality. You'll see I've added that folder path, and it will appear in my available extensions list. But I've already installed it. So you'll see there Azure Kubernetes Service is already populated as an installed extension to Windows Admin Center, which allows it to do more stuff. The more extensions you add, the more stuff Admin Center can manage. And I've got an Azure Stack HCI cluster under management here that I'll connect to. And what you'll see, because I've added that extension, is I've got the AKS service in the bottom left.
And when that loads up, I've got this new wizard that allows me, remember, step one, to walk through setting up a management cluster of the core platform services that essentially allow me to then go on and deploy the AKS-HCI worker nodes, or target clusters. So there are a few prereqs that you need to be aware of. We need some space to store some stuff on your Admin Center box as well, because we download images and then we push them over to the Azure Stack HCI cluster to then build the AKS infrastructure. In this release that I'm using, we require DHCP in the environment, but in the future we'll also support static IPs, in the next couple of releases. And then the AKS-HCI deployment will actually walk you through configuring and opening the necessary firewall ports. It'll check all of these different things in terms of determining that you've got the right amount of space, both on cluster shared volumes, and the right amount of memory on your cluster infrastructure as well. So these are all things that the system is gonna check for to ensure that you're ready to go. And then once it goes ahead and starts the process, you provide your password, which obviously has to be password one, two, three; otherwise it's just, no, I'm just kidding. And then from there, it's gonna test: have you got enough space, as I was describing before? Have you got the right roles and features installed within your Azure Stack HCI hosts? Have you got CSVs? Have you got the necessary amount of space that's required? I think some of the space requirements are a little bit on the cautious side in this first preview phase. You don't necessarily need a terabyte of space to run all of this, that's for sure. But we just check for what you've got and make sure you've got enough to run the VMs. Now we're into defining where our management or platform services will be deployed, and the name. So AKS management cluster one, the default; choose a CSV; choose a vSwitch.
And then, if you're using VLANs, and if you wanna adjust any of the load balancer settings for your environment, just for the management services, you can. You can register with Azure; there's no cost for setting up this particular element of the infrastructure. And then within a few moments, we'll click review and create, and then we'll click next. And that, I would say, fits in the easy deployment stage so far; I think I'm ticking that box. That takes, in this case, about half an hour in my environment. And what's happening in that time is it's downloading the images, downloading the Windows container host image from Azure, downloading the Linux container host image, putting everything where it needs to be on the target cluster. And then, once it's all deployed and configured, you've got your platform services, your management infrastructure, up and running. So that does take a few moments. So that's something you might wanna account for. Not to say it's difficult; it just takes a few moments to finish. And then once it has finished, what you're gonna be able to do is retrieve the kubeconfig file, which is very important for connecting to and managing that management cluster. Because essentially we have set up a Kubernetes cluster here that's just used for management. You're not gonna deploy any workloads to that. And so you'll see here an example of that kubeconfig file. And if we open it in our trusty Notepad, you'll see a bit of information about the Kubernetes cluster, and certificate information so we can establish secure communication. Anybody who's familiar with Kubernetes will know what the kubeconfig file is used for. But remember that it's the platform services, the management cluster: no workloads will be deployed to that particular cluster. So once that's done, what other steps do we have to do? Well, it's worthwhile going back to Admin Center, the landing page there, and going into the cluster again.
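For anyone who hasn't opened one in Notepad before, a kubeconfig file is YAML along these general lines. The names and address below are illustrative placeholders, and the real file carries long base64-encoded certificate blobs rather than the markers shown here:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: aks-management-cluster          # placeholder cluster name
  cluster:
    server: https://10.0.0.10:6443      # API server endpoint
    certificate-authority-data: <base64-encoded CA certificate>
users:
- name: aks-management-cluster-admin
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
contexts:
- name: aks-management-cluster
  context:
    cluster: aks-management-cluster
    user: aks-management-cluster-admin
current-context: aks-management-cluster
```

The clusters, users, and contexts sections are what let kubectl establish the mutually authenticated, encrypted connection Matt mentions.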
And in this case, we're just gonna check, just to prove that something happened, because I know, Thomas, I don't want you to think that I've just imagined all of this and nothing has actually been deployed. So there you see two VMs. In the preview, just two VMs get deployed for this management layer. So the control plane and load balancer get deployed onto the environment; the rest of the VMs are just there for other stuff. Was that easy with Admin Center? I mean, did I live up to expectations? It's so far been easy, yeah? Yeah, absolutely, absolutely. Super happy with that. I have to be honest, I'm impressed how easy that was. I've worked with Kubernetes before, and I have to say, if you set something like this up in production, usually that takes much, much more time to actually do, especially if you're just getting started with it, right? Exactly, yes, exactly. And yeah, the team's done an amazing job. Absolutely. So, one thing, though, obviously, as you know: if I do it the first time, I will probably use Windows Admin Center and do it exactly that way, so I can see what actually is gonna happen. But what if I have to do that not just twice, but five, six, seven times, 10 times; maybe I need to set up hundreds of these environments. You promised me there's a PowerShell version of this. Yeah, I did, yeah, absolutely. But I know you love Admin Center. You can click through that 100 times. I mean, once you get the clicks in order, you could do that pretty fast. No, I'm just kidding. But the key thing to note here is it's easy. And I only have to do the platform services one time on an HCI cluster. So even if you wanna deploy lots and lots of worker clusters, that management one I only have to do once, okay? So just to be clear there for folks.
But yes, PowerShell: if I'm setting up loads of HCI clusters with AKS and I want the management layer deployed, that's where PowerShell is gonna really help. So let's take a look at this one. So with PowerShell, same downloads, same bits; gotta download those for the preview, just register and download. And from there, we're in the same folder here. I'll zoom in, make it a little bit easier. There you'll see the same bits as we saw before; in this case, I've expanded the PowerShell file. We basically select all of those and we drop them on our target nodes, and I've done that already on the Azure Stack HCI nodes. So just drop them into the regular PowerShell modules folder within Program Files. And once you've done that, if I check the modules, and whether or not this is faster depends on how fast you can type, you'll see all the functions, from retrieving information, to installing AKS-HCI, to creating new clusters, to uninstalling and cleaning up, to updating and scaling: all sorts of different functions there, with the versions listed. That's the latest one that we're using at this point in the demo. So the first thing you need to do with PowerShell is run an initialize. And this is similar to what we saw in WAC, where it was checking that the Azure Stack HCI nodes are ready to go. So does it have remoting? Does it have the relevant roles and features? That's all good. So that might take a moment if you haven't got those things installed; you might need to just enable those roles and features. Then we create what's called a configuration, and think about this as almost like a template of what you want your management cluster deployment to look like. So we're gonna store our images in this particular folder on the cluster storage. We're gonna store our cloud config, which is the configuration file, in a particular location as well. And I'm gonna choose to set the VNet to our virtual switch, External.
Now, that's the one I've already got in place, so it's gonna connect to that one. Now, if I look at the documentation, you'll see there's a whole load of other things that I didn't choose to specify, whether that's VIP pool information, MAC pools, whether I wanna specify any load balancer settings or proxy integration, or if I wanna define a specific version; I'm just gonna use the latest. And I don't wanna perform any updates. I've got a lot of flexibility to define how I configure my deployment. And remember, all I'm doing at this stage is defining a configuration file, almost like a template, as I described, which is gonna determine what actually gets deployed. So that's gonna take a few moments; it's creating that configuration file and creating the keys. And it cleans up anything that's old on there as well, which is always useful. And there you go, there's the new one. So that's not the whole process. Remember, this is just a configuration file that's essentially gonna be used to then install AKS-HCI. So if you had a configuration file that you wanted to apply to all your different nodes, you could essentially copy and paste that one to all of your different nodes and use that as the basis; again, more automation. So here we go, we're doing any cleanup, and I'm not gonna let you sit through all of that, but it took around about seven or eight minutes or so, not very long. Now, one difference here is we only pull down the Linux container host. But there's our cluster deployed, our management cluster. And if I go back to Windows Admin Center this time, you'll see the same kind of thing. It's a different cluster environment, but you'll see the same kind of thing: control plane VM, load balancer VM, exactly the same as what we deployed through the Windows Admin Center. Now, there's a couple of other things we can take a look at.
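The PowerShell flow Matt walks through maps roughly to three cmdlets in the AksHci module from the preview bits. This is a sketch, not a transcript of his exact commands: the paths and switch name are placeholders, and parameter names changed between preview releases, so check the docs for the version you download.

```powershell
# Step 0: check and prepare each Azure Stack HCI node
# (PowerShell remoting, required roles and features).
Initialize-AksHciNode

# Step 1: build the configuration "template" for the management
# cluster deployment. Paths and vSwitch name are placeholders.
Set-AksHciConfig -imageDir "C:\ClusterStorage\Volume1\Images" `
                 -cloudConfigLocation "C:\ClusterStorage\Volume1\Config" `
                 -vnetName "External"

# Step 2: deploy the management cluster (platform services).
Install-AksHci
```

Because `Set-AksHciConfig` only writes configuration, the same settings can be reused across nodes before `Install-AksHci` does the actual deployment, which is what makes the copy-and-paste standardization Matt describes possible.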
So if you wanna get the kubeconfig file, if you remember before, we did that at the end of the WAC process, we just clicked on the download button. Same kind of thing here, we're just doing that programmatically. Okay, so that's how you would do that. And that's it. So, does that fit with the easy approach? I would say yes, for the win. I give you that win, definitely, definitely. I love how this makes it so much fun to watch. It's like, okay, I create the config file, I take that config file, and I go out and deploy it as many times as I want. And I think if I were an IT pro working in a company, I would tell my boss, I'll take care of these 100 cluster setups, and then probably take a couple of days off as well. Yeah. But you're right, you're right. And that config file becomes incredibly important. So I'm able to really define exactly what I wanna do and really start to standardize. But remember that that's the management infrastructure, the platform service. So at this point, our developers are still saying, I need my Kubernetes clusters. And we're like, well, hold on. We're just getting the infrastructure laid down first. And then we will go into, again, back to the components, the right-hand side of the graphic here, where we're gonna deploy what's termed here a Kubernetes cluster. Some people call it a target cluster. Some people call it a workload cluster. But it's where your applications will ultimately reside and run. And so in that respect, I've got a couple of demos for that as well. I've got the easy GUI method again, and I've got the PowerShell method again. So should we go in the same order? We'll do Admin Center again first. I think that probably makes sense. And we will pick up where we left off previously. So in Admin Center, in this case, I'm gonna go to Add, and then I've got a new preview option now for creating the worker nodes, the worker clusters.
So similar to what we saw before, a very streamlined wizard. And yes, it defines the prereqs, what we need. But what's nice about this is, if I go to the Azure portal, which many of you will be familiar with, and anybody who has created a Kubernetes cluster using the actual AKS service in Azure, if I bring up Kubernetes here, just using the power of search and my slow typing, and click on add and add Kubernetes cluster, just take a look at the steps here. So you've got basics, you've got node pools, networking, and some are obviously very relevant to Azure. But we've tried to actually capture that approach in a similar way through the WAC wizard as well. So you've got the basics, the node pools, the networking. Obviously some of it is gonna differ, but we've tried to capture it as closely as possible. So right off the bat, I can start integrating with Azure Arc. Some people will prefer to do that after the deployment of the cluster; I'm gonna do it before, or during the deployment, in this case. So I'm gonna enable that functionality. And then you're defining your actual target cluster that's gonna run your workloads. So in this case, you can call it whatever you like. And then you choose your environment that this is gonna be deployed onto. So in this case, it's gonna deploy to the cluster, the Azure Stack HCI cluster, that's got the Kubernetes infrastructure laid down, the management services that we described and showed earlier. Provide some credentials, just a demo username and password here. Then I can pick my version of Kubernetes, and we provide support for the last couple of versions including the latest. And we try to keep pace with Azure in that respect as well, so you've got that consistency. Now in the primary node pool, we obviously need a group of VMs for system services as well, control plane, load balancer. So we can specify the size of those.
And then for our actual workers that are gonna run our applications, again, we can choose Linux or Windows as we described earlier on. But I'm gonna create a pool of Linux and a pool of Windows, and I can choose the sizes as well. I'm gonna put them in the same cluster. So the unit of management is the cluster, but I can still have mixed workloads and infrastructure within those. So in this case, I've created a Linux node pool with two nodes and of a certain size. And we've used the Azure nomenclature for sizing. So you'll see Standard A4 v2 is there in the dropdown, but you can choose from a variety of different sizes, and Hyper-V will create those on-prem as appropriate. And then finally, we define our networking: whether we wanna specify any IP address ranges that are unique to our environment, or adjust the load balancer settings, to which we will add more and more capabilities as we go through to the final release. Remember, we're still in preview. We define our persistent storage, which is an important feature; a lot of organizations need that persistent storage, and many customers have asked us for it. So that's part of it. And then we click to create the Kubernetes cluster. Remember, this is creating a cluster that our workloads will run on. And that took about three or four minutes to create once I'd clicked create. And then, same kind of thing as before, you can download your kubeconfig file and your SSH key in the same kind of way we saw before for the management cluster, but remember, this time this is for the actual worker cluster. So this is something you might give to your development teams to say, yeah, this is what you need to connect through kubectl or through your other management tool that you may already have. And you'll see I clicked the Azure Arc link and it took me straight to the Azure portal. We'll explore that a bit more later.
And if we take a look at what was deployed on this cluster, not under Azure Kubernetes Service, but under virtual machines, you'll see we've got a whole host more VMs that have been deployed on our cluster. Remember, we just had two before. Now we've got the control plane, load balancer, and then MD0, MD1 are our different node pools within the same cluster, some running Windows, some running Linux. And so that is essentially end-to-end: infrastructure, management services, then deployment of a target worker cluster, all using Windows Admin Center, no code at all in that one. So again, check the easy box. Absolutely, absolutely. I'm very impressed by multiple things I just saw. So first of all, obviously, again, it makes it very easy to deploy all of that, as you just mentioned. Secondly, I like the consistency. So even when we select the nodes and then we select the sizes, we actually keep the consistency with Azure VM sizes, right? So you can actually see we really keep that consistency. And the other thing, I think, is something for those who haven't really realized it yet: this then takes care of all the storage management, of all the virtual network setups. So I don't have to fiddle around and say, okay, what virtual networks do I actually need? What do I need to figure out? How many do I need? The wizard guides me through all of that and then creates everything. There's nothing I need to know. I remember when we needed to configure the network controller and we had to work to create those virtual networks and stuff like that; that is all done now by the service, right? We take care of as much as we possibly can to make it easy, but also we still give you, remember this is running on your premises, so you still have the ability to be flexible.
And if you need to tweak certain things for your applications and workloads, you've still got the opportunity to do that, but we try and take care of the vast majority of that plumbing for you, and the integration points, so that we make it as easy as possible. So you can focus on your applications. Yeah, so yeah. And so do you wanna check out the PowerShell version? Yes, absolutely. Let's do it, okay. Surprisingly, again, this is a bit shorter. So we continue on from where we were before. So in this case, I'm gonna run New-AksHciCluster, give it a name. And then all I need to specify is how many control plane nodes, how many Linux nodes as we saw before, the Windows nodes; I can specify things like sizing as well. You'll see I'm tabbing through a few: control plane size, load balancer size, Linux node VM size. So I can specify those if I want. Azure AD integration is something that's being worked on as well, but I'm just gonna hit that and go, and it's gonna validate what I've got and it's gonna go ahead and create the cluster. So that's gonna create, in this case, a one-node Linux cluster, no Windows nodes in this one. So that won't take quite as long, because we're deploying fewer nodes than we saw before, but it doesn't take long at all. Now, one thing to note as well is, I know we mentioned VM sizes, oh, there we go, that's done already, there's no ARM templates here. So we're not using Azure Resource Manager to deploy any of the VMs. This is essentially just running Hyper-V VMs on-prem. There you see our cluster. So same version as the management cluster, it's been provisioned. One Linux worker node in there as well. But what happens if I wanna perhaps go to two nodes? Well, I can just run this command again, with a Linux node count of two now. Note I have to include the node count for Windows even if I'm not changing anything, so I added zero for the Windows node count. And now what's happening is AKS is orchestrating the addition of that extra node.
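The workload-cluster commands in this demo look roughly like the following sketch, again using the preview-era AksHci module. The cluster name is illustrative and the exact parameter names varied between preview builds.

```powershell
# Create a workload (target) cluster: one control plane node, one Linux
# node, no Windows nodes - matching the demo. Parameter names are from
# the preview-era module and may differ in later releases.
New-AksHciCluster -clusterName 'demo-cluster' `
                  -controlPlaneNodeCount 1 `
                  -linuxNodeCount 1 `
                  -windowsNodeCount 0

# Scale out by re-running with new counts; note the Windows node count
# still has to be supplied even though it isn't changing
New-AksHciCluster -clusterName 'demo-cluster' `
                  -controlPlaneNodeCount 1 `
                  -linuxNodeCount 2 `
                  -windowsNodeCount 0
```

From there, `Get-AksHciCredential -clusterName 'demo-cluster'` pulls down the kubeconfig for the new cluster, as shown a little later in the demo.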
And some of this I've sped up a little bit, so it's not quite as fast as that. I'm not that magical, but it's still pretty fast. I didn't want you to kinda sit and watch yellow text on screen for a while. If we look at the versions that are available, you'll see we've got a couple of different options there, more so for Linux, but also the latest version for Windows, which we'll add to over time. And then if I wanna get the config file for my new cluster, I just run Get-AksHciCredential as well. So from there, if we clear the screen out there and then take a look in WAC again and see what's been deployed, you'll see this time there's just the one worker cluster again, but this time it's got two Linux nodes, as we added one, and no Windows nodes in that one. But we could go ahead and run the same command again and add Windows worker nodes as well. So pretty quick, pretty easy, I would say. I think that's another win, that's four for four, right? Absolutely, like keep on impressing me here. One question I had though, while you showed these demos: obviously we have now the PowerShell setup, and then, for example, you showed how you added an extra node to the node pool. When I did the setup in WAC, for example, let's say I created the first node pool with WAC, can I then use PowerShell to add another node to that node pool, or do I need to go back and use WAC? I think if you're down the WAC path, you should stay down the WAC path, and if you're down the PowerShell path, stay down that path. But we are working, as we go through the preview process based on feedback, to unify the approaches, so that the compatibility is as you would expect and using one doesn't invalidate the other.
But I think during the early stages of the preview process, which is what you might experience, it was very much, yeah, use one path or use the other path. And PowerShell does give you a bit more flexibility for certain testing scenarios as well, because WAC hooks onto the PowerShell, obviously, a bit later. But either way, we're working towards unifying those approaches. And even the latest releases, of which the December update is currently the most recent, but when this goes out there may be a January release available, either way, those more recent releases of Admin Center just added more functionality; we're getting more management controls. So the innovation is happening so quickly that you'll benefit from it in no time, and the convergence will happen in the near future. Okay, that was perfect. Cool, so that's... Yeah, go on, go on, go for it. The next thing I wanna know, I mean, okay, we showed how you set up all of that. But now I wanna actually deploy an application. I wanna use that Kubernetes cluster. Yeah, about that, that's tough, but I can show you how easy I can make it for you, okay? So yeah, I'm categorizing this as a simple application; I don't wanna overcomplicate it, so straight into the demo here then. What I'm gonna do is essentially deploy a simple application. Let me start this off. I recorded this a few days ago. I'm gonna deploy a voting application where I can vote for cats and dogs and do very simple stuff. So I've copied this YAML file, which is like a descriptor. It's a bit like a template in some ways. It contains the information about the components of this application: the image, the names, the cluster port, the ports it needs, the cluster configuration, the image repository where this is gonna come from, in this case the Microsoft Container Registry, the resources this app will use, and the ports that it needs, as I said, for the load balancer, in this case port 80.
So that YAML file is easy to read, easy to write, and used as part of the deployment. So in this case, if we zoom in here and make this a little bit easier to see, I'm gonna use kubectl to get my cluster nodes. So this should be familiar to any Kubernetes admin, but IT pros can start to get on board with this as well. It's just listing my nodes, in this case the two worker nodes at the bottom there. And I'm gonna apply that particular YAML file to the environment, again using kubectl. And now that's gonna go ahead and start to actually create the services that make up this particular application. You'll see I'm checking for them there, and I'll see that, okay, it's already been provisioned to a certain extent. It's got a load balancer IP externally, so I can paste that in and then get my voting app, just like that. So that was how quickly I got my new application from that YAML file. It pulled down the Redis information, it pulled down the necessary container image, and it stood it up incredibly quickly. So what's actually happening, let's see what's powering this under the covers. So we get the pods and list and understand what's actually powering this simple application. You'll see there's a backend and a frontend, as demonstrated in the YAML file that we saw earlier. And if I wanna scale this up, for example, manually in this case, I'm gonna say I actually wanna deploy five replicas, so five instances, of the frontend of this application. And you'll see some of them are running very quickly, just up and running in five seconds; some are still being created. I'm essentially just refreshing the check of the pods every few moments, and I sped this up a little bit, so that wasn't really 30 seconds, but it takes no time at all. I've now got five instances of this application powering this particular voting app. Because obviously, if everyone starts to vote for cats and dogs, then we're gonna wanna make sure that we can handle that increased demand.
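For anyone who hasn't seen one, a manifest along the lines described here might look like this. This is a hypothetical, slimmed-down sketch, not the demo's actual file: the image reference, names, and labels are illustrative stand-ins for a frontend served through a LoadBalancer Service on port 80.

```yaml
# Hypothetical slimmed-down version of a voting-app manifest: a frontend
# Deployment plus a LoadBalancer Service on port 80. Image and names are
# placeholders, not the demo's exact file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vote-front
  template:
    metadata:
      labels:
        app: vote-front
    spec:
      containers:
      - name: vote-front
        image: mcr.microsoft.com/example/vote-front:v1   # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: vote-front
spec:
  type: LoadBalancer          # gets an external IP from the load balancer
  ports:
  - port: 80
  selector:
    app: vote-front
```

The demo flow then maps to standard kubectl commands: `kubectl apply -f voting-app.yaml` to deploy, `kubectl get pods` to inspect what's running, and something like `kubectl scale deployment vote-front --replicas=5` to scale the frontend out to five instances.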
But the premise there, what I've shown, is that I used an existing YAML file for an application that could have been built by my developers, stored in GitHub, whatever. And in this case, we're not using any shared repo for code; I've just been handed this YAML file, or my developers have got it themselves. They've deployed it. Kubernetes has read this YAML file, pulled down the necessary container images from wherever they needed to come from, in this case the Microsoft Container Registry, started them up, and I've been able to scale it up as well using a simple command. And this is using kubectl, which is gonna be very familiar, but if I've got other investments in management tooling and so on, I could use those as well. So was that a simple application? I think that was a simple application. What do you think? Yeah, absolutely. And what blows me away is, when you work with AKS in Azure, right, it works exactly the same way. And that is awesome. I don't need to learn anything new; if I'm familiar with Kubernetes, or if I'm familiar with AKS in Azure, I just go right on and do it on my AKS cluster running on Azure Stack HCI. So, yeah. Yeah, so that was kind of the rundown of AKS on HCI and everything that it is. Let's just quickly go through some more of these pillars, and then I'll wrap up with some of the cool data services stuff. So I think I've got one more demo to show you as well, but we've talked about hybrid, how you integrate with Arc. You saw through the WAC wizard there that you can integrate with Azure Arc as part of the whole process. You can also do that with PowerShell as well. We've got application portability now that you've got your app containerized. You can move it easily.
We don't have a live migrate command where you can send it to the cloud and back, but with the speed at which you can deploy new instances of applications, they're so portable in that YAML file or the code base, you can easily move things around and share the container registries. And AKS and AKS-HCI are sharing the same code base. Although we're adapting it to run on-premises and on Hyper-V, a lot of it is essentially the same. So you've got that uniformity and consistency there. In terms of onboarding, as you saw in the Admin Center wizard, there was a little radio button, and then you provide your subscription information and so on. For PowerShell, you need the Azure CLI, you need to register the Arc Kubernetes services in your subscription, and then you create your resource group and a service principal with enough permission to deploy a resource within that subscription or resource group. And then the command is simply Install-AksHciArcOnboarding, and you pass in a few of those parameters: resource group, cluster name, subscription ID, et cetera. You get the idea. There shouldn't be anything new to people. Either way, you don't have to onboard to Arc, but it's definitely valuable, because you unlock some great hybrid management, which I'll show here. So let me take you through. This should be pretty familiar to many folks. This is actually what we saw before when I finished the WAC deployment, because in Arc, once we get the cluster deployed and visualized in Arc, I can start to integrate with monitoring. I can start to handle some more of the configuration here, some more of the automation. I can also integrate with things like GitOps and also Azure Policy, which I know you've got many other sessions and videos on. But if we take a look at our Arc view here, all of these Kubernetes clusters are coming from other locations. And then you've got them side by side with native Kubernetes clusters running in Azure, in AKS.
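Putting those onboarding steps together, a sketch of the flow might look like this. The Install-AksHciArcOnboarding cmdlet comes from the preview-era AksHci module and its parameters changed between releases; the resource group, cluster name, and service principal values below are all placeholders.

```powershell
# Sketch of Arc onboarding for an AKS-HCI cluster (preview-era cmdlets,
# placeholder values throughout).

# Azure CLI prerequisites: register the Arc Kubernetes resource provider
az provider register --namespace Microsoft.Kubernetes

# Onboard the workload cluster to Azure Arc using a service principal
# that can create resources in the target resource group
Install-AksHciArcOnboarding -clusterName 'demo-cluster' `
                            -resourceGroup 'rg-aks-hci' `
                            -subscriptionId '<subscription-id>' `
                            -tenantId '<tenant-id>' `
                            -clientId '<sp-app-id>' `
                            -clientSecret '<sp-secret>'
```

Once onboarded, the cluster shows up in the Azure portal alongside native AKS clusters, which is what unlocks the monitoring, GitOps, and Azure Policy scenarios shown next.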
So if we expand this one again, you've got HCI-based clusters, you've got EKS, GKE, you've got all sorts of different clusters. And if I click on one in particular, this environment's got a little bit more information. If I click on GitOps, and in this case I'm not applying any configuration files, but I'll show you one where I am, this is where, if my development team is storing their code, their applications, on GitHub or a centralized Git repository, I can integrate that with Azure Arc, and that can be the source of truth for my application. So whenever we make changes to our app in GitHub, it gets deployed automatically. And then I can also apply Azure Policy to enable me to start to ensure compliance against certain characteristics for my environment, whether that's related to passwords or certain network configurations or whatever it may be. We've got tons and tons of policies you can apply to your Kubernetes clusters to ensure they stay compliant. And I know you've done a lot of work with Azure Policy in the past, for servers in particular, Thomas, and it's extremely powerful. And now, as an organization, you can really standardize what gets deployed to your Kubernetes clusters with GitOps, and then how it stays in compliance using Azure Policy. And both of those combined are incredible value and showcase how valuable it is to integrate with Azure Arc. Because unless you do that, you don't get access to those kinds of configuration options. So that's around hybrid integration. We mentioned before, it's not just about Linux and it's not just about Windows specifically, although I showed deploying a node pool with Windows container hosts and Linux, and then in the second PowerShell demo just did Linux, and we deployed an incredibly powerful application onto our Linux infrastructure to handle voting.
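As a concrete sketch of that GitOps piece, in this preview era a configuration could be attached to an Arc-connected cluster from the Azure CLI along the following lines. The extension and command names were still evolving at the time, and the configuration name, cluster, resource group, and repository URL here are all placeholders.

```shell
# Hypothetical sketch: attach a GitOps (Flux) configuration to an
# Arc-connected Kubernetes cluster so a Git repo becomes the source of
# truth for what gets deployed. All names/URLs are placeholders.
az extension add --name k8s-configuration

az k8s-configuration create \
  --name app-config \
  --cluster-name demo-cluster \
  --resource-group rg-aks-hci \
  --cluster-type connectedClusters \
  --operator-instance-name flux \
  --operator-namespace flux \
  --repository-url https://github.com/<org>/<repo>
```

After that, changes pushed to the repo are reconciled onto the cluster automatically, and Azure Policy can be layered on top to keep the cluster compliant.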
And because those clusters, those node pools, Windows and Linux, can be part of the same cluster, it really helps simplify and streamline the management of those. I don't have to treat them completely differently, which is valuable as well. And then finally, and this goes without saying, security is top of mind for so many organizations, so many enterprises, and it's top of mind for Microsoft, both in the cloud and out of the cloud. So we bring all of our innovations that we've built in Azure, and that we've enhanced and released to the community, for making Kubernetes incredibly secure. We, Microsoft, provide the container host images that are hardened against vulnerabilities. We provide the updates and patches and servicing to ensure that your container hosts are updated. You're in control ultimately of when you apply those, but we build those images so you don't have to worry about that. And we integrate with our security CAs and also Active Directory through gMSA. So for those with Windows workloads who want to be able to sign into container applications, that integration is there as well. And with all of this, it goes without saying that how we build our applications and our services is through our secure development lifecycle, secure by design, and from a user and a tenant perspective, you can integrate this with Security Center to monitor threats and assess your environment for how secure it is as well. So loads of cool stuff. I know I'm going pretty fast over that sort of stuff, and I'm not gonna show those, but these are things that you can certainly benefit from with AKS-HCI. So why would you use it? Well, modernizing apps. Going from traditional VMs through to containers is the natural next step for a lot of organizations embracing new cloud native applications.
But where we'll spend most of the time in a few moments is on the data services, because this is really, really cool stuff, and how you can bring and benefit from our innovations in Azure in your own environment. But we saw earlier that deploying Windows container hosts was easy through WAC; I could have done it just as easily through PowerShell. So if you've got legacy .NET applications that are running on physical or virtual machines that are perhaps oversized, perhaps low utilization maybe, and you think, well, how can I modernize that and get more bang for my buck? Well, containerizing that, for which we've got tooling to help through Windows Admin Center, you could bring that into the Kubernetes environment and run that on AKS-HCI side by side with your other .NET and Linux based applications. And then you're starting to embrace new cloud native applications for your new applications. If you've got developers within your organization who are building the next generation of applications, it's unlikely they're gonna be building monolithic applications at all. They're gonna be building microservices and lightweight stateless and stateful applications that can run in containers, be very portable, and scale very quickly, as we saw in one of the previous demos. So with all of that in mind, utilizing AKS-HCI as that platform and having the portability to go to public cloud as well is, again, very, very valuable. And then I touched on data services. So if you want to embrace a PaaS database service today in the public cloud, Microsoft has got a couple of offerings. Azure SQL Managed Instance, we've got Postgres Hyperscale; there's all sorts of other solutions as well in Azure. Those are just two that I want to focus in on. But the challenge there is, what if you need to run that database workload directly next to your application on premises?
Well, maybe the round trip times to the cloud are a little bit too much, or for compliance reasons you may not be able to run it in Azure. Well, with Azure Arc enabled data services, we essentially allow you to take those services like Azure SQL Managed Instance and Postgres Hyperscale, they're containerized, and they can run on Kubernetes. And that Kubernetes could be AKS-HCI. So you could bring those services down, project them and deploy them down from Arc onto your Kubernetes cluster that's running in your data center. So you get the win-win: you get the benefit of the PaaS service, you get the integration with your existing tools like Azure Data Studio and your familiar database management tooling, and they're running on your premises on an incredibly quick to deploy, streamlined, containerized platform. And all of that is delivered with a single-vendor support statement from Microsoft, from Azure Stack HCI right through to Arc and the data services. And so when you think about the benefit there, you never have to think about SQL version updates again, because there's no end of life; it's an evergreen service, always the latest version. You saw how quick it was for me to scale up and down with containers earlier; the same applies to SQL. The management is unified, because I can now have a single view with my data services, as we'll see in a moment, of on-prem and public cloud resources from a data perspective. Like we saw before, security by design, so we're applying all of the best practices for security around policy, around governance, around RBAC, around just intrinsic security within the platform. And for those that want to embrace the cloud billing model, if you're already embracing Azure, then having this on-premises fits with that model as well. So you pay for what you consume, versus the traditional perpetual licensing approach, as we've seen previously.
So what you see here is an example of a resource group in Azure that's got the Kubernetes service, a Kubernetes deployment in the Kubernetes service, and the Arc data controller, which is kind of like the first key component that gets deployed in order to then deploy the other Azure Arc enabled data services components. And you'll see we've deployed a Postgres instance and a couple of SQL managed instances, and they're all managed through Azure Arc. They could be running on-premises on AKS-HCI, they could be running in another public cloud, but all of them are essentially managed, with policies applied, Security Center integration, monitoring integration, all of that stuff, from one single portal. And this stuff is incredibly powerful, Thomas. You know this as well as anybody. This is awesome stuff. And AKS-HCI is just yet another platform you can use on-prem to benefit from this. Yeah, now I love how all our hybrid stuff is coming together, because again, we have all the sessions at this event talking about the Azure Arc data services. We have a session about how to modernize your Windows Server 2008 and 2008 R2 applications to Windows containers. And so this is actually perfect. This really brings everything together here, what we can see. Yeah, and that's right. This slide is really the epitome of that top to bottom supported solution from Microsoft. So right at the bottom, you've got great hardware from our partners running Azure Stack HCI from a software perspective. And then you've got the AKS layer that we touched on, that we all now agree is so easy to deploy with PowerShell or Windows Admin Center. And then you've got the data services on top. And data services is one example of one of these services we're bringing to allow you to run on your infrastructure. And I'm sure that won't be the last.
And then you've got your applications and services that you build running on top of that environment as well, integrating with the different layers. So it really is an incredible picture and an incredible opportunity for you to modernize in a trusted, secure partnership with Microsoft. And we certainly wanna partner with you on your journey there. And so my call to action, what I would definitely ask you to do, and you saw it at the start of a couple of the videos, is register for the download of AKS-HCI, kick the tires with it, try it out. You don't necessarily need a big super powered Azure Stack HCI cluster. You can actually do this on a single Windows Server 2019 Hyper-V node, if you've got that spare in your environment, and we're working on some documentation to do this nested as well. So you've got options even if you don't have hardware. Install it, you've seen how easy it was. If I can do it, you can definitely do it, really, really straightforward, whether it's PowerShell or WAC. And then check out our docs and go forward from there, embrace Kubernetes from a learning perspective, and really help to support your developers within your organizations. That would be my key call to action there for you. So I will definitely go out and download the preview and actually try it out, to see if it really is as easy as you just showed me. Oh, it is, I promise you. I believe you, I believe you, but I still wanna have it. I wanna try it out. I wanna play with it and see if my applications, for example, are working on it as well, because I'm really taking my first steps, like bringing those applications in and containerizing them. So obviously I will go to that page and actually download it, but do you have some more resources where I can learn more? I do, yeah. And one thing to note about downloading as well: we generally release a new build about once a month. So we announce it in a blog post. We tell you what's new in that particular release.
And you can also find details about the releases on the bottom link here on the list. So on GitHub, we're using GitHub pretty extensively in the development of AKS-HCI. So you can file issues, you can let us know if there's a feature missing that you would really benefit from. And you can also see our roadmap and our release information there as well. And so we release generally monthly, so you'll download the new build and validate while we're in preview. There's loads of great documentation, going from the bottom to the top here, strangely. Our documentation is great on the Microsoft Docs site. There's a link to the evaluation that you can download yourself. There's some more announcement information in blog posts, but you've got everything you need from there as part of this session. And if you do wanna refer anyone else either to this session or wanna refer them to the product pages, the marketing pages, then the top link there can help you out and just give you the overview of what we're doing here and what this is all about. But they're the key resources: download it, check out GitHub, and follow the docs, and you'll not go far wrong. Awesome, this is awesome. Again, we will put all the links down in the description, or you can just look down and actually click on these links Matt just showed us. With that, I really, really wanna say thank you, Matt, for being here. It was really a fun session and I learned a lot, and I believe you now that it's that easy. That was really, really good. I really hope to talk to you soon again. I know there is a lot happening in our hybrid space, and especially also on AKS on Azure Stack HCI. So again, thank you very much.
And for all the viewers out there, if you wanna know more, if you wanna watch more sessions, especially on the topics I mentioned, for example the Azure Arc-enabled data services, or for example how to modernize your existing Windows applications to containerize them, check out aka.ms/ITOpsTalks, where you can find all the other sessions and presentations. So thank you very much and enjoy the rest of the show.