Well, hey, hey, I think this thing is on. How's it going, everyone? Welcome to LearnLive TV. Hope everyone's having a great day. My name is Andy Syrewicze, and with me I've got a good friend, Karsten Rachfall. How's it going, my friend? Hi, Andy. Nice to see you again and to be here live on LearnLive, and with a great topic today, right? Definitely. It's one of our favorites. It's Hyper-V, Storage Spaces Direct, Azure Stack HCI, and all of the good stuff. Definitely, definitely. So we've got a really jam-packed session here. I'm really excited about this one, not only because it's, you know, LearnLive TV, but like you said, it's a great topic, right? And really that topic being, like you said, Storage Spaces Direct, Hyper-V, and really those core components of Azure Stack HCI. That's really what we're talking about today. And I guess before we get into that, maybe we should do just a quick introduction of ourselves, just so people know who these two talking heads are, right? So I guess I'll start with myself. Again, my name is Andy Syrewicze. I'm a technical evangelist for a company called Hornetsecurity. I'm also a Microsoft MVP in the Cloud and Datacenter Management competency. If you have any follow-up questions, or you just want to say hey, I am pretty active out on Twitter. My apologies, I'll have to spell the last name. That's my Twitter handle there in the bottom right-hand corner. You know, I have a friend who says I should change my Twitter handle to "at Andy Sandwich," because that's what they called me in elementary school, but I haven't quite gotten there yet. But Karsten, how about you tell us about yourself before we dive into this? Yeah, of course. My name is Karsten Rachfall. It's a bit hard because I'm from Germany, and we pronounce some hard letters there. I'm also a Cloud and Datacenter Management MVP, as you are, Andy. And I'm so fortunate, I'm also an Azure MVP.
So I'm a hybrid MVP in the sense of having two specialties, the cloud and on-premises. But to be honest, I'm still an on-premise guy. I've been an MVP for 11 years now, and I hope I get my 12th award in summer. So we will see. I'm sitting here in my cavern, as you may see. I was going to say. I'm on holiday, and you are sitting on the east coast of the US. So what time is it there, Andy? Well, it is 4:30 a.m. And just to make sure I wasn't blurry-eyed and groggy when I got on, I actually woke up at 2 a.m. But I went to bed at 8 p.m. last night. And it's LearnLive TV. It's such a great platform that I didn't care what time of day it was. So I'm just happy to be here and talk about this great technology. And Karsten, it's great to have you on again. We're kind of the dynamic duo of Azure Stack HCI core technologies today. It's like a deja vu. We had a webinar, I think, three weeks ago about a similar topic. We did, yeah. But now we are live with Microsoft. So let's get started, because we have a lot of stuff to talk about, right? We do, yes, definitely. So this session is designed to cover the module that you can see at the URL here, the aka.ms link, or you can just scan the QR code as well. So again, we're going to be covering the same material that's in this module, so feel free to follow along. And we'll be sure to not only cover the information in that module, but also share some interesting stories, some tidbits, and some interesting information along the way. The other thing I wanted to do is, we've introduced ourselves, but everybody say hi to Flo Fox, our moderator, as well. So Flo's going to be keeping track of the comments, might be passing through some stuff for us to take a look at. Just an all-around great guy. In fact, I owe Flo an email. He emailed me a couple of days ago, and I haven't gotten back to him yet. But Flo, I owe you an email. So I'll be sure to get back to you on that.
So hopefully later today sometime, assuming I'm awake, right? So everybody say hi to Flo. He's going to be helping us out in the chat today. Now again, like I mentioned on the title slide there, we're going to be covering the material that is presented in this Microsoft Learn module on Introduction to Azure Stack HCI Core Technologies. And again, if you haven't made it over to that URL yet, it's right there, or you can scan the QR code for more information. Now, Karsten mentioned this: we are live. This is a live event. So we're not just two recorded talking heads, we're two live talking heads, right? So go ahead and say hi in the chat. We're going to do our best to make sure we get your questions answered and interact with you. And yeah, we love being live. I don't know about you, Karsten. I love live events just because, you know, these are the good ones. I hate recording. I'm getting so, so picky when I record something. So 15 minutes of recording takes me maybe four to eight hours, because I'm always redoing them. But let's go on. Maybe we even have some demos. We will see how we are with time. So yeah, to our topic, learning objectives. Yeah. So the learning objectives for this particular module, again, following along with the module that we linked to. The first thing we're going to do is talk about Hyper-V and its components, because at a core level, Hyper-V is super critical to Azure Stack HCI and enables a lot of the features that Azure Stack HCI brings to the table. And then we're going to talk about Azure Stack HCI itself and its various components. And then we'll get into software-defined storage, where Karsten is going to take us through Storage Spaces Direct and all of the considerations and things to worry about with that. And he's also going to take us through software-defined networking.
So everything in the SDN stack inside of Azure Stack HCI; there's a lot to digest in those two sections. So again, we're just going to continue right on here. Going through this module is a good starting point, you know, before you really start on your Azure Stack HCI journey and understand where it works, where it fits, what environments you would install it in. You really have to understand all of the components under the hood, right? And that's what we're going to be covering in this particular session today. Now, starting with: what is Hyper-V? Now, Hyper-V is a feature that's near and dear to me. I shouldn't say feature, it's a role in Windows Server, but it's a role in Windows Server that is near and dear to me. If I go way back in my MVP days, there was originally a Hyper-V MVP before they rolled that up into the Cloud and Datacenter Management competency. So I could talk about Hyper-V all day, but we have a lot of other stuff to cover, so we'll be sure to continue on here. So again, starting with those core technologies in Azure Stack HCI: what is Hyper-V? That's the first thing that we're going to cover here. And the thing with Hyper-V is, it is the core virtualization technology inside of Windows Server and inside of Azure Stack HCI. It is the role in Windows Server and Azure Stack HCI that allows you to spin up virtual machines and present them to your network, so your end users can consume those services. And if you're not 100% familiar with virtualization, basically what Hyper-V and hypervisors allow us to do is to take a physical server and install the hypervisor on top of it. What the hypervisor does is it takes host resources, the resources from the physical machine, and carves them off into separate virtual machines.
So a little bit of a history lesson here. If we go all the way back to the days of, you know, physical servers, I actually remember those days, and I'm sure you do as well, Karsten. There was a point in time where you'd have a whole server rack full of servers, and this physical server was your file server and this physical server was your web server. And the problem with that model was, you know, you might be using 10% of that server's resources and the other 90% was completely wasted, right? The problem with that is, especially your bean counters, your accountants, they'll look at that, and that's not an effective use of company resources, right? So what software vendors did was create hypervisors that allow us to more efficiently make use of all these resources. So now you'll have a physical server that's running four, five, 20, 30, 50 virtual machines, whatever it can handle, and now maybe that physical server is more like 60 to 70% utilized instead of 10%. That's really what virtualization and hypervisors did; that's the problem that they have solved for us. But that brought a lot of other advantages along the way as well, which we'll talk about in the rest of the session. Now, Hyper-V specifically is Microsoft's implementation of a hypervisor, and like I said already, it's available as a role on top of Windows Server. It's available in Azure Stack HCI. There is also a free product out there called Microsoft Hyper-V Server; it's available in a Windows Server 2019 version. That product has actually been discontinued. They're not going to continue with new versions of that. That said, the 2019 version of Hyper-V Server will be supported until, what is it, Karsten? I always forget. It's like 2028, 2029? It's 10 years. It came out in 2019, so 2029. That's still a long way to go, right? Definitely, definitely.
So it's going to be supported for a long while yet. And Hyper-V Server, again, that free SKU, is great if you just want to kick the tires on Hyper-V and just play with it. It's fantastic for that. The other thing that's worth mentioning here is that Hyper-V is available on 64-bit versions of the Windows client OS, so Windows 10, Windows 11. You can run that on top of a client OS. It's fantastic for test dev. I've been known to have a virtual machine running on top of my Windows client OS at any given time. Right now I've got two VMs running on my laptop that are actually supporting my lab environment, but that's a whole other story. But Hyper-V, again, is that core hypervisor role inside of the Windows Server stack, including on Azure Stack HCI. So what does the architecture of Hyper-V look like? That's what this slide really covers. We have a nice diagram here on the right-hand side of how that architecture looks. At the base layer, at the very bottom, we've got your hardware. That's your CPU, your memory, your storage. Your hardware, right? And what a lot of people don't know in terms of Hyper-V, this is always kind of an interesting tidbit I share whenever I'm talking about this topic, is when you install the Hyper-V role on a client OS or Windows Server or wherever, what the server is actually doing in the background is virtualizing that root operating system. A hidden VM. So it almost virtualizes the host operating system. That's why you see this root OS on the left-hand side. It's kind of like a virtual machine. You can't see it or interact with it in Hyper-V in any way, shape, or form, but that's the host operating system. And then using the hypervisor you can spin up additional virtual machines. That's where you see, on the right-hand side here, the guest OS. You'll hear that term used all throughout Microsoft's terminology when we're talking about Hyper-V and virtual machines.
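[Editor's note: for anyone who wants to try what Andy describes here, enabling Hyper-V on a 64-bit Windows client is typically done from an elevated PowerShell prompt. A minimal sketch, assuming a machine that meets the hardware requirements; a reboot is required afterward:]

```powershell
# Enable the Hyper-V feature on Windows 10/11 (run elevated; reboot afterward)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# On Windows Server / Azure Stack HCI, the equivalent is the role install:
# Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```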
So guest OS, guest VM, these are terms you can think of as meaning virtual machines; they're interchangeable, really. So you'll see the guest OS over here, and in this diagram you really only see a single guest OS, but like I said earlier, you could have 5, 10, 50, 100, whatever your hardware can accommodate at that hardware layer. You can run all kinds of different virtual machines on top of that hardware. Now, we're going to make things a little more complicated. We also have this concept in the Microsoft world, or just virtualization in general, called nested virtualization, right? And if you guys have ever seen that movie Inception, the whole premise of Inception is a dream inside of a dream inside of a dream, right? That's kind of what nested virtualization is. It's like, I'm going to run a hypervisor on my hardware, and then I'm going to spin up a virtual machine, then I'm going to install a hypervisor in it and run virtual machines in there. And it sounds really complicated; if you've done it a few times, it's not too bad. But the question that always comes up when I'm having this particular conversation about nested virtualization is, why would I want to do that, Andy? Right? Why would I want to run nested virtualization? And what I always come up with is test dev. Now, I'm going to make a little bit of a joke here at your expense, Karsten, but unless your name is Karsten Rachfall, here we go. So unless your name is Karsten Rachfall, you probably don't have a lot of hardware lying around to test virtualization workloads on, right? I guess that wasn't really a joke at your expense; it was just an observation that you always have the most awesome hardware to work with in your lab. I'm a little bit jealous, I have to say. Yeah. So Andy, if I add to that, I use nested virtualization even having a lot of hardware.
I use it every other week, and I would like to share that information when you're finished with this part, and maybe do a short demo if we have time for that. Yeah, that'd be great. Definitely. Yeah, so just real quick here, like you said, you still use it every other week. And again, that comes back to the question I asked of, well, why, Andy? Why would I want to use nested virtualization? And when we're talking about Azure Stack HCI specifically, let's assume you're watching this session because maybe you're in a place where you might have to implement it within your organization, right? And maybe you don't have, like I mentioned, hardware lying around that you can spin up Azure Stack HCI on and test it out. That's where nested virtualization comes in. You can get a really beefy server, or heck, I've seen people do this on laptops. You can spin up a virtual machine in Microsoft Azure that has virtualization capabilities, where you can carve off Azure Stack HCI VMs, which you can then cluster and run virtualized workloads on top of. It basically allows you to test out even these virtualization scenarios in a virtualized manner on the hardware that you have, right? And like you said, Karsten, I think you've got a demo over there, right? Yeah. That kind of shows this better than I can explain it, right? Yeah. Let me do that. So first, why do I need, and why do I use, nested virtualization a lot? I didn't talk about what I'm doing for work. I own my own company with my wife, and we do implementations of Hyper-V, Storage Spaces Direct, and Azure Stack HCI, but I also do a lot of trainings, and I love when my trainees can have a real experience. So they have hardware to work with and to go along with the training.
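[Editor's note: the setup Karsten describes, exposing the host's virtualization extensions to a guest VM so it can itself run Hyper-V, is done per-VM from the host, while the VM is powered off. A sketch; the VM name is just an example:]

```powershell
# Expose the host's virtualization extensions to a (stopped) guest VM,
# so Hyper-V can be installed inside that VM
Set-VMProcessor -VMName "S2D-Node01" -ExposeVirtualizationExtensions $true

# Nested VMs also need MAC address spoofing (or NAT) to get network access
Get-VMNetworkAdapter -VMName "S2D-Node01" | Set-VMNetworkAdapter -MacAddressSpoofing On
```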
So I use nested virtualization to deploy Storage Spaces Direct clusters and Azure Stack HCI clusters for my attendees. And even if you try something out, something new, like when Azure Kubernetes Service for Azure Stack HCI was released, you can just install some virtual machines, and in the virtual machines you build up Azure Stack HCI and play with all the cool stuff, and you don't have to install the hardware all the time. Imagine I have a training course with 12 attendees; I don't have 12 two-node Azure Stack HCI clusters, or even four-node ones. So here's what I prepared in the demo, and I don't know, Laurent, maybe you can give me a hands-up if you can see my screen, or otherwise I can zoom it a bit, but then the, how do you call it, the fonts are getting blurry. Where's my mouse? So I unzoom it. What you see here is the good old Hyper-V Manager, and this is a tool we still use to manage Hyper-V, and we can use it in Azure Stack HCI as well. So I have here a four-node Azure Stack HCI cluster, one of my hardware clusters, and on one node, here on the third node, I have deployed three virtual machines. You see here one has a weird name, it's like Hallenberg Windows 11 multi-user 03, so that's an AVD VM, something for another day; this is the start of the Azure Stack HCI training courses, so we will not talk about that. But here we have two other machines, and these are Storage Spaces Direct nodes, and these are running on this one hypervisor. So if I go into this VM here, you see I'm in the VM on the node, here you see the two nodes, and in the VM there are other VMs running. So this is nested virtualization: we have a Hyper-V node with a VM on it, and in the VM we have Hyper-V. I opened Hyper-V here so I can make it bigger. You see it here, this is the Hyper-V Manager, I'm here, and here you see there are five VMs running. So if I go to this benchmark VM, this is now a VM in a VM in, is it in a VM? At some point you just lose track, right? You lose, like in the movie, right, you lose where we are now. So this is in the VM, and I can play around, and I could add Hyper-V again, so I could enable the Hyper-V role again inside this VM. But of course we lose a bit of performance, we lose a bit of CPU performance and also storage performance, so don't overdo it. But it helps me, and a lot of people, to play around with these more complex concepts. Like for Azure Stack HCI, you need a cluster, you need a domain controller, and so on, so you need multiple machines, and not many people have multiple machines lying around to play with. So if you have a beefy notebook, and I'm presenting here on my notebook, it's a six-core notebook with a lot of RAM, you can do nested virtualization there. So here I could start the VM, you see it's running, just to show a bit of the concept. So back to you, Andy, back to your slides. Yeah, sounds good. One of the things you mentioned is, when you use nested virtualization to test and demo some of these concepts, you don't have to take the time to set up the hardware, because, I mean, sometimes that's the part of deployment that takes the longest, getting the hardware racked and stacked and ready and connected, and using nested virtualization in this fashion definitely helps with that. So, very cool. Next on the list here: reasons for using Hyper-V. We've kind of been talking about this a little bit already, but at its base level it's used for running virtual machines, and that's some of the core functionality that it provides inside of Azure Stack HCI. But more specifically, for some reasons you might want to use Hyper-V and virtualization: well, there's the scenario I mentioned earlier today, consolidating that server infrastructure. You think back to the old days where we had racks and racks of physical servers; it was designed to consolidate those workloads into, you know, smaller clusters of physical servers, right? We've talked about this several times already as well: providing an environment for test and dev. I use it for that
almost on a daily basis, sometimes, I think. We use it for VDI workloads, and that stands for virtual desktop infrastructure, if you guys aren't familiar with that particular acronym. So that'd be a situation like, you remember the old terminal server days, right, where you'd have a physical server, maybe, in the old days, that a number of end users would log into using RDP to conduct their work on a day-to-day basis? VDI allows them to log into more of a client operating system, and depending on the VDI deployment you do, there's a couple of different options; we'll be talking about that a little bit more later today. And then you can utilize it for private cloud deployments, like we're talking about with Azure Stack HCI. So it's a cloud world now, right? You've got the Azure cloud, you've got a number of different vendor clouds, the cloud is everywhere, right? And the concept of a private cloud is like, hey, I've got my own cloud, and that cloud contains all of those functions and that functionality that allows me to be agile and connect users to my workloads in a number of different ways. And, you know, the new-ish term, I guess I would say, is hybrid cloud, right? So hybrid cloud would be a combination of both on-premises workloads and public cloud workloads, like in Azure. For example, you might be running an Azure Stack HCI cluster on premises as your private cloud while utilizing resources in public Azure as your public cloud, and that marrying of the two, getting the two to work together, is that terminology of hybrid cloud. Andy, just let me add something, because for us it's super clear, but maybe not for our audience: Azure and Hyper-V. Hyper-V is the hypervisor of Azure. So if you talk about hybrid cloud, you can run the same VM in Azure, maybe you can download it somehow and run it on your private cloud on a Microsoft hypervisor, or vice versa. So there are some great opportunities to move your VMs from on premises to the cloud for a disaster recovery scenario, because we have the same hypervisor in essence. And we didn't mention that; for us Hyper-V guys it's completely normal, but maybe not for the audience. So here we have a huge advantage with Hyper-V over maybe other companies who don't have this dual-world concept of the public cloud and the private cloud. I just wanted to add that. Yeah, that's definitely a good point. For a real-world use case, I just had this two days ago. I've got my lab here in the house that I run all my test dev stuff on, and I've been vacating it so I can reinstall the current version of Azure Stack HCI on it, and I wanted to keep my domain controller around and not have to re-provision my domain. So I actually moved my DC up into Azure for the next week or two while I do all the on-premises work. I have a site-to-site VPN between my on-premises location and a VNet in Azure where that DC lives. But the only users in my house are my wife and kid; they're none the wiser that they're now getting DNS from the domain controller in Azure as opposed to in my lab here on prem, right? So, Andy, just a short interruption, we have a question from the audience. Evan wants to know: is there a security advantage to using Hyper-V? It's a big one, right? It is a big question. And the security advantage really is that you do maintain separation at the hypervisor layer. You think about the way that Hyper-V works, right, in that slide I showed earlier where you've got all the different guest operating systems, the guest VMs: Hyper-V does maintain separation between the virtual machines, right?
So you can't, you know, at the hypervisor layer, you can't bleed out and gain access from one VM to another without the proper authentication and that type of stuff. That's the one thing I always think of when it comes to security and Hyper-V, just how it maintains that separation. There's another type of virtualization we have, called application virtualization, which would be like your containerization; there's not as much separation there as you would get with virtual machines. But I'm kind of getting away from our topic. Karsten, is there something you want to add there on the security discussion? Yeah, Microsoft also uses the hypervisor for some Windows 10, Windows 11 built-in security features, like in Edge. I never remember the correct word for the application; let me just click on Edge. Yeah, I know the feature you're talking about, because basically what it does is it spins up Edge inside of a hidden VM. Like in a sandbox, so everything you do in the browser can't affect your operating system, because it's a read-only environment. And there are other features, sandboxing, where it's used for a lot of security. The Windows operating system uses the hypervisor a bit, and this provides additional layers of security for our daily use, right?
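[Editor's note: you can check whether these hypervisor-backed security features are active on a Windows 10/11 machine by querying the Device Guard WMI class. A minimal sketch:]

```powershell
# Query virtualization-based security (VBS) status.
# VirtualizationBasedSecurityStatus: 0 = disabled, 1 = enabled, 2 = enabled and running.
# SecurityServicesRunning lists the VBS services currently active
# (e.g. 1 = Credential Guard, 2 = HVCI/memory integrity).
Get-CimInstance -ClassName Win32_DeviceGuard -Namespace root\Microsoft\Windows\DeviceGuard |
    Select-Object VirtualizationBasedSecurityStatus, SecurityServicesRunning
```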
Right, and the thing I love about that... I don't know, I think we are a little bit short on time already; we are nearly half an hour into the session. I could talk about this topic all day long. We could talk all day, yeah. Exactly. So yeah, it's a good point. We'll continue on with the general features of Hyper-V. There's a number of different bits and pieces inside of Hyper-V, so let's talk about management and connectivity first. You know, Karsten was showing Hyper-V Manager earlier, which is kind of like your de facto tool for using and interacting with Hyper-V, but we also have a number of other tools that you can use to manage and interact with Hyper-V. You can use Windows Admin Center, which is the new web-based management tool in the Microsoft ecosystem, and you also have something in the System Center suite called System Center Virtual Machine Manager. Now, one of the other big features that Hyper-V brings to the table is portability. You think about all those workloads we're running inside of virtual machines: Hyper-V allows you to do things like live migration and storage migration, and you can also, like Karsten mentioned, move workloads from on-prem into Azure. You have import/export functionality as well. Now, I wanted to highlight live migration specifically, and I had to get the little dancing cat here, because live migration is usually the one feature that, when people first start using Hyper-V, that's the big aha moment. Live migration is a feature of Hyper-V that allows you to take a running virtual machine, keyword here running, and move it live from one piece of physical hardware to another, and the VM doesn't go down. It stays live and running. The end users that are consuming the applications from that server don't even know that the virtual machine has physically moved hardware. And what's great about this is, you think about patching and downtime situations, maintenance. In the old days, we used to have to schedule downtime for two in the morning: okay, take the server down, patch it, update it, bring it back up, get it going. Now, if I have to patch a Hyper-V host inside of a cluster, or an Azure Stack HCI host, which we're talking about in this particular session, I can simply move those virtual machines, patch the physical host that I'm working on, bring it back up, move the virtual machines back, do the next host, and just go down the line. So live migration really enables a lot of, I guess, quality-of-life enhancements for the IT pro. What do you think, Karsten? And let's add to that what's really mind-blowing: if you think of live migration, most people think of clusters, and Microsoft has been able to do that from Windows Server 2008 R2 onward. But with Windows Server 2012, and 2012 is nearly out of support, so it's already 10 years, we can also do live migration between Hyper-V standalone hosts. So you can move a VM from one standalone host, where the VM is on local storage, to another standalone host. But with local storage, of course, you have to take your data with you, the data of the VM. This is mind-blowing. So for getting a VM from a single node into a cluster, or from a cluster to another cluster, there are endless possibilities for live migration, and as you said, your VM is running, so the service is always up. But okay, let's keep going on here. Sorry, we don't have the time to go through all of the steps that are in the module, so there is additional information in the module; we are skipping a lot of things here, right?
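[Editor's note: to make the standalone-host scenario Karsten describes concrete, a shared-nothing live migration, VM, storage, and all, can be done with a single cmdlet from the source host. Host and path names here are just examples:]

```powershell
# Shared-nothing live migration: move a running VM and its storage
# from the local standalone host to another standalone host
Move-VM -Name "FileServer01" -DestinationHost "HV-Host02" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\FileServer01"

# In a failover cluster, the equivalent is a clustered live migration:
# Move-ClusterVirtualMachineRole -Name "FileServer01" -Node "Node2" -MigrationType Live
```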
I wish we had time to go into every little detail, but we'll continue on here. So we've talked about what Hyper-V is; let's talk about system requirements really quick. The key pieces that are required to run Hyper-V on top of a piece of physical hardware: you need a 64-bit processor with second-level address translation. You need the virtualization technology from the CPU, so if you're on the Intel side you need Intel VT, or if you're running an AMD processor inside of your physical host you need AMD-V. These are features that you may have to go into the BIOS of the system and enable. It's been a while since I've had to do this; I think most manufacturers are enabling these by default these days. You probably know that more than I do, Karsten. On servers they are always enabled, and I think on workstations and notebooks they are enabled nowadays too. Yeah, it's been a while since I've had to go and flick it on, so I think that should be set out of the box for the most part, but it's something to double-check. You also need sufficient memory for the host and guest virtual machines; how much memory depends on your particular use case. And then you also need data execution prevention enabled, and again, depending on your processor, that may be Intel XD or AMD NX inside of your BIOS. Now, if you just want a real quick-and-dirty method to figure out if the system meets the requirements, you can use systeminfo.exe from the command line, and it will give you that information. In terms of nested virtualization, there's a couple of things that you need to be aware of. For nested virtualization to work, you need to be running Windows Server 2016, Windows Server 2019, or Windows Server 2022. The functionality is also available on Azure Stack HCI, and that's for both the host and the guest operating system, so a key piece to keep in mind there. So again, kind of the same things that we talked about earlier: you need those virtual machine extensions enabled, you need extended page table capabilities in the physical host, and then on the guest VM itself. So you have your host ready, you're running Hyper-V on top of the host, and you have a virtual machine that you want to run a hypervisor inside of. In Windows Admin Center there's actually a graphical UI to enable the virtualization extensions for the virtual machine, or you can set it from PowerShell by using the Set-VMProcessor cmdlet to expose those virtualization extensions. Again, that's for the guest operating system. In order to install the Hyper-V server role, you can use Windows Admin Center, Server Manager, the typical tools that you would use to install roles on Windows Server, or you can use the Install-WindowsFeature PowerShell cmdlet as well. So, time for another check. That's right, the first knowledge check we have here. You can scan the QR code to go and answer this question, and we'll give you a couple of minutes to do this. The first question here is: which of the following is not required to implement Hyper-V on a physical server? Keyword here: not required. So the other two are required, right? You can only choose one; two of those are required, one is not, and we want to know which is not required. Let's see, let me see the poll. Yeah, we'll give people a couple of minutes here, probably 30 seconds, to take a look at this. But yeah, is it 64-bit processor with SLAT and VM monitor mode? Is it the guest virtual machine must be running Server 2016 or newer? Or is it DEP? Pretty sure I know which one it is here. I think so. Again, not required. Andy, if we both didn't know that, we'd immediately have to give back our MVP awards, right? I know, right? I'm not telling you which one is right. So I guess we'll go ahead and take a look here, and our answer is B: the guest VM must be running Server 2016 or newer. That is not a requirement for running Hyper-V, because you can run it on 2008 or 2012. I remember those days. I even got DOS virtualized on Hyper-V; you have to do some additional steps, but it
was possible I don't know if it's still possible but we can go way back with our virtual machines because the operating system is not supported anymore and you can do Linux very old Linux so the next one so where can an administrator obtain integration services for Windows Server 2019 Hyper-V guest virtual machines integration services are those drivers and services that live inside of the guest operating system right and important is the version here 2016 there was a change in older Hyper-V another answer was correct here but nowadays it's much much easier you remember maybe there was a file included right you had to add something and install something so nowadays it's much much easier definitely so we'll go ahead and show that the answer here of course is Windows updates I remember the days when you had to actually load up the installation media and well connect the integration services ISO right you had to connect it and then to install it and now because the integration services in the VM are kernel drivers and if you install kernel drivers you have to reboot the system and you do updates anyway and you get the new versions wire update you have to reboot the system anyway there is one exception to that there is something new in Azure where you have hot hot patching about for the normal virtual machine for the normal operating system you have to reboot it and that's the right place to upgrade those drivers so let's go on with the next session because we are 35 minutes into our our session and we have one module and this was not the biggest one right I know you've got we got a lot to cover in the storage and the networking section so this section is fairly short but this part we're focusing specifically on Azure Stack HCI itself and in the module you actually hear word of this fictional company Contosa right and if you're a veteran of Microsoft certification exams I'm sure you've heard of Contosa before but really most organizations and businesses they're trying 
to provide high availability for those mission-critical workloads, right? And that's really what Azure Stack HCI provides. And talking about the reasons for using Azure Stack HCI, I think before we go a little bit further on this, there's one thing I wanted to cover, and that is: Azure Stack HCI really is the culmination of all of those on-premises technologies that we've been using to date, right? You're going to talk about Storage Spaces Direct and software-defined networking here shortly; we've talked about Hyper-V already. And I think what I've seen happen, and you've probably run into this as well, Karsten, is that some people who are really die-hard on-prem folks may just look at the term Azure Stack HCI and assume that it's some Azure thing, right? And really, like I said, it's all those components that we've been using on-premises all along, now rebranded as Azure Stack HCI, this cohesive package. And it's running at the site of the user, of the customer; it's not something that is running in Azure. We can leverage Azure services, of course, to enrich Azure Stack HCI, but we don't have to. There's only one thing we have to do: we have to register the cluster in Azure, and you'll talk about that, maybe. It's my hardware, it's standing in my data center, in my environment, and I care about it, versus something in Azure that Microsoft cares about, and so on. It's still my thing, and that's important. When I talk to people, some assume it's an Azure service running in an Azure data center. It's not. It's on your premises; you have to care about it; it's your hardware, and so on. Exactly, yep. And I always like to preface the Azure Stack HCI conversation with that, just because it confuses some people; they just assume it's a service running in Azure somewhere. And so, back to the reasons why you want to use this, knowing that it's in your data center: you're going to run virtual machines on it, Windows, Linux; you may want to run some containerization workloads, so you can
actually run Azure Kubernetes Service on-prem using Azure Stack HCI. Same thing with Azure Virtual Desktop. These were traditionally born-in-Azure services that you can now run on-prem using Azure Stack HCI. So let's talk about the components, the different pieces. We've talked about Hyper-V already, but we have a nice diagram here on the right-hand side that shows a simple two-node cluster. You've got node one, node two; these are your physical servers, right? These are the physical servers inside of your cluster. And then down at the bottom you've got that clustered storage pool. Karsten's going to talk more about Storage Spaces Direct here in a second, but basically Azure Stack HCI is leveraging the in-chassis storage on each of those nodes and clustering it across the two, using these dedicated networks you see in the center of the diagram. So across these dedicated networks you've got cluster traffic and you've got storage traffic, east-west between the two nodes. And then the outside piece of the diagram, that is basically your production network, right? How are your clients connecting to the workloads that are being hosted by Azure Stack HCI? That's what this network does, and we'll talk about that a little bit more in the software-defined networking section later in the module. Now, I'm not going to go through everything on this slide, there's a lot of text, I know, but really the one key piece I want to talk about here is in the nodes section. Failover clustering is the service that Azure Stack HCI uses for high availability between all the physical nodes in the environment, and failover clustering itself can support up to 64 nodes. But the really key piece to remember here, when we're talking about Azure Stack HCI specifically, is that it only supports up to 16 nodes in a cluster. So, important bit there. In terms of number of virtual machines, an Azure Stack HCI cluster can host up to 8,000 guest VMs, and you can run up to a thousand virtual
machines on a single host, assuming your hardware can handle it, right? A couple of other key pieces here I've kind of mentioned already: you've got your clients that connect to your services running on top of Azure Stack HCI, and you've got all your various networks, your storage network, your cluster network, your production network, a lot of different networks, right? Karsten will be talking about that here shortly. The other key bit that I wanted to mention here, the second component of Azure Stack HCI, is the clustered virtual machine role. And I've found, Karsten, maybe you've run into this as well, I've found that this terminology kind of confuses some people, because failover clustering, as a service in Windows Server and on top of Azure Stack HCI, refers to a service that it's hosting in a highly available fashion as a clustered role, and a virtual machine is no different. So when I want to run a virtual machine in a failover cluster in a highly available fashion, Windows failover clustering and Azure Stack HCI see it as a clustered virtual machine role. So that's the key terminology to keep in mind there. Yeah, to add here: you can run virtual machines in a cluster, on the nodes, on the highly available storage, without putting them in the cluster, and there are use cases for that, but then the cluster is not aware of this virtual machine. So if it moves VMs from one host to another because you shut down the host, it is not aware of this virtual machine, because it's not a clustered virtual machine role, and it doesn't move it. So it's important that your virtual machines are clustered roles, but the concept is a bit shaky, right? Sorry. Yeah, no worries, it's good insight, definitely. So a couple of other things here to cover: resources. You might have other resources other than virtual machines inside of your cluster, so that would be things like networking, storage, cluster storage, so any highly available storage, or in the case of Azure Stack HCI you have Storage
Spaces Direct as well; Karsten will be talking about that a little bit more here shortly. And then the final thing that we wanted to cover as part of the introduction to Azure Stack HCI is the concept of quorum. Quorum basically represents the number of components inside of a cluster that have to be available for the cluster to be online. And really, the core thing to keep in mind with quorum, I always explain it in terms of a two-node cluster. So I've got two nodes inside of a cluster, working together to host virtual machines. Now, completely ignoring quorum for a second, you could run into a situation where one node thinks it's the only node online, and the other node thinks it's the only node online, and they're both like, whoa, hey, my buddy is gone, I need to bring up all these virtual machines to keep everything up and running. And now you have the same virtual machine running on two different machines, and it's a whole thing, right? What quorum does is we add what's called a quorum witness to the cluster, kind of a third component that has a vote in whether or not the cluster is online, and that helps prevent these kinds of split-brain situations that I just described. And the types of witnesses that you can use to act as this third vote are things like a file share witness or a cloud witness. A file share witness is basically something that you configure inside of failover clustering that says, hey, I want to use this external file share somewhere to act as a vote in my cluster quorum. So this could be on a Windows file share somewhere. In my lab downstairs, I'm actually using an SMB share off of a NETGEAR ReadyNAS device that acts as the file share witness for my cluster. The key bit here is you want to make sure that that file share exists somewhere outside of the cluster. You don't want it living in the cluster, because then if the cluster has issues, it can't reach the file share. So yeah, you want to
keep your file share outside of the cluster somewhere. Your other option here is using a cloud witness inside of an Azure storage account. This is a great option. The only thing that I always suggest people consider for production use cases: if you're going to use a cloud witness, you're probably already utilizing Azure cloud services in some way, shape, or form, and if you're going to depend on a cloud witness for your cluster quorum, you probably want to have redundant internet connections of some way, shape, or form, right? Karsten, I'm sure you've run into that yourself, right? Yeah, especially, and it's not in the focus of this presentation, but there's one great feature called stretch cluster in Azure Stack HCI, and if you have a stretch cluster it's so important that you have redundant internet connections to your witness. So now it's knowledge check time again. It is, it is. So what's the question? And the question we've got here is: what is the maximum number of nodes supported by Azure Stack HCI? And it was funny, when I was going through this, my knee-jerk reaction was to say one thing, and it was actually another, and you probably know what I did here. We're not asking about the maximum number of Hyper-V nodes in a Hyper-V cluster; it's specifically about Azure Stack HCI nodes, and that's quite different. So give the people another 30 seconds maybe to vote, to scan the code here or go to the poll, and then we will give you the right answer. Exactly. It was so funny, because I'm just like, oh, that's easy, I know that, and then I hit next when I was going through the slide, and it was like, oh, wait, that was one question I cover, and that's weird. So maybe you blend in the right answer: it is 16. So I did the whole thing where I'm like, oh yeah, I can do 64 nodes in a failover cluster. No, we're talking about Azure Stack HCI, which is 16 nodes. Flo, our moderator, is adding something: he
said he remembered when Karsten and Bernard, a Microsoft employee, built the world's largest Hyper-V cluster out of notebooks. I think it was 55 notebooks in a Hyper-V cluster. Those were the great times when we had the so-called IT camps, in the days when we showed the great Microsoft technologies to people in a one-day event. But let's go to the next question; it's about quorum. It is, yes. So, what's tricky here: quorum and witness, right? People always get this wrong. A witness is another vote, and quorum means a cluster has to have a majority, so more votes online than offline, right? Correct, yes. So: which quorum witness can an administrator implement by using a USB drive in Azure Stack HCI failover clustering? Disk Witness, Cloud Witness, or File Share Witness? And this is a tricky one, because a USB drive, it's not obvious here. If you think a drive is like a disk, you may be wrong. So I'm going to give it 10 seconds here. And you didn't mention the Disk Witness; I was thinking, should I add Disk Witness to the two witnesses you were talking about with Azure Stack HCI? So Disk Witness was not mentioned there, and that's for a reason. So what is the correct answer? Our correct answer is File Share Witness. Because you're thinking, USB drive, hey, I have this disk, I'm going to plug it in and it's a Disk Witness, right? Well, that's something else. In failover clustering and Azure Stack HCI we can use a File Share Witness, and you can utilize a USB drive in that fashion, or the Cloud Witness. The Disk Witness is not supported; it's only for a cluster with SAN storage. And that's maybe a segue to my presentation now. Now I will cover more of the presentation, so let me see. Yes, it works. So now, in this part, and I will lead this part and Andy will chime in with useful information, we will talk about software-defined storage. And software-defined storage, what is it? So, in the old days of
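Before the presentation moves on: the quorum and witness discussion above boils down to simple vote counting. Here's a minimal sketch of that math (the function is illustrative, our own naming, not anything from Windows failover clustering):

```python
def has_quorum(votes_online: int, total_votes: int) -> bool:
    """A cluster stays online only while a strict majority of votes is reachable."""
    return votes_online > total_votes // 2

# Two nodes, no witness: after a network split, each node sees only its
# own vote (1 of 2). Neither side has a majority, so neither should start
# the VMs alone -- that is the split-brain situation quorum prevents.
assert not has_quorum(1, 2)

# Two nodes plus a file share or cloud witness (3 votes total): the node
# that can still reach the witness sees 2 of 3 votes and keeps running,
# while the isolated node (1 of 3) stays down.
assert has_quorum(2, 3)
assert not has_quorum(1, 3)
```

This is why the witness must live outside the cluster: its vote only breaks ties if it can stay reachable when the cluster nodes lose each other.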
virtualization, and that doesn't mean you can't do that anymore today, but in the old days, when we had a Hyper-V cluster, we had SAN storage: an external storage system with network connections, like Ethernet connections with iSCSI, or Fibre Channel connections, and every Hyper-V host is connected to this external storage. And you buy hardware; it's a bit of a black-box concept, you don't know how everything works internally, and we have multiple Windows hosts, and this is a good solution. But nowadays, software-defined storage means we have our servers with internal drives, as Andy mentioned already. So you have additional drives in your servers; the number of them can be four additional drives, that's the minimum we need, and up to, let's say, 40, 50, 60. I have even seen Storage Spaces Direct implementations, certified systems, with 100 drives per node. And then we use those drives to build a highly available storage solution, so we don't need external storage; we have everything we need in our Azure Stack HCI nodes. So let me see. We talk about software-defined storage here; there are multiple parts that we need for that. We use storage virtualization: we don't have a hardware storage system; we use storage virtualization to separate our storage management and presentation from the underlying physical hardware, and it will become a little bit clearer as we go through the presentation. With software-defined storage, virtual workloads no longer require configurations like LUNs. If you have a SAN storage system, you usually present LUNs, logical volumes or physical volumes, to your hosts, and your virtual machines live in those LUNs, and you have to present every LUN to every Hyper-V node. So let me quickly draw a bit. We have a SAN system here, and then you create your LUN, and let's say we have two Hyper-V nodes, and the Hyper-V nodes are connected to our SAN system directly over a network, a storage network, and then our VMs live here, and the data is
here in our LUN. If we look at software-defined storage on the next slide, we can use Storage Spaces, and Storage Spaces was introduced with Windows Server 2012, so it's not a new concept. It is in 2012, 2012 R2, 2016, 2019, and now in Azure Stack HCI and also in Server 2022. So in essence, you have your local drives. If we look at a single node, let's say we have a single node, you have local drives in there, you have your C: drive, for example, and then you have additional drives here. And instead of using a RAID controller to create highly available storage, in the sense that if one drive fails your data is still there, you don't use a RAID controller; you use software, Storage Spaces, to say: this is a storage pool. So all the disks go into a storage pool, and then we can carve virtual disks out of the storage pool, and in our operating system, on these virtual disks, we can create partitions, we can create volumes, and the data in the virtual disk is spread over these physical disks. So we can have a mirror, we can have other types of resiliency like parity, or we can even have a mix between mirror and parity, and so on. So, a storage pool: we put all our disks in a storage pool, and then out of the storage pool we create our spaces, our virtual disks. And we can have multiple: we can have one virtual disk with a mirror, for example a two-way mirror, so the data is always there two times on our disks, but never both copies on the same disk; it's always using different disks. We create two copies of the data, so we can leverage the performance of all four drives, not only two, for a mirror; a RAID controller would use only two of them, with RAID 1 for example. And we can do parity: in this example with four disks we can do double parity, not a two-way mirror, double parity, sorry, over multiple disks. You know parity from RAID 5 or RAID 6; they keep parity information on different disks. So let's go to the next one. Why should you
use Storage Spaces? Usually, in Germany at least, everybody with a single server thinks RAID controller by default, right? Andy, you see the same, right? Yep, same over here. If I talk to people about Storage Spaces, even on a single node, they think: why? I have my RAID controller, it has worked for, let's say, 15 years, and it's great. Yeah, it's great, but Storage Spaces is even better, because we have some great features here. I already talked about the increased storage resiliency levels: we can have a mirror, we can have a three-way mirror, so we have three copies on three different disks; with a RAID controller, to be honest, maybe there are RAID controllers out there that can do that. We can do parity, okay. But we can also have virtual disks that do both: part of the virtual disk is a mirror and part is double parity, or parity. Why should you do that, you ask? Why go with this concept? Because the mirror is very fast for writing, but it doesn't have the best efficiency: with a three-way mirror you can use only 33% of your disk capacity for data, so if you need a 10-terabyte volume, you have to have 30 terabytes of devices. With parity you have better efficiency: for double parity it's 50% with four drives, and it gets better with more drives; and if you do single parity, like RAID 5, it's even better: with three drives you have 66%, with four drives you have 75%. So with this multi-resilient storage space volume, you have a fast landing zone for writing, but you also have a zone for your cold data where you have much larger space. So we can do something like tiering, even on a single server. Windows Server or Azure Stack HCI writes the data into the fast tier, and then, if there is not enough space anymore, it moves it to the other tier. So this improves our storage performance, and that's really great. And we can have different types of drives: we can leverage SSDs and NVMe, very fast storage, but they are of course a
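The efficiency percentages Karsten quotes fall out of simple arithmetic. A small Python sketch of that math (function names are ours, for illustration only):

```python
def mirror_efficiency(copies: int) -> float:
    # An n-way mirror keeps n full copies of the data,
    # so only 1/n of the raw capacity is usable.
    return 1 / copies

def parity_efficiency(total_drives: int, parity_drives: int) -> float:
    # Parity reserves roughly one drive's worth of capacity per parity
    # column: one for single parity (RAID-5-like), two for double
    # parity (RAID-6-like). Efficiency improves as you add drives.
    return (total_drives - parity_drives) / total_drives

assert mirror_efficiency(2) == 0.5                 # two-way mirror: 50%
assert round(mirror_efficiency(3) * 100) == 33     # three-way mirror: ~33%
assert parity_efficiency(4, 2) == 0.5              # double parity, 4 drives: 50%
assert round(parity_efficiency(3, 1) * 100) == 67  # single parity, 3 drives: ~2/3
assert parity_efficiency(4, 1) == 0.75             # single parity, 4 drives: 75%
```

This is exactly the trade-off behind mirror-accelerated parity: writes land in the fast but space-hungry mirror tier, cold data settles in the space-efficient parity tier.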
little bit more expensive than hard drives. So we can have SSDs for our fast storage and HDDs for our slow storage, and we can do all of that with Storage Spaces, and even do this mirror-accelerated parity. So we have so many options here to design the right storage for our needs. I love Storage Spaces, and when we take Storage Spaces into a cluster, for Storage Spaces Direct, it's even better. So, improving storage performance, I already talked about that; increasing storage efficiency, I talked about that. And there is another feature: thin provisioning. In Storage Spaces you can also use thin provisioning. You say, I want to have a 5-terabyte volume, but it shouldn't immediately consume all the space the 5 terabytes would need. Imagine 5 terabytes, two-way mirror: that's 10 terabytes, so 10 terabytes are gone. But with thin provisioning, only the data you actually put into the volume is used, so you have a lot of space left. You can create your 5-terabyte volume, but it will only use maybe a few hundred gigabytes, because the data you put into the volume only uses a few hundred gigabytes. And if you delete something, the space is freed, because it uses a trim feature. So this is great if you don't know how much storage you really need in the end, how many VMs you'll put on the storage; you don't have to buy all the storage up front, you can leverage thin provisioning, create bigger volumes, and then, if you need more space, you can of course add SSDs, NVMe, whatever, afterwards. The only thing you have to watch out for with thin provisioning is over-provisioning your storage. You can over-provision using thin provisioning; you just want to make sure you don't actually run out of physical disk space, so you have to keep a close eye on the disk space utilization. Ask me how I know. Sometimes, I think, Andy, you've gone down this rabbit hole? I have done it, and there are so many layers. If you look at SAN storage, there is deduplication and everything; we have that in Windows too. We have thin provisioning, then with our virtual
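The "keep a close eye on it" advice amounts to tracking two numbers: how much you have promised versus how much the pool can physically back. A toy sketch of that check (all names and the 80% threshold are illustrative assumptions, not a Windows feature):

```python
def overcommit_ratio(provisioned_tb: list, pool_capacity_tb: float) -> float:
    # With thin provisioning, the sum of volume sizes may exceed what the
    # pool can physically back; the ratio tells you by how much.
    return sum(provisioned_tb) / pool_capacity_tb

def needs_attention(used_tb: float, pool_capacity_tb: float,
                    threshold: float = 0.8) -> bool:
    # The real risk is actual consumption creeping toward the physical
    # limit; alert well before the pool fills up.
    return used_tb / pool_capacity_tb >= threshold

# Three thin 5 TB volumes on a 10 TB pool: 1.5x over-committed -- fine,
# as long as you monitor real usage and add drives in time.
assert overcommit_ratio([5, 5, 5], 10) == 1.5
assert not needs_attention(4.0, 10)   # 40% used: fine
assert needs_attention(8.5, 10)       # 85% used: time to add capacity
```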
machines, you have dynamic virtual disks that can grow; then sometimes people use deduplication inside the VMs. So you have multiple layers where you can run out of disk space, right? So you have to monitor it. You're absolutely correct; this is a danger when we use something like that, a danger that you over-provision all this stuff. Another concept we need: Storage Spaces, which we talked about, you can do with single nodes; it's a base technology we use in Azure Stack HCI. But Azure Stack HCI is not single-node; we need at least two nodes, we need a cluster for that. You talked about the concept a bit. So Storage Spaces works for both, but if we present storage in a Hyper-V cluster, and Azure Stack HCI is in essence also a Hyper-V cluster, we want our virtual machines to be able to run on different nodes and always reach the storage. So we need something that is not a local disk, a local presentation of the storage; we need something where the storage is presented to every node in our cluster, and Microsoft uses Cluster Shared Volumes for that. Cluster Shared Volumes are important because of the file systems we use in a Microsoft environment: NTFS, or with Azure Stack HCI, ReFS, and these file systems are not per se cluster file systems. So if different virtual machines or different nodes write into the same file system, since it's not a cluster-aware file system, it can happen that the nodes change the metadata, because they extend a file or something, and then the metadata is overwritten: one node changes it, the other node changes it, and something is lost in the process. So we need a cluster file system, and in the Microsoft space, Cluster Shared Volumes are that. We have one owner node who is responsible for all the metadata changes in our volume, but every other node can also write into the volume and read from the volume; if there is an operation that requires a metadata update or change, the owner will do it for the
other node. So a Cluster Shared Volume is the cluster file system we need for our highly available virtual machines, and the reasons I already mentioned: if we want to cluster Hyper-V VMs, we need a Cluster Shared Volume. And there is another concept, a Scale-Out File Server, hosting application data accessible through SMB 3. In essence, the Scale-Out File Server is an option we have with Windows Server: we can do Storage Spaces Direct with a Scale-Out File Server, but it's not an option in Azure Stack HCI; the Scale-Out File Server is only available with Storage Spaces Direct on Windows Server, not in Azure Stack HCI. Microsoft has a great network protocol for file access; it's called SMB. You've heard of it already; most people have heard of it and use it on a daily basis. It's Server Message Block, SMB, and in Windows Server 2012 Microsoft introduced SMB 3. So SMB has a long history in the Microsoft ecosystem. With Windows Server 2012 we got SMB 3, which has all the great features we need for running virtual machines on shares, so that the data of a virtual machine lives on a share. Before that we couldn't do that; there was SMB 2.1 in Windows Server 2008, and that is not supported, by the way. So every supported Microsoft operating system can leverage SMB 3.0 and beyond, but there are of course other operating systems. And we have a long history at Microsoft, going back to Windows for Workgroups 3.11; that was in 1993, so in the last, is it millennium? I don't get the word right. But the history goes far back; when SMB started it was not called SMB, but those are the roots. Nowadays we use SMB 3.1.1, and SMB 3 has some great features that are especially useful for Hyper-V virtual machines. So let's talk about some reasons why we should use SMB 3. And of course, Andy, I think you use SMB 3 a lot. Oh yeah. I mean, if you use Hyper-V these days, you're using SMB. When you were talking about SANs earlier, and the way we used to do things in the data center, where you'd carve off a LUN and then
you'd provide access to that LUN across your storage network, whether it was iSCSI or Fibre Channel, I think about how we do things today with Azure Stack HCI, failover clustering, and Storage Spaces Direct versus then. It seemed a lot more complicated back then than it does now, right? Leveraging SMB for our cluster storage today is so much easier than managing a storage fabric back in the day. Just my opinion. Yeah. What I find, when I talk to a lot of people about SMB, is they always think client-server, so Windows 10 and the file server, but SMB can do so much more, and all the great new features are for Hyper-V and for using highly available storage. For example, we have Hyper-V over SMB; this is, for me, the main usage for SMB 3, of course. But we can also do SQL over SMB. Imagine you have a SQL cluster, for example, and you want to put your SQL databases on a central, highly available storage system. This is not the main scenario for SQL high availability today; there is SQL HA, Always On, where you have your data on every SQL server and a kind of replication, but there are also customers who want to store their SQL databases on a highly available storage system, and for that we also have Microsoft SQL Server over SMB. But the main case, really, for SMB 3 is Hyper-V over SMB 3. And there are some amazing features in SMB 3 that I think no other storage protocol has today, and SMB has had them for, what, 10 years? Yes, for 10 years. For example, SMB Multichannel. This is really a big one; I don't think any other storage protocol can do this without the help of external features. So what is SMB Multichannel? It is redundancy. If we have multiple paths between an SMB client, think Hyper-V, and an SMB server, let's think Scale-Out File Server, where our VMs are living, if we have multiple network cards, SMB Multichannel will automatically detect those network cards and use them, and we don't have to configure
teaming, LBFO teaming, to create one virtual card; we don't have to configure MPIO. Those are the external helpers I meant when I said SMB Multichannel doesn't need them. Of course you can use Fibre Channel or iSCSI with MPIO to leverage multiple paths between your server and your storage, but with SMB Multichannel it's built in, and it's done auto-magically. I love the word auto-magically; it really does work magically. The protocol finds redundant paths to the server and uses them, and if you lose one connection, if you have multiple, you can lose one; there is redundancy built in by default, so nothing happens to your workload unless you're down to one connection left. So we have network fault tolerance, and we have use of multiple paths. And even if you have only one network card between the nodes and a system with many cores: when you do SMB 3 over TCP/IP, older SMB implementations, up to 2.1, only leveraged one core to move the data between the nodes. So you could use only one core, and one core can't do 25 gigabits of data movement; honestly it can do maybe 5 gigabits, and then it can't do more, because we have cores on both sides. So Multichannel, even if you have one 10-gigabit card or one 25-gigabit card, will open multiple connections between the nodes and leverage multiple cores on the sender and the receiver to get your data between the two hosts. And that's amazing, it's really amazing; I'm still amazed by the possibilities of SMB Multichannel. I have a video about that, but I can't show it now because we don't have the time. Speaking of time, let's go on with our presentation. So, Scale-Out File Server. I mentioned one scenario for Storage Spaces Direct is a Scale-Out File Server, and this is this part of the slide here. We have four nodes here with internal storage; you see there are four disks in every node, and we can build a highly available Scale-Out File Server. So our virtual machines are running on a separate
cluster. So we have two clusters here: one Hyper-V cluster and one Scale-Out File Server cluster leveraging Storage Spaces. Two clusters, and the VMs are running in this cluster, and they leverage the SMB 3 protocol, all the cool stuff, SMB Direct, I skipped over SMB Direct, so I will go back a slide in a minute, and SMB Multichannel, to connect to the nodes here. And this is really amazing. So this is the disaggregated model, or a converged model. This is not available in Azure Stack HCI, but with Windows Server you can do a Storage Spaces Direct cluster and offer it as highly available storage for your VM workload. And let's look at, I thought I added something here, but it's gone. So, SMB Direct. What is SMB Direct? I mentioned that with SMB Multichannel we leverage TCP/IP for SMB 3 to transport our data to the other side, and we need CPU performance for that. We need a lot of CPU if we have, imagine, 25-gigabit connections or even more. In my company we have 100-gigabit switches, and I have some 100-gigabit cards. So imagine you would do that with SMB 3 over TCP/IP: if one core can move 5 gigabits of data and you want to leverage 100 gigabits, you would need 20 cores to move the data between the two nodes. So, 20 cores on every node, and you would say: are you crazy? 20 cores just to move the data? Is there not something else? Yes, there is: SMB Direct. With SMB Direct, we have our memory, we have our kernel here with the TCP/IP stack, and then we have our network card. If we move data from one node to another, and here we have the same on the other side, of course, we go from memory through the kernel, there we use the cores, then over the network, and again through the kernel, using all that CPU power to move the data. But if we have RDMA-enabled cards, and SMB Direct is SMB 3 over RDMA, then in essence the network card uses
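Karsten's 20-core figure is straightforward division; a quick sketch of that sizing math (his ~5 Gbit/s per core is a rough rule of thumb, not a fixed constant, and the function name is ours):

```python
import math

def cores_to_saturate(link_gbps: float, per_core_gbps: float = 5.0) -> int:
    # Without RDMA, SMB over TCP/IP spends CPU cycles pushing every byte
    # through the kernel: roughly one core per ~5 Gbit/s of traffic.
    return math.ceil(link_gbps / per_core_gbps)

assert cores_to_saturate(100) == 20  # the 100-gigabit example: ~20 cores per node
assert cores_to_saturate(25) == 5    # a 25-gigabit link: ~5 cores
# SMB Direct (RDMA) sidesteps this entirely: the NIC moves the data
# between the hosts' memory without burning those cores in the kernel.
```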
DMA, direct memory access, to grab the data out of memory, transfer it over to the other card, and put it directly into memory on the other side, without going through the kernel and without using CPU. And that is something Microsoft has had implemented since Windows Server 2012. Other vendors are now adopting RDMA for their operating systems, but Microsoft has had it for 10 years now, and they are really good at it. So if you have the opportunity, leverage SMB Direct in your clusters. So, reasons for a Scale-Out File Server, and I see we have just 14 minutes left, so I will speed up a bit. Yeah, you're right. Scale-Out File Server is not available in Azure Stack HCI, so to cover our whole content I would say: read up on the Scale-Out File Server in the documentation, and I will skip that part. I will add this: guest clustering. What is guest clustering? If we have an Azure Stack HCI cluster with VMs on it, and you want to have a highly available application running in VMs, for example, let's say Exchange, yeah, there are still some people that don't leverage Office 365; they still run, for some reason, Exchange Server on-premises, but they want a highly available Exchange Server. So that's the Exchange, how do you call it, a DAG. We have two Exchange servers, or more, running on different nodes, and they copy the data over to another node. That would be one scenario for a guest cluster. There are other guest cluster scenarios, for example, putting a Scale-Out File Server as a guest cluster onto an Azure Stack HCI cluster. Why should you do that? Why put a highly available Scale-Out File Server into an Azure Stack HCI cluster? Because you want to have, for example, you mentioned AVD, or VDI scenarios, virtual desktop infrastructure scenarios, and Microsoft has a way to put user profiles into virtual disks, called user profile disks, and to store those user profile disks so that they are available on different VDI VMs, so that you always find your own environment, with your
profile, with all your data in it, you have to store them somewhere central, and it would be nice if they're not gone when a server failure happens. So: a highly available Scale-Out File Server for user profiles. I have done that often at customer installations. So then we have a guest cluster Scale-Out File Server. Let's say we have our Azure Stack HCI cluster here, our nodes, and then we have VMs on it: this is our file server one, this is our file server VM two, S2, and let's say this is S1. These are virtual machines running on the hardware nodes, and then we have our virtual disks here, and we can build a highly available Scale-Out File Server that is used for user profiles. But now, we don't want these two VMs running on the same hardware node; for that we have affinity rules in Azure Stack HCI, but we don't have time for that. So there are reasons why you'd want to put a guest cluster into a hardware cluster, and the Scale-Out File Server is one of those. So, this is an example of Storage Spaces Direct. Here you see we have our storage pools, which we talked about; we have our software storage bus; we have our nodes here; we have high-performance Ethernet, preferably with RDMA, so the nodes can communicate with each other; and then we have our virtual machines running on a storage space, on our CSVs, preferably with ReFS, and that's where the data of the VMs lives. So this is quite a lot to take in, of course, but it becomes clearer if you follow the other sessions that are coming up in future modules. Hyper-V workload model on Storage Spaces Direct: this is the disaggregated model, which is basically when you have a separate Hyper-V cluster and a separate Scale-Out File Server cluster, so two clusters. And there is another model, the hyperconverged model, where our VMs run directly in our storage cluster, so we have both the virtual machines and the storage in the same cluster; that's
And hyperconverged is the model that Azure Stack HCI uses; HCI literally stands for hyperconverged infrastructure. So here we have it: Azure Stack HCI only uses the hyperconverged model.

Next is Storage Replica, another great storage feature, and I knew I wouldn't have enough time for all these great features. Storage Replica, in essence... ah, these are the wrong pictures here, these are my RDMA pictures, I put them on the wrong slide. This is live, of course. Storage Replica is a possibility if you have one node with your data and you want to have the data in another site for redundancy and disaster recovery. We have our VM here, it is writing data into our volume, and with Storage Replica every change is synced to the other volume. So we can have synchronous replication: every write that is done here is moved over and acknowledged, and only then does the VM know the data is also on the other site.

But there is also the possibility of asynchronous replication if the sites are very far apart, let's say 100, 200, 300 kilometers or even miles. With asynchronous replication we don't wait for the acknowledgement from the other site; it's still an ongoing replication. Synchronous is for sites that are closer together, maybe on the same campus, for example. And in Azure Stack HCI we can have a stretched cluster that leverages Storage Replica to replicate the data to the other site.

So now I have to speed up. We skip the knowledge check; you have the knowledge check in the module, and we go directly to the last module. Yeah, for the sake of time, because there's a lot of good networking stuff to talk about. We could do an entire video just on the storage technologies; there's so much to talk about in storage in Azure Stack HCI and Windows in general, but we sadly don't have the time to do it. In my course, a five-day course about Azure Stack HCI, it takes me a whole day, even more, to cover all the storage possibilities.
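The synchronous versus asynchronous choice described above maps directly onto the in-box Storage Replica cmdlets. A hedged sketch: the server names, volume letters, and log volumes below are hypothetical placeholders, and you would normally validate the design with `Test-SRTopology` before creating anything:

```powershell
# Validate the proposed replication topology first (placeholder names)
Test-SRTopology -SourceComputerName "Node1" -SourceVolumeName "D:" `
    -SourceLogVolumeName "E:" -DestinationComputerName "NodeRemote" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "E:" `
    -DurationInMinutes 10 -ResultPath "C:\Temp"

# Create the partnership. Synchronous: every write is acknowledged
# by the remote site before the VM sees the I/O complete. For sites
# that are far apart, switch -ReplicationMode to Asynchronous.
New-SRPartnership -SourceComputerName "Node1" -SourceRGName "RG1" `
    -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
    -DestinationComputerName "NodeRemote" -DestinationRGName "RG2" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "E:" `
    -ReplicationMode Synchronous
```

The log volumes are where Storage Replica journals writes before shipping them, which is why each side needs both a data volume and a log volume.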
So let's go to our last module: software-defined networking. We leverage software-defined networking in Azure Stack HCI. There are different kinds of software-defined networking, and the whole concept is the one used in Azure. In Azure there are millions of users in the same data center on the same networks, and you have to separate those users from each other so that the VM of one user can't communicate with the VM of another user.

In smaller environments we use VLAN technology, where you have different VLANs. If you don't know what a VLAN is, look it up; in essence it's a way to create virtual networks on top of Ethernet, so you create virtual Ethernet networks, and only the users in the same VLAN can communicate with one another. So you create a kind of isolation.

The software-defined networking that we can use in Azure Stack HCI does this separation not on the switch level but on a software level. We get network abstraction: we don't have separate switches for every user, it's the same Ethernet technology, but on top of that we encapsulate the user's Ethernet packets again in Ethernet packets, and we can encrypt them. So we really have network abstraction. We can leverage network policies, we can have firewall rules, we can say which VM can communicate with which other VM, we can define rules at that level, and so on.

Then we have network management. We can create everything here with PowerShell, but not every IT pro loves PowerShell, so we also have the possibility to implement software-defined networking, create the rules, everything, with Windows Admin Center. And Microsoft honestly did a great job in Windows Admin Center addressing these software-defined networking parts, because they are a bit more complex than plain Ethernet; you have additional layers.
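As a small illustration of the simpler VLAN-level isolation mentioned above (the non-SDN mechanism used in smaller environments), this is how you might tag a VM's virtual NIC into a VLAN with the in-box Hyper-V cmdlets; the VM name and VLAN ID are hypothetical:

```powershell
# Put the VM "Tenant1-VM" into VLAN 120: its traffic is then
# isolated at the virtual switch from VMs on other VLAN IDs.
Set-VMNetworkAdapterVlan -VMName "Tenant1-VM" -Access -VlanId 120

# Inspect the current VLAN configuration of the VM's adapters
Get-VMNetworkAdapterVlan -VMName "Tenant1-VM"
```

The physical switch ports carrying this traffic must be configured to trunk the same VLAN IDs, which is exactly the switch-level dependency that SDN's software-level encapsulation removes.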
And in Windows Admin Center you can even debug software-defined networking: you have the possibility to filter for packets so that you see the flow of the data. SDN used to be really complex to manage, and yeah, Windows Admin Center did a fantastic job of simplifying that. That's so true; very good job.

So what are the primary components of software-defined networking? We have, of course, Hyper-V Network Virtualization; we need Hyper-V for that, we need the Hyper-V switch to add this additional layer. Then, for encapsulation, Microsoft first went with Network Virtualization using Generic Routing Encapsulation, NVGRE. You can still leverage that, because old implementations of SDN used it, but nowadays we use Virtual Extensible LAN, VXLAN. It's an industry standard; many other vendors also leverage VXLAN, so Microsoft decided to go that way. In essence we add one more layer on top of Ethernet.

Then we need the SET switch. SET, Switch Embedded Teaming, is the newer Hyper-V switch; Microsoft introduced it with Windows Server 2016. So don't use the old Hyper-V switch, use the newer SET switch. And we need an instance, a brain, that knows which packet is encapsulated in which frame, because we have these additional layers with encryption. That is the network controller. And to be honest, we need more than one network controller, because if your network controller is gone, nobody knows where the packets belong. So we need a network controller cluster, usually three to five nodes.

Those are the primary components. We could talk more about SDN, but we are running out of time, so let's wrap up the session here. I skip the questions again; there are some great questions there, but I have to run through the slides. So let's go to the last slide. Where are we? Here, last slide. Let's just skip the summary... well, we talked about a lot in this module. We described Hyper-V and its components, you did that, and in essence we described Azure Stack HCI and its components.
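Before the wrap-up, one last sketch: the SET switch described above is created with a single in-box Hyper-V cmdlet. The adapter and switch names here are placeholders:

```powershell
# Create a Switch Embedded Teaming (SET) virtual switch that teams
# two physical NICs directly inside the Hyper-V switch, with no
# separate LBFO team underneath. RDMA can then be enabled on host
# vNICs on top of it, which classic LBFO teaming does not allow.
New-VMSwitch -Name "SETswitch" `
    -NetAdapterName "NIC1", "NIC2" `
    -EnableEmbeddedTeaming $true `
    -AllowManagementOS $true
```

Passing more than one adapter to `-NetAdapterName` is what triggers embedded teaming, and it is the reason SET is the required switch type for RDMA-converged Azure Stack HCI deployments.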
Then I talked about software-defined storage, a lot to cover there, and the last module was a run-through, with emphasis on run-through, of software-defined networking. A lot of info in a short period of time, definitely. If you want to learn more, here is a link to the summary of the module, and here again you can see where to learn more. And the last slide I have for you is the upcoming sessions. The next one is Introduction to Azure Arc Enabled Services, this afternoon our time, or in the morning in the US. Most suitable for you, Andy, right? It is, yes, 11:30 Pacific time, so that would be evening for you in the EU, I believe. Definitely a great session there. Azure Arc is super cool, so I would definitely highly suggest you check that one out; more information is in the QR codes there.

So, well, with that, Karsten, I think that's about it for us. We're wrapped up. Thanks everyone for watching, thanks Karsten for being on, and we hope to catch you again sometime soon. Andy signs off, and bye!