Hello and good morning, good evening, good night, wherever you are. Welcome to today's live TV session. Greetings from Germany. You have two hosts today, and we would like to introduce ourselves first. I'm Sven Langfeld. I'm from Microsoft Germany, and I'm the Senior Azure Stack HCI Commercial Sales Specialist for Germany, Austria and Switzerland. Here you can see my email address, my LinkedIn account and my YouTube channel. I invite you to follow me and to get in contact with me directly whenever you want to, especially on LinkedIn and YouTube. I'm posting a lot of content about Microsoft solutions, and especially about Azure Stack HCI as well. And I have with me Manfred Helber. Please introduce yourself, Manfred. Yeah, hello to the audience. I'm also based in Germany, in a different city than Sven. My name is Manfred Helber. I'm a Microsoft Most Valuable Professional in the category Cloud and Datacenter. The Cloud and Datacenter category covers, from a historical view, the topics of clustering, storage and Hyper-V. Now we also have the Azure Stack HCI operating system, which we are focusing on today. My social activities are also mainly focused on LinkedIn, so feel free to connect with me there, and on YouTube, where I'm publishing a lot of content about Azure Stack HCI and Storage Spaces Direct. And we have a third person from Germany in our call today. He is not visible to you, but he's behind the scenes managing your questions. So we invite you to ask questions in the chat, and Flo Fox is answering those questions. Some of those questions are coming directly into our studio, and we are happy to answer them live today. So let's start with our session. What is it about? It's about planning and deploying Azure Stack HCI. This is a session you can find on the Learn Live website as well, and today we want to run you through the content. We have some additional content to make it more practical, to make it more live.
And of course, if you go to the website, you will find more or less the same content there. We have some questions for you today as well. Not all of the questions on the website will be in our show today, but some of them. So hopefully we will have a lot of fun together. Let's have a look at the topics we are going to talk about. First, we start with planning for Azure Stack HCI, then we will switch over to deploying and validating the deployment. And at the end, we're talking about integrating Azure Stack HCI with Azure. I already mentioned that we will have some knowledge checks, some questions we are going to ask you. So please prepare yourself: go to aka.ms/polls or use the QR code. At the moment, you don't see any questions, but when we ask a question, you will see it there. Then you have some time to answer, and we will see how good your knowledge is so far. So Manfred, are you ready for today's session? I'm ready for the session, and I'm looking forward especially to the live demos we added to the content, to ensure that you see everything we are talking about also in some demo steps we have prepared for you. Okay, so let's start with the question: what are the use cases for Azure Stack HCI? Talking about use cases, I get a lot of questions like: which customers should we sell Azure Stack HCI to? Azure Stack HCI is a hyperconverged infrastructure. Hyperconverged infrastructure is a modern way to deploy infrastructure, to bring infrastructure into data centers. It's not the typical three-tier infrastructure with separate network, compute and storage; everything is included in one HCI solution. So if we have customers running a virtualization cluster today, Azure Stack HCI could be good for them, but there are some more use cases, and we are going to talk about them today. Important to know is that Azure Stack HCI is an operating system, and on that operating system you can run guest operating systems like Windows Server, Linux-based systems, and so on.
And yeah, this is what we are going to talk about today. We will see what type of hardware we need to deploy Azure Stack HCI. We have validated nodes and integrated systems we are going to talk about. And this is what we want to start with right now, with a short live demo showing where you can find the perfect match for your infrastructure from the hardware perspective, and how to find out how much Azure Stack HCI you need. We have a catalog and we have a Sizer, and Manfred has already prepared our first short live demo and will show you how this tool works. Yes, when you enter the website — you can find it at microsoft.com slash HCI — you will find all the information about Azure Stack HCI, the operating system itself. You can download the free trial and you can find the Azure Stack HCI partners there. When you click on "find Azure Stack HCI partners", there are two options for you. One option is to decide for a dedicated solution when you already know which partner you prefer. If you browse the catalog here — and I will open the catalog in a new tab — you will see a list of all the systems that are available. The important thing to see is that we have more than 400 solutions available in the catalog. In the catalog, we have the differentiation between integrated systems and validated nodes. We have seen this on the slide Sven presented to us, where Microsoft recommends the integrated systems. We can also read here that the integrated systems provide some additional value. The validated nodes are also suitable to run Azure Stack HCI — we don't have differences regarding the performance or the stability — but with the integrated systems, we have more tools to manage these systems out of Windows Admin Center. With the integrated systems, we can, for example, deploy the firmware updates and the driver updates within the update process.
So when you do your planning here with the Azure Stack HCI Sizer, you have the advantage that you really can plan your specific project. If we create a new project here — for example, "my project seven" — I can select the hardware I will use: I can decide if I want to choose an integrated system or a validated node configuration, then I can select the CPU family I want to use, Intel or AMD, and I can select the solution builder. If you're wondering why some of them are grayed out: I selected the integrated systems, and if I select integrated systems here, I will only see the integrated system partners on the right-hand side. If I check validated nodes, I will also see the validated node partners. Then I can select the availability level, so the redundancy I need, and I can decide on the storage type, for example all-flash or hybrid — we will talk later about the different storage options when we get to the planning module. We can also add our planning for future growth. Now, I don't want to highlight a specific vendor, but I have to select one here because the Sizer is only usable if we select one hardware vendor; I will select any storage type. In the next step, we can configure our workload. And this is the great thing about the Sizer, because for the workload we can specify the size and the configuration of our VMs: how many VMs do we have, what is the CPU configuration, what is the memory configuration, and also the virtual-to-physical ratio we want to use for them. So let's, for example, take 10 of these general virtual machines, and add another workload, for example SQL Server — let's say we have two of the SQL Server VMs — and we can add additional workloads like file servers and whatever we have. Maybe we have some Exchange workload here, or VDI workload — maybe 10 of these VDI virtual machines.
And the result of the Sizer will be a recommendation for a specific configuration that fits our workload. The advantage is that you will get a result where you can see how many resources are used for your workload, what is planned for high availability, and what is reserved for future growth. We have this information for the CPU, the memory and the storage. This tool is relatively new; it was published a few weeks ago. And it supports you in selecting the perfect solution when planning for Azure Stack HCI. Most of the OEMs have their own sizers as well. They go a little bit more into the details related to their hardware and their configurations, and they make recommendations about how many cables you need and so on. So please ask your OEM about their specific sizer. It's an additional tool to find the perfect match for your data center. So let's get back to the presentation, and then we will talk about the use cases. You can see some icons here on the screen. You can find those icons as well when you're choosing your hardware, because these icons are used by the OEMs to say: okay, my hardware configuration A is perfect for one or more of those use cases. We will see branch office and edge, virtual desktop infrastructure, high-performance SQL Server, trusted enterprise virtualization, and Kubernetes. We will go through all five of those topics right now to explain what is behind them. If you see one or more of those icons, those badges, you know this hardware is recommended by the OEM to be used in these scenarios. Before we go into the details of those five use cases, I recommend downloading a white paper. Microsoft and Intel worked on a very good white paper where they went through all those use cases, defined the five most relevant ones, and described exactly what each use case is about and how it works on Azure Stack HCI. So it's a very good one.
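To give a rough feel for the kind of math a sizer performs behind the scenes, here is a simplified sketch in Python. The workload tuples, the 4:1 vCPU-to-core ratio, the node specs, and the N+1 high-availability reserve are all illustrative assumptions — the real Azure Stack HCI Sizer uses vendor-specific data and is more sophisticated:

```python
def required_capacity(workloads, vcpu_per_core=4, node_cores=32, node_memory_gb=512):
    """Hypothetical sizing sketch: workloads is a list of
    (vm_count, vcpus_per_vm, memory_gb_per_vm) tuples.
    Returns (nodes_needed, physical_cores_needed, total_memory_gb)."""
    total_vcpus = sum(count * vcpus for count, vcpus, _ in workloads)
    total_mem = sum(count * mem for count, _, mem in workloads)
    pcores = -(-total_vcpus // vcpu_per_core)        # ceiling division
    nodes_for_cpu = -(-pcores // node_cores)
    nodes_for_mem = -(-total_mem // node_memory_gb)
    # N+1 reserve for high availability, and never fewer than a two-node cluster
    nodes = max(nodes_for_cpu, nodes_for_mem, 2) + 1
    return nodes, pcores, total_mem

# The demo's example mix: 10 general VMs (4 vCPU / 16 GB),
# 2 SQL Server VMs (8 vCPU / 64 GB), 10 VDI VMs (2 vCPU / 8 GB)
workloads = [(10, 4, 16), (2, 8, 64), (10, 2, 8)]
print(required_capacity(workloads))  # -> (3, 19, 368)
```

The point of the sketch is simply that the Sizer works backward from VM counts and per-VM resources to a node count, adding headroom for failover and future growth.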
Here you can see the Bitly link to get access to one of my LinkedIn posts, where I posted the document to be downloaded as well. So let's start with branch office and edge. What does this mean? Azure Stack HCI is something we sell very often at the moment in the retail and manufacturing space, for example. Retailers want to have some kind of compute in their branches, and we have manufacturers who need powerful and reliable infrastructure in their machines and so on. This is what branch office and edge is about. Manfred, do you have any insights, any feedback about branch office and edge? Yeah, the branch office and edge scenario is also in the focus of our hardware partners, because several of them have designed specific solutions, because the demand is there in the market. And we have a lot of mechanisms in our management tools — in the Windows Admin Center already mentioned, and also in the Azure Arc integration — that are related to these branch office scenarios. So we have optimized workflows to manage these workloads at scale. Yeah, and "at scale" is a good lead-in to the next slide, because Azure Stack HCI is very scalable and it starts small. You can start with a two-node cluster; you only need four SSDs per node, and then you have your start into the Azure Stack HCI experience. And this is something we sell very often, don't we, Manfred? Yes. As you mentioned, this is the entry level regarding the sizing, but not regarding the performance. Even in this two-node configuration, we have full redundancy: we can use nested resiliency, which means that the data is mirrored across the servers and also inside each server. We don't use RAID controllers there — we will talk about this later in the planning module — but at this point already, the information: the disks are presented directly to the operating system, full redundancy in this two-node configuration, high performance.
What we need additionally is some Active Directory infrastructure, but usually the customers have this. This small scenario is perfect for the smaller customers. Here in Germany especially, we have many, many small companies that are running their hypervisor infrastructure at this small cluster scale, and this is perfect for these small and medium-sized companies. And also for the companies that have many branches — the retailers, the retail markets, that can run the small clusters in their branch offices and a huge Azure Stack HCI cluster in the central site. So let's jump to the next one: virtual desktop infrastructure. Of course, on Azure Stack HCI we can run classical VDI and RDS environments, as on a typical Windows Server in the past as well. But Azure Stack HCI brings additional benefits, because Azure Virtual Desktop — a solution that is only available in Azure right now — is in private preview for Azure Stack HCI, and it will be available some time this year. Then you will be able to run your Azure Virtual Desktop on premises in your own data center. Manfred, you added this picture to the slide. Is there something you want to add? Yeah, the great thing about Azure Virtual Desktop is that we have the traditional infrastructure we know from traditional RDS deployments. In the middle of this picture, where we can see "Azure Virtual Desktop, Microsoft managed", are the web access, the broker, and the gateway. This is hosted in Azure, whether we use Azure Virtual Desktop in Azure or on Azure Stack HCI. But on the right-hand side, we can see that our desktops can be provided by Azure Stack HCI — on our hyperconverged infrastructure, on our servers on premises. And the great thing about Azure Virtual Desktop is that this is the only technology where we can use Windows 10 or Windows 11 multi-user, or multi-session. Usually an RDS host or terminal server is a server-based solution.
When you deploy a traditional virtual desktop infrastructure environment, you have dedicated VMs for each user. With Windows 10 and Windows 11 multi-user, you can have a Windows client with the full Windows client experience, the full Windows client Office support, and several isolated sessions on this client host. This is unique to Azure Virtual Desktop, and as you mentioned, this will be available in the near future also on Azure Stack HCI. Yeah, thank you Manfred. The next topic is about SQL Server. If you run a SQL Server on Windows or on Linux — both are possible, because we have two different versions of SQL Server — then you can do it on Azure Stack HCI or you can do it outside of Azure Stack HCI. From the performance perspective, it does not make a huge difference right now. But Manfred, what are the benefits of running a SQL Server on Azure Stack HCI? Yeah, we talked a little bit about the HCI part in Azure Stack HCI, the hyperconverged infrastructure. But we also have Azure in Azure Stack HCI. Azure Stack HCI behaves like Azure; a workload sees Azure Stack HCI like Azure. And this opens all the hybrid scenarios for a SQL Server, where we can run parts of our SQL Server workloads inside Azure. Let's imagine, for example, you want to use things like Analysis Services in Azure. Then you can put these Analysis Services into Azure, based on SQL Server, and your traditional database workload remains on premises. And with the platform Azure Stack HCI, you are able to use all these Azure benefits not only for SQL Server in Azure, but also for SQL Server on your own on-premises infrastructure. And as a side note: as you know, SQL Server is not only available for Windows Server operating systems, it's also available for Linux operating systems. And Sven mentioned a few minutes ago that, for sure, Azure Stack HCI is also able to run VMs with a Linux operating system. Exactly.
And the next topic is trusted enterprise virtualization. We see on the screen some things like VBS, virtualization-based security, or HVCI. If we look at Hyper-V in Azure Stack HCI or outside of Azure Stack HCI on a classic Windows Server, right now we have very similar features. But to understand why Azure Stack HCI is especially interesting for trusted enterprise virtualization — with Secured-core server and the technology of the future — it makes sense to take a brief look at the strategy for both operating systems we have at Microsoft. We have Windows Server on one side and we have Azure Stack HCI on the other side. Both solutions include Storage Spaces Direct (on Windows Server only in the Datacenter edition). Both solutions include Hyper-V. But if you look at the details of that slide, you can see that Windows Server will be developed in the direction of being the perfect guest operating system — it's all about performance, running applications on that guest OS. Azure Stack HCI is the operating system with a strong focus on being the best virtualization platform today and in the future. So over the next quarters and years you will see a gap between those two operating systems, and Azure Stack HCI absolutely will be the perfect platform for everything that has to do with security, with virtualization, and things like that. And the last one is about Kubernetes, and Manfred, you added a picture as well. Is there something you want to say about AKS? Yeah, Azure Kubernetes Service — we have the A in the name, it's the Azure Kubernetes Service — and I think many of the attendees of this live stream know the Kubernetes service in Azure. And I mentioned this before with SQL Server and Azure Virtual Desktop: the great thing about Azure Stack HCI is that we can take several Azure workloads we know natively from Azure and put these workloads onto our on-premises hardware. So we have advantages like reduced latency and access via our local network.
But we get similar or identical management tools as we know from Azure, with scalability based on the hardware we provide in our environment. And this is exactly what Azure Kubernetes Service on Azure Stack HCI provides. And it's important to know that we don't need a dedicated cluster for each of these workloads we introduced. You can put all these workloads on a single cluster if you need all of them at a smaller scale. And if you have these workloads at a larger scale, then you can decide to have one Azure Stack HCI cluster for Kubernetes, another one for VDI, another one for SQL Server. It depends on your needs how you use this, but all these workloads fit together on one cluster. And as Sven mentioned, in the catalog you can select for these workloads and see which solution fits best, and you will find several solutions in the Azure Stack HCI catalog that can be used for all of the workloads. Yeah, perfect. We are running a little bit out of time, so that's why we don't spend so much time on this slide. But we want to show that Azure Stack HCI is very scalable. We already said it starts at two nodes, but it goes up to 16 servers per cluster and 4 petabytes (4,000 terabytes) of storage capacity. So a very, very scalable solution. So let's talk about planning for Azure Stack HCI workloads. Manfred, what about the planning? You mentioned already that we can start with a minimum of two servers and scale up to 16 servers. So it depends on your needs. Typically, the fault domain is the server. The fault domain is the entity that is used to build the redundancy, and the default is to build the redundancy across the servers. If you need more resources, you can have several of these clusters with up to 16 nodes each. Maybe you also decide to run several four-, five- or six-node clusters. These are the scalability terms you can work with. The number of CPUs and CPU cores is relevant because Azure Stack HCI is charged per CPU core per month.
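As a back-of-the-envelope sketch of this per-core billing model: only enabled physical cores are billed, so cores disabled for future growth drop out of the bill. The rate used here is a placeholder, not the actual Azure Stack HCI price:

```python
def monthly_cost(nodes, cores_per_node, disabled_cores_per_node=0, rate_per_core=10.0):
    """Illustrative only: monthly charge = enabled physical cores x per-core rate.
    rate_per_core is a made-up placeholder value."""
    billable_cores = nodes * (cores_per_node - disabled_cores_per_node)
    return billable_cores * rate_per_core

# 4 nodes with 32 physical cores each, 8 cores per node disabled
# (reserved for future growth, so not billed):
print(monthly_cost(4, 32, disabled_cores_per_node=8))  # -> 960.0
```

The sketch just mirrors the point made in the session: sizing the CPUs, and deciding how many cores to enable, directly determines the monthly cost.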
So it depends on the type of CPU you decide on and the number of cores what your costs are. What's important to know: you can disable cores on your physical hardware, because you're only charged for physical cores that are enabled. So you can plan for the future — you have seen this in my planning in the Azure Stack HCI Sizer, where I had a reserve for future growth. If I disable these cores, I'm not charged for them. Regarding the storage and the memory, we can use traditional memory, but we can also use the latest technology, the persistent memory modules, where we have high performance. We can use these persistent memory modules in a storage mode and in a memory mode; it depends on your scenario which one you decide on. And with persistent memory, you can build high-performance, cost-optimized solutions. On the next slide, we will see the specific requirements regarding the disk drives. Here we are focused on NVMes, SSDs and hard disks — I mentioned on the previous slide that we can also use persistent memory drives. It's important to understand that all the nodes have to be configured identically. In the picture on the slide, on the lower right-hand side, we have an "or" between these configurations. This means we can have systems with NVMe only, and then all the systems in the cluster have NVMe only; or we have a mix of NVMes and SSDs, but then all of these systems have to have exactly this mix. The technology in Azure Stack HCI, Storage Spaces Direct, decides automatically, based on the drive type, which drives are used for cache and which drives are used for capacity, for storing the data. So if we are using, for example, NVMes and SSDs, the NVMes will be used for cache and the SSDs for capacity. If we are using SSDs and hard disk drives, then the SSDs will be used for cache and the hard disk drives for capacity.
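The automatic cache/capacity split just described can be summarized in a few lines: the fastest drive type present becomes the cache tier, everything slower becomes capacity, and with a single drive type there is no cache tier at all. A small illustrative sketch (the function and speed ranking are our own, not an actual Storage Spaces Direct API):

```python
# Relative speed ranking used to pick the cache tier (fastest wins)
SPEED = {"NVMe": 3, "SSD": 2, "HDD": 1}

def tier_assignment(drive_types):
    """Sketch of Storage Spaces Direct's automatic tiering decision.
    Note: an HDD-only configuration is not supported in practice."""
    types = sorted(set(drive_types), key=SPEED.get, reverse=True)
    if len(types) == 1:
        # Single drive type (e.g. all-NVMe or all-SSD): no cache tier needed
        return {"cache": None, "capacity": types}
    # Fastest type caches for the slower capacity type(s)
    return {"cache": types[0], "capacity": types[1:]}

print(tier_assignment(["NVMe", "SSD"]))  # -> {'cache': 'NVMe', 'capacity': ['SSD']}
print(tier_assignment(["SSD", "HDD"]))   # -> {'cache': 'SSD', 'capacity': ['HDD']}
print(tier_assignment(["NVMe"]))         # -> {'cache': None, 'capacity': ['NVMe']}
```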
And if we are using NVMe only or SSD only, then we don't have a cache, because we already have a capacity tier with high-performance throughput, even for random streams — when random data is written and random data is read from this storage. What's not possible is a hard-disk-only configuration, but the other typical configurations we can see here on the slide. The number of drives has to be identical in each server. The type of drives has to be identical in each server. We can start with four flash-only drives, and we can scale up to typically 26 drives in standard servers, but there are also specific solutions in the market where we have more than 26 drives in each server. So, important on this slide again: all servers have the same drive types, all servers have the same number of drives, all drives have the same model and firmware version, all drives have the same size. When you plan a solution, you should work with this. If you replace drives, it might happen that you will not find an identical replacement for your drive. Let's imagine a hard disk drive fails, and the newer hard disk drive is a newer model and a larger drive size than the previous model — then you can replace this. You can replace an existing hard disk drive with a newer, larger model. You can replace an existing NVMe with a newer model with a larger size, but not with a smaller size; this is not possible. For sure, this additional capacity maybe cannot be used — it's stranded capacity then — but you can use these drives. The recommendation is to try to get a replacement with an identical drive. What's not possible is to replace a hard disk drive with an SSD. You can do this in the planning phase — when you are planning your Azure Stack HCI cluster, you can design your storage — but once you have designed your storage, you can add drives, but you cannot replace a drive with a different type of drive.
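The replacement rules just described boil down to two checks: the drive type must stay the same, and the new drive must be the same size or larger. A tiny hypothetical helper, purely for illustration:

```python
def valid_replacement(old_type, old_size_tb, new_type, new_size_tb):
    """Sketch of the drive replacement rules described in the session."""
    if new_type != old_type:
        return False  # e.g. replacing an HDD with an SSD or NVMe is not allowed
    # Same size or larger is fine (extra capacity may end up stranded)
    return new_size_tb >= old_size_tb

print(valid_replacement("HDD", 4, "HDD", 8))    # True: newer, larger HDD model
print(valid_replacement("HDD", 4, "SSD", 4))    # False: drive type changed
print(valid_replacement("NVMe", 2, "NVMe", 1))  # False: smaller size
```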
You cannot replace an NVMe with an SSD, and you cannot replace a hard disk with an NVMe. You also have to ensure that you have the same number and types of drives in each server. Yeah, and if you plan your Azure Stack HCI environment, it's very important to understand how Storage Spaces Direct works. We are talking about the cluster quorum, we are talking about the witness, and so on. On this page, Manfred added some pictures to make it a little bit more understandable. On the left side, you see a classic two-node solution — one two-node solution with and one without a witness. You can see that there are some votes. If you have a cluster with an even number of nodes, you can see that Azure Stack HCI, or Storage Spaces Direct, only gives out an odd number of votes. Why is this the case? Because if you had a two-node cluster without a witness and every node had a vote, and one node went down, then you would have one vote left. One out of two is not more than 50 percent, but you need more than 50 percent to keep the cluster running. That's why we integrate a witness. A witness has an additional vote, so you have three votes in your cluster. If one server goes down, you have two votes left, and two votes is the majority. That's why it's always important to have a witness in cluster solutions with up to four nodes. If you have five or more, then a witness is not needed any longer. Manfred, something you want to add? Yeah, also in a larger cluster, we should have this witness. When we are thinking about the stretch cluster scenario, for example — we will not go deep into this in this module — but if we stretch the nodes across different locations, and this is possible with the stretch cluster feature in Azure Stack HCI, we also have to ensure that we have a witness that guarantees a clear decision about how the failover should occur. You are right.
The relevance of the witness changes a little bit, but usually we configure a witness, because the cluster feature in Azure Stack HCI, and also Windows Server, is able to build a dynamic quorum — to build a dynamic set of the votes, to build a majority, to build the quorum. We have seen this on the slide in the three-node cluster: if the three nodes are up and running, the witness does not have a vote. If something changes, if a node fails, for example, the witness will receive a vote. This is the same situation in a four- or five-node cluster, so the recommendation is to always configure a witness. A witness can be a file share witness or a cloud-based witness. What's important — and where I often see misconfigurations in the field — the witness itself is not highly available. Please do not put your witness on a DFS; I have often seen this, please never do it. Don't put your witness in the SYSVOL folder. Don't put your witness on highly available storage. A witness is a single file share, or a witness is a cloud-based witness in Azure — this is the best choice if you don't want to care about how to configure or optimize your witness. Regarding the pool quorum, we have the situation that we need the majority of the node votes in the cluster to stay up and running, and we also need the pool quorum: we need the majority of the disk drives to keep the pool online. Depending on the number of nodes and disks you have, you can calculate how many simultaneous failures you can survive while keeping the pool quorum up and running. There's an important additional thing: the pool resource owner is one server in the cluster. This server has an additional vote, but this pool resource owner can change. If a server fails and it was the pool resource owner, then another server will become the pool resource owner. The pool resource owner is a little bit like the witness for the pool quorum, for the majority of disk drives.
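The vote arithmetic behind all of this can be sketched in a couple of lines: a cluster stays up only while the surviving votes are strictly more than half of the total, which is exactly why a two-node cluster needs the witness's third vote. An illustrative helper (the function is our own, not a cluster API):

```python
def surviving_majority(total_votes, votes_lost):
    """Does the cluster keep quorum after losing some votes?
    The remaining votes must be MORE than 50% of the total."""
    remaining = total_votes - votes_lost
    return remaining > total_votes / 2

# Two-node cluster without a witness: 2 votes, one node down -> 1 of 2 left
print(surviving_majority(2, 1))  # False: 50% is not a majority, cluster stops
# Two-node cluster with a witness: 3 votes, one node down -> 2 of 3 left
print(surviving_majority(3, 1))  # True: majority kept, cluster keeps running
```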
We have a special situation in a two-node cluster, because if one node in a two-node cluster fails, then we have lost 50% of our drives. In that situation, the remaining node is the pool resource owner; with 50% of the disk drives plus its additional vote, we still have the majority for the pool. On this remaining node, an additional disk can fail if we have configured nested resiliency. The cluster then switches into something like a single-node mode, where we can survive additional disk failures — where we would violate the pool quorum, but the pool stays online. This is a special situation in the two-node configuration; starting with three, four, five, and more nodes, we always have the situation that we need a minimum of 50% of the disk drives online to keep the pool online. So Manfred, what do we have to care about when we talk about networking? Yeah, networking is an important topic, because networking is also outside the servers. For sure we have the network interface cards inside the servers, but we usually have some switches where these network interface cards are connected. Sometimes we also work switchless in two-, three-, or four-node configurations; it's perfectly possible to have the storage traffic directly connected. And when it comes to the switches, we have to care about the technical feature set of the switches when we are using RoCE, RDMA over Converged Ethernet, because the recommendation from Microsoft — when you read through the docs articles about Azure Stack HCI — is that you should use RDMA-capable network adapters. And there are two options for RDMA: we can use RoCE, RDMA over Converged Ethernet, or we can use the iWARP implementation. Actually, Microsoft recommends using iWARP — not because it has some real technical advantages, but iWARP is a little bit, let's say, easier to configure.
It's easier to handle the iWARP configuration, because even if you haven't configured VLANs and you haven't configured traffic prioritization, iWARP will still work with very good performance. With RoCE, we need a lossless Ethernet, and to configure a lossless Ethernet, you need traffic prioritization, the prioritization of packets. This requires that we have a Data Center Bridging (DCB)-capable switch, and you have to check this regarding the switches. There are some recommendations for switches in the Microsoft docs, but there are additional switches in the market that are capable of this. So you should have a detailed look at this, and reach out to your preferred hardware vendor to discuss which one you should decide on. For sure, we have to ensure that we have enough bandwidth. The recommendation is to have a minimum of two 10-gigabit ports, and we can use additional ports. Typically today, if you are thinking about NVMe-only systems, I see more and more systems in the market that are using 100-gigabit network interface card ports, so you can also build these high-performance configurations with two or more nodes. When you think about stretch clusters — if you put your cluster nodes in different sites or different locations — it's important to ensure that we have a maximum of 5 milliseconds round-trip latency to have a synchronous deployment for these stretch clusters. We have synchronous deployments and we have asynchronous deployments; for a synchronous stretch cluster, a maximum of 5 milliseconds round-trip latency. Okay, so let's start with our knowledge check. As I mentioned at the beginning, we don't have all questions included in our live session today, so if you go to the website, you will find more of those questions if you would like to do them. But today we will start with question three.
So please go to the website I mentioned at the beginning, aka.ms/polls, or use the QR code, and then take your time to answer the first question. Question number one is: what is the minimum number of servers in an Azure Stack HCI cluster? Is it A) 1, B) 2, or C) 4? We talked about the minimum numbers, we talked about the maximum numbers, and I already mentioned we need some additional infrastructure. So here the question is: what is the minimum number of servers in the cluster? I mentioned we additionally need an Active Directory domain. Usually this Active Directory domain is provided by one or more domain controllers in your domain — usually we should have a minimum of two of these domain controllers. If we are using a file share witness, then we need a server outside the cluster that provides the file share witness. When we are running a cloud-based witness, this witness can be provided by Azure. So we will need additional infrastructure, but here the question is about the number of servers in the cluster. Yeah, our moderator Flo asked us to give you a kind of preview, so that's why we want to close the question now, and then we'll talk about the preview. I thought this information Flo has in the chat was still under NDA, but if he is asking for it... I think so, I think so. The correct answer for the moment is two. So two nodes is the minimum requirement we have right now. But what Flo mentioned in our private chat is that we are thinking about making it a little bit smaller, so maybe one day we can have a one-node solution. That's nothing that we have right now, and nothing that has been announced, but this is absolutely the strategy: to make it smaller and lighter. So we will see what's coming in the future with Azure Stack HCI. You will see a lot of new features coming over the next 12 months.
So that's what makes Azure Stack HCI so powerful: we are working on improvements, making it better, and adding solutions over time. So let's look at the second question. What is the maximum round-trip latency important for Azure Stack HCI with a stretch cluster solution? Is it one, five or 250 milliseconds? So what is your answer to that question? We talked about the stretch cluster, and the question here is about the active-active configuration. I think this is the interesting configuration for most customers: to have both sites active, to have workload on both sites, and to have the redundancy to survive a complete site failure, where we maybe have more than two nodes. Because as we have learned in the previous module and the slides, we can survive only two simultaneous failures in a traditional Azure Stack HCI cluster: we have to keep the node majority, we have to keep the disk majority, and we have to ensure that we don't violate the redundancy level. The redundancy level, starting with three nodes, can be a three-way mirror, so we can survive two simultaneous storage failures; in a stretch cluster, we can survive more. And there was the question about the active-active configuration. So the right answer to that question is five milliseconds, and I have already seen your answers. The first question has been answered correctly by everyone. Congratulations. And the second one by almost everyone. So we are on a very good way. Thank you for participating. We will have some more questions later this morning. So let's talk about deployment of Azure Stack HCI. Now we are no longer in the theoretical phase. Now it gets very practical, and before we go into the live materials, we want to share something Manfred is providing on his and on my YouTube channel.
So Manfred recorded around 30 hours of content — live demos, hands-on, how to do it step by step — to deploy Azure Stack HCI in the deployment specialist section, and he's talking about how to sell it for the technical sales specialists. There we go through all the details about the quorum, the storage pools and the witness and how it works. So you will learn exactly about all the details, and hybrid administrators are also part of this training. Here we have a bit.ly link, bit.ly slash AS-HCI-hands-on, and that takes you directly to a LinkedIn post of mine where I shared the full detailed agenda. So if you're interested, please use this for much, much more detailed information. But today we want to talk about deployment as well. So Manfred, your part. Yeah, so we will have a look at the deployment of Azure Stack HCI in some live steps. Before we start with this, the important thing is: if you decide for an integrated system, usually you have Azure Stack HCI pre-installed. You don't have it pre-configured, but you can start exactly with the step I will show you in the next live demo. So the Azure Stack HCI OS will be pre-installed. And if you want to configure Azure Stack HCI — we talked about the selection and the choice of systems based on the Azure Stack HCI catalog — to create the cluster, we have to join the Azure Stack HCI nodes to an Active Directory domain. Today, this is always an on-premises Active Directory domain. And we can run the deployment within the Windows Admin Center. The Windows Admin Center is a web-based management tool, a web-based management interface, and it is installed locally. Actually, there's a public preview where you can also run the Windows Admin Center inside Azure. But usually, today in production environments, you will run the Windows Admin Center on servers in your on-premises environment.
So let's have a look at the deployment process in the Windows Admin Center in a short live demo. We will not be able to cover all the steps. What we see here is a Windows Admin Center screen, and I already have several Azure Stack HCI clusters there, I already have several servers there. In the upper left-hand corner, there's an Add button. We can add different things here: we can add servers, we can add Windows PCs, we can add server clusters, we can add Azure VMs. And we want to create a new server cluster here with this wizard. This is exactly where we decide: creating a new cluster based on Windows Server is a possible choice, but we will decide for Azure Stack HCI. So when we switch to Azure Stack HCI, we have the choice between all servers in one site and servers in two sites. The latter will create a stretch cluster. If we have all servers in one site, we can start with two nodes here. And if we start with a stretch cluster, we need a minimum of four nodes. This is important to know. We talked about these minimum requirements, and we already talked about the maximum scale: with all servers in one site, we can have up to 16 nodes; with servers in two sites, we need a minimum of two nodes in each site. So when I click on Create, the wizard itself starts, and as I mentioned, we will not be able to go through all the steps. What you can see here: we have five different deployment sections. The first one is Get Started, where we check the prerequisites and add the servers. The wizard not only deploys, it also checks. It checks if the servers are domain joined — if not, it joins them to the domain. It checks if the features are installed: the Hyper-V role, failover clustering, data center bridging, data deduplication. Yes, data deduplication is available for volumes in Azure Stack HCI. This is checked here. We have a check if all the updates are installed. And here we have a step where we can also install hardware updates if we have an integrated system.
So step 1.5 is the installation of updates for the Azure Stack HCI OS. Step 1.6 is only available for integrated systems, where we can also get the firmware updates. And this is a great advantage, because we have firmware for SSDs, we have firmware for NVMe drives, we have firmware for our host bus adapters. You can do this manually on every certified Azure Stack HCI system; on the integrated system, the requirement for the hardware OEM is to provide these updates within this prerequisites process. Then the servers are restarted. At what point do we have to register Azure Stack HCI with Azure? Here, at step 6, when the cluster has been completely configured. We will have a look at the registration in a few minutes. So I will open a pre-configured cluster where we can see how to register this cluster in Azure. In this deployment wizard, we don't have to register the cluster, because the important thing is that we are charged per core per month, but we have 60 days of free trial of Azure Stack HCI. So when we are thinking about a typical deployment project, then typically a deployment partner will prepare the system. The OEM installed the Azure Stack HCI OS on our integrated system nodes, then we will receive the nodes, we will bring the nodes into the rack infrastructure at the customer site or place the tower servers into the server room, then we will do the cabling, and we will deploy the Azure Stack HCI cluster via this wizard. But during this time, where we will need several hours or maybe days if we add some pre-checks, we don't want to be charged, and we don't want to eat into the 60 days where we can run the cluster without being charged. This is the reason why this registration process in Azure is not part of the deployment, but an additional step afterwards.
Okay, but you have 30 days to register your cluster after deployment, and then you have the 60 days for free, so it's not charged — but within those 30 days you have to register your cluster. Yes, and if we don't register the cluster within the 30 days, we have some limitations; we will see this in two minutes. So we have these five steps: Get Started, then the network configuration including RDMA, then the clustering — we mentioned this, each Azure Stack HCI cluster is also a cluster configuration, similar to what we know from Windows Server clusters — then the storage, where we talked about the disk layouts. The SDN configuration is optional; it's the last step. So if you are planning with software-defined networking, you can configure it here in the wizard, but it adds some additional complexity. So you should evaluate if you really need software-defined networking, or if in your environment — where you're maybe only running virtual machines for a single site — you will maybe not need SDN. It takes about one to one and a half hours to finish this wizard. When we have a look at a deployed cluster, then we have this view here. This cluster was deployed by myself this morning — I started at something like 8 a.m.
and deployed this cluster for the show today. Here in the dashboard we can see there are no alerts, everything is fine, but this cluster is actually not yet registered with Azure. Still, I can work in this cluster. I can create volumes, so this is possible, the cluster is up and running. I can create these volumes in this cluster — we will not go into detail about the volume creation process, I only want to show you that this works. So new volumes are no problem, I can work on the drives, I can work on the updates, maybe I have to install updates. And now, when I've finished my pre-configuration of the cluster, it's important to know that I cannot put any workload onto this cluster except the volumes — maybe you see the volumes as a workload; I created volume one here. When it comes, for example, to virtual machines or to Azure Kubernetes Service, I will realize that this is not possible in this state. So when I try to add virtual machines, I will receive an error message saying that this is not possible. And this is not possible because I didn't register this cluster in Azure. So to put some workload on this cluster, I have to register the cluster in Azure. This can be done in the Windows Admin Center by clicking on this Register link. I select the Azure subscription ID I want to use, I can use an existing resource group or create a new one — I can use a Learn Live Azure Stack HCI Group 01, something like this — and I choose West Europe here in the list. I can also see that there's always an onboarding of the machines to Azure Arc; we'll talk a little bit about Azure Arc later in this module. So when I register this cluster, I can see this takes a while. We will not be able to wait for this, but I have a prepared cluster that is registered with Azure. This one here is already connected — cluster 60 is actually still registering, this one here is already registered. There we can see now I have a connection to Azure. When I go
to the settings in the lower left-hand corner — you actually can't see this because our pictures are in front of it, but in the Windows Admin Center in the lower left-hand corner you have these settings below the tools. When you click on this, you get the settings for the cluster, where you can configure the storage and so on, and we also have the Azure Stack HCI part. In the Azure Stack HCI part, we can review the Azure Stack HCI registration, and when we click on it, we see: okay, this cluster is registered with Azure, Azure Arc is enabled, and the important thing is it's also connected. The last connection was today at 3 a.m. This means, starting from today, for the next 30 days I have the full Azure Stack HCI functionality. If I lose the connectivity to Azure for some reason — maybe some interruption of my internet connectivity — my complete existing workload will remain up and running. Even if I lose the connectivity for more than 30 days, my virtual machines are still up and running, my volumes are accessible, and I don't have a limitation on my existing workload. But from the point in time where the Azure Stack HCI cluster has been disconnected for more than 30 days, I will not be able to put additional workload like virtual machines or containers onto this cluster. Perfect, it was very good to see this small demo. To give you some more time, I skipped the next question in our knowledge check, but to give you an idea what the question would have been about: it was how many days you have to register your Azure Stack HCI cluster in Azure, and I think everyone knows it's 30 days. And how much time do you have it for free? This is not a question, but it's something that's very important: you have 60 days for free, no matter if you use these 60 days for testing or for production. The first 60 days are absolutely free.
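The registration shown in the Windows Admin Center can also be done with the Az.StackHCI PowerShell module. A sketch — subscription ID, resource group name and region are placeholders, and you should check `Get-Help Register-AzStackHCI` for the exact parameter set of the module version you install:

```powershell
# Sketch: register the Azure Stack HCI cluster with Azure from PowerShell
Install-Module -Name Az.StackHCI
Register-AzStackHCI -SubscriptionId "<subscription-id>" `
    -ResourceGroupName "LearnLiveAzureStackHCIGroup01" `
    -Region "westeurope"
```

This is run once per cluster and, like the Windows Admin Center flow, also takes care of the Azure Arc onboarding of the nodes.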
So let's go to the next section of today's training, validate deployment of Azure Stack HCI. So Manfred, what do you have prepared for that section? Important is: I deployed an Azure Stack HCI cluster now, and maybe you would think about bringing your production workload to this cluster now. It's possible from the technical perspective, but the recommendation is to do some testing before you go to production. Because you must imagine: you have hardware — you should decide for hardware from the Azure Stack HCI catalog — and this hardware is validated by your preferred OEM, by your preferred server manufacturer. This validation is a standardized validation process based on the private cloud simulator provided by Microsoft, so the hardware should be absolutely good when you receive it. But the implementation you decide for is specific to your scenario. It depends on your environment: which cables you are using, how long the cables are, which switch you decided for, which patch level you have for Azure Stack HCI, what capacity you decided for, how much memory you have, how many virtual machines you will bring to this cluster. And there are great tools available that you can use for validating your environment.
We have seen that if you use the deployment wizard — and I mentioned this — then we already have several validation steps within this deployment wizard. Each step — configuring the nodes, configuring the network, configuring the storage — leads to a validation of what you configured there. So we already have very powerful reports that are created there. But you should additionally maybe do some checks in the network. There's a tool for validating the data center bridging configuration, Validate-DCB, to ensure that all the functionality is tested in your specific environment. So the first thing is the cluster configuration validation. This is done in the deployment process — this is what you can read on this slide. Then we have a validation in each step and an additional validation report. This is a very powerful, very intense report about everything in your cluster, including the nodes, the firmware level, the BIOS level, the firmware of your drives and so on. When we go to the next slide, we find information about cluster testing, and the information that for some of these steps we can use the traditional failover clustering tools. I would recommend that you always use the Windows Admin Center; additionally, or as an option, you can use PowerShell. Everything we are using in the Windows Admin Center is also available via PowerShell — so if you have a larger scale and maybe don't want to click through a wizard, or if you want to repeat specific configurations, you can use PowerShell instead. I would not use the traditional tools, because we don't have all the features in there, but as you can read in the Learn module, it's still possible. So the Failover Cluster Manager, including the validation wizard, is still available here.
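As a sketch, the cluster validation mentioned here looks like this in PowerShell; the node names are placeholders:

```powershell
# Sketch: generate a cluster validation report from PowerShell,
# e.g. before deployment or after adding a node (names are placeholders)
Test-Cluster -Node "hci-node-01","hci-node-02" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# The data center bridging setup itself can be checked with the
# separate Validate-DCB module from the PowerShell Gallery
```

Test-Cluster writes an HTML validation report you can archive as evidence that the configuration was supported at deployment time.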
If you add servers to your cluster, then you need to revalidate your environment. There are specific scenarios — they are all documented — where you have to rerun a validation, and this is possible in the Windows Admin Center, the Failover Cluster Manager, or PowerShell, to ensure that your updated cluster configuration is again fully supported. Manfred, we have a question in the chat. Oh, I didn't see this. Yeah: can we monitor de-duplication and compression rate from the Windows Admin Center? Yes. In the Windows Admin Center, when we go to the volume level, we can decide to configure data de-duplication for each volume. The de-duplication feature is installed by default by the wizard, and then we can decide on the volume level if data de-duplication is enabled or not. We can see the initial data amount that was written to the drive, and we can see the de-duplicated data amount on the drive. So yes, the de-duplication rate is shown in the Windows Admin Center, and also in the Server Manager if you want to use the traditional tool. Perfect. So let's go to the last slide in the section, and then we will have an additional test. Yeah. I mentioned you should ensure that everything works in your specific configuration. There's a great tool from Microsoft — maybe you have heard about VM Fleet. VM Fleet was updated last year to VM Fleet 2.0. If you worked with the previous version of VM Fleet: this new VM Fleet is much easier to deploy. The idea of VM Fleet is to bring a specific workload to your cluster to see how the cluster behaves with virtual machines on it that run a specific workload, based on a workload template inside the VM. This means you are not only configuring your cluster and trusting that everything will run, but you put an amount of VMs on it that is similar to your planned production workload.
So if you plan for 40 VMs in your cluster, you will use VM Fleet to deploy 40 VMs, and you can simulate a workload inside those VMs. You get a first result to see whether this works with your planned workload on the specific hardware, and you have a baseline showing how powerful your specific cluster implementation is. So the recommendation for production environments is to not only use VM Fleet to simulate your planned workload, but to also simulate the maximum workload you can put on this cluster — to be prepared if your number of VMs increases, if the number of your containers increases, if maybe you need additional volumes and capacity — and to see when you will reach the maximum I/O rate, the maximum throughput you can realize on this cluster. Perfect. So now again, it's your turn. Now you have to check your knowledge, and again, please use the same web page or the same QR code as before. We give you some time to answer the question: what tool can you use to validate cluster performance by using synthetic workloads? Is it Validate-DCB? Is it Test-Cluster? Or is it VM Fleet? Yeah, Manfred, anything to add while we wait for the answers? All the tools are valid tools and all the tools should be used, but only one of them is for validating cluster performance, and the important hint, I think, is the synthetic workload, because this is the only way you can simulate your real-life scenario. And while we wait for the results of the poll, I would absolutely recommend that you invest some time in testing your cluster, because in the field, the clusters that are validated and tested are usually the clusters where we don't see any issues — we already tested this in the deployment phase — and the clusters that are deployed without testing are usually the clusters where we then see: oh, we didn't plan for this workload, and things like this. So please spend some additional time in validating and evaluating your specific deployment.
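As a sketch, a VM Fleet 2.0 baseline run follows this pattern. The cmdlet names are taken from the VM Fleet 2.0 documentation published in the microsoft/diskspd repository, and the VHDX path and password are placeholders — verify both against the version you download:

```powershell
# Sketch: baseline a cluster with VM Fleet 2.0
Install-Fleet
New-Fleet -BaseVHD "C:\ClusterStorage\Collect\core.vhdx" -AdminPass "<vm-admin-password>"
# Runs a predefined sweep of synthetic workloads and reports the results
Measure-FleetCoreWorkload
```

Keep the report of this run: it is the baseline you compare against when the cluster later feels slow or when you plan to add workload.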
I see the first results of the poll, and it looks like the audience is not very sure what the solution is, because we have votes for all three. So maybe we spend one more second on what the solution really is. It's VM Fleet. So Manfred, maybe you want to add one more sentence about VM Fleet? Validate-DCB — I mentioned it's a valid tool. It validates the data center bridging configuration, and it validates whether the nodes are reachable on the network, whether we can ping them, whether this works. Test-Cluster is the PowerShell cmdlet to test the cluster and generate a cluster validation report — I mentioned we need this if we add a node. VM Fleet is the tool suite that adds synthetic workload to the cluster, simulating a specific number of VMs — and you can decide how many VMs that will be. This simulates the synthetic workload in a real-life scenario, so you see how your cluster will react when you put workload on it. Okay, perfect. So now we are entering the last section of today's live session. It's about integration of Azure Stack HCI into Azure, or connecting it with Azure. So let's look at the content: integrate Azure Stack HCI with Azure. Manfred, what do we want to integrate? We have already seen the Azure integration in the Windows Admin Center, and maybe we should switch to the Windows Admin Center, to the live environment, again to see how powerful this integration is. We have seen in the previous live demo in my Windows Admin Center that the cluster was registered with Azure, and maybe we can have a short look at the previous cluster. It was cluster 60. Now we can see this one is also connected to Azure. It took some time, because an agent was deployed to the cluster nodes, the Azure Connected Machine agent, to ensure that the Azure integration, the Azure Arc integration, works.
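On a cluster node itself, the registration and connection state mentioned above can also be checked from PowerShell. A sketch — the cmdlet is built into the Azure Stack HCI OS, but verify the exact output properties on your build:

```powershell
# Sketch: check registration and Azure connection status on a node
Get-AzureStackHCI
# The output includes the registration status and the connection
# status / last connected time relevant to the 30-day window
```

This is a quick way to confirm, during troubleshooting, whether a cluster that looks disconnected in the portal really has lost its sync to Azure.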
Now the question is: what's my advantage of having the Azure Arc integration, or the Azure integration itself? The first advantage of having this Azure integration, registration and connectivity is that my Azure Stack HCI cluster counts as Azure for the workload. This means I can use Azure benefits here. So I can have additional benefits like extended security updates for my cluster. I can use Azure Virtual Desktop on my cluster when it's available. I can use these Azure-exclusive workloads that are provided for Azure Stack HCI. And additionally, in the Windows Admin Center I can click on View Azure Resource. This brings me directly to the Azure portal. If I click there, I'm in the Azure portal and I can see my specific cluster here, my cluster 50. I can see the two nodes, and I can see these two nodes are connected. In my specific scenario, this is a virtual machine; in a real-life scenario, this would be a physical server configuration. Now I have several tools. Now I'm in the Azure portal, no longer in the Windows Admin Center. Let's zoom in a little bit on this Azure portal. I can use, for example, extensions for the servers. I can push additional things like Azure Monitor to the servers without directly interacting with the servers. I can use resources from Azure. I can directly deploy virtual machines to this cluster from the Azure portal — it's actually in public preview and works when I deploy the Azure Resource Bridge on-premises. So it's really, really great what's already possible there. And now, if you are thinking about one cluster, maybe it's interesting to have this in the Azure portal. But let's have a look at a list of several clusters. Let's switch to my Azure Arc view here. We can see that in Azure Arc we have different capabilities — we can add servers, SQL servers — and here I have an overview of my Azure Stack HCI clusters.
And because of this Azure Arc integration, I can see: oh, I have several clusters. I have some clusters that are connected, and I have clusters that haven't connected recently. So for example, this one here was last connected on the 11th of April, so I should check what's going on with this cluster. Everything is still up and running, because I'm within these 30 days, but I should find out why it hasn't connected since the 11th of April. These are all demo clusters in my environment that I'm not actually working on. And we can see our new clusters we are using today, cluster 50 and 60. Both clusters are connected, and connected recently, so I have all the functionality. And for sure, I can also jump to cluster 60, which I added in the last live demo, and see all the functionality, all the opportunities I have here. I can click through to a dedicated node and also see the update management, the logs, the monitoring, the change tracking, the security settings. You can also see the Windows Admin Center here at the bottom — providing this via the Azure portal is actually in preview. So Azure Arc, for me, is really the future of how to work in a hybrid world, because of this integration into Microsoft Azure. And now let's spend some additional minutes on these Azure benefits. The Azure Arc integration is default in Azure Stack HCI 21H2. If you started with Azure Stack HCI 20H2 — the first Azure Stack HCI version that was published — you didn't have this default Azure Arc integration; it was an additional step. Starting with Azure Stack HCI 21H2, it is automatically integrated in the registration process. And I mentioned: when we are connected to Azure, we can use these Azure benefits. To enable these Azure benefits, I will switch to another cluster.
I will switch to cluster number 20 here, because cluster number 20 is a cluster where I installed the latest updates — the latest Azure Stack HCI updates are required to enable the Azure benefits via the Windows Admin Center. So I will go to the settings in this cluster as well. It's a similar configuration to the cluster 60 we have seen before, but here the latest updates are installed, and now we are able to configure the Azure benefits. With the Azure benefits, we ensure that our workload sees that our cluster is and behaves like Azure. So the first step is to activate the Azure benefits for the cluster. We can see that on this cluster 20 this is already done, so this cluster is already onboarded with the Azure benefits. The expiration date is the 28th of May, so 30 days from today — always these 30 days — but these 30 days will be extended: for sure, my cluster will also be connected tomorrow and the day after tomorrow, and so this expiration date will keep moving into the future. Now we come to the VM level. On the VM level, we can decide for each VM whether we want to activate the benefits for that specific VM. So we could, for example, go to VM 01 or 02 or 03 and decide to enable the Azure benefits, to ensure that inside this virtual machine we can use the extended security updates, or, for example, use the Windows Server 2022 Datacenter Azure Edition. And to be sure that this works, you also have to decide how to activate your Windows servers. It's all actually behind our pictures in the Windows Admin Center — I can try to zoom in to bring this to the front. The activation of the Windows Server VMs is also in the settings of the Azure Stack HCI configuration here.
And here we can see we can set up the activation of the virtual machines, because for sure, also on Azure Stack HCI, we have to ensure that we have licensed our server accordingly to be allowed to run this virtual Windows Server workload. And there are two ways to handle the licensing. We can use our existing Windows Server licenses. When we use our existing Windows Server licenses, maybe from the OEM channel or the volume licensing channel, we can use the Azure benefits on Azure Stack HCI like extended security updates. If we want to use the Datacenter Azure Edition, we have to ensure that our existing Windows Server licenses are covered with Software Assurance. If we decide to purchase Windows Server licenses via Azure, we have all the usage rights included, and we have a subscription-based model with an additional fee per physical core per month to cover the Windows Server subscriptions on an unlimited scale. It's comparable to what we have in the bring-your-own-license scenario when we are using the Datacenter license. So here, traditional licensing with some differences depending on whether we have Software Assurance or not; and in the Windows Server subscription option, we have the Azure benefits included. But to bring them to the cluster from the technical perspective, we have to activate the Azure benefits in the Azure Stack HCI configuration in the Windows Admin Center. Anything more to add? Yeah, one sentence. To be able to manage the virtual machines from the Azure portal, we have to configure the Azure Resource Bridge. This is actually not covered in the Learn module, because it is brand new and actually in preview. So these two steps have to be configured to be able to use this deployment of virtual machines from the Azure portal on your Azure Stack HCI cluster. Yeah, thanks. Two very powerful tools: we have the Windows Admin Center, and we have Azure Arc.
And the good thing is all those tools are still evolving. So we get a lot of cool new features every month, every quarter. It's not final yet, and it's a lot of fun to look into these tools every quarter to see what's new. Yeah, very cool. So let's go back to the presentation and summarize what we learned about Azure Arc and Azure Stack HCI. So how does Azure Stack HCI benefit from the integration of the Azure hybrid services? First we have Azure Monitor. Maybe you can summarize what Azure Monitor brings to Azure Stack HCI. Yeah, many customers are asking: okay, my Azure Stack HCI cluster has disk drives, it has CPUs, it has networking — how can I ensure that everything is up and running without always looking at the Windows Admin Center? And the answer is: use Azure Monitor. It can be deployed from the Windows Admin Center, and it can be deployed from the Azure Arc integration in the Azure portal. And we can configure email alerting in Azure Monitor to receive a message if there's maybe a critical resource situation — maybe we are running out of capacity, or maybe a disk drive fails, and things like this. That's Azure Monitor, and I would always onboard it. Azure Monitor is an Azure service that is charged pay-per-use, but we have a free amount of log capacity in Azure Monitor that usually is enough to handle several smaller clusters without additional costs. How about Azure Backup? Yeah, Azure Backup is a great thing, because we have full redundancy in Azure Stack HCI — when we think about a stretch cluster, we have redundant sites — but you never have a 100% guarantee for your services. So even with a stretch cluster, you can run into a situation where you have downtime or data loss. If it's not because of a technical issue, it might be because of user errors, where maybe a user deletes some data.
And Azure Backup is a great technology where we have an off-site backup to the Azure data center. Azure Backup can back up virtual machines, it can back up physical hosts, and it can be used in combination with the System Center suite, with the System Center Data Protection Manager. If we don't have System Center in our environment, we can use the Azure Backup Server, which is provided via Azure as a free download for our environment. So Azure Backup is really great: an off-site backup of our workload for the worst case, and archiving of our data if you need some long-term retention. And what is Azure Site Recovery good for? Yeah, Azure Site Recovery is, let's say, an extension, because with Azure Backup we back up data to Azure and can recover data from Azure, while with Azure Site Recovery we can decide to switch our workload over to Azure in the situation of a disaster on-premises, because we replicate virtual machines. If you are familiar with Hyper-V Replica, you can imagine how Site Recovery works. Azure Site Recovery works on the VM level in an Azure Stack HCI cluster and brings virtual machines to Azure, to allow us to fail over to the Azure data center in case of an emergency — the outage of our whole cluster on-premises — and have the backup workload running in the Azure data center. Perfect, so we are at the end, or almost at the end, because we have one more question and then some closing. Again, go to aka.ms slash polls or use the QR code and take some time to read through this question. There's a lot of text, but the question is about which disaster recovery scenario is not supported with Azure Site Recovery in Azure Stack HCI scenarios.
Answer A is disaster recovery of Hyper-V VMs managed by System Center Virtual Machine Manager from an Azure Stack HCI cluster to Azure with Site Recovery based replication; B is the same but for VMs not managed by System Center Virtual Machine Manager; and C is disaster recovery of Azure VMs to Azure Stack HCI by using Site Recovery based replication. So what is not supported: that we replicate from Azure Stack HCI to Azure, with or without System Center Virtual Machine Manager, or to replicate and recover Azure VMs to Azure Stack HCI? I think it's important to understand that Azure Site Recovery can be configured in Windows Admin Center, so Windows Admin Center is our tool for doing it; we are not required to install System Center Virtual Machine Manager. And I think it's also important to understand that we have Azure Site Recovery to replicate virtual machines to Azure, and if we switch our workload to Azure because of an outage on premises, we can run our workload in Azure and, for sure, we can switch back. But the initial action is to replicate the machines to Azure. Okay, perfect. So the correct answer is C; this is not supported. Of course you can do disaster recovery with or without System Center Virtual Machine Manager, this doesn't make a difference. But what's not possible is to use Site Recovery based replication from Azure to Azure Stack HCI. So now we are at the end for today. We talked about how to plan, deploy, validate and integrate Azure Stack HCI. Hopefully you had a lot of fun and learned something, and we invite you to do the training on our website as well. We will talk about that a little bit later. First of all, we want to invite our German-speaking guests to the Azure Stack HCI show, which Manfred and I are running every second week on our YouTube channels. So if you go to youtube.com slash Manfred Helber or slash Sven Langenfeld, you see all the content we are providing.
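To capture the knowledge check above in code: Azure Site Recovery replicates VMs from Azure Stack HCI to Azure (with or without System Center Virtual Machine Manager), but not from Azure back to Azure Stack HCI. A minimal sketch, using illustrative names that are not an official API:

```python
# Summary of the supported Azure Site Recovery replication directions for
# Azure Stack HCI, as discussed in the knowledge check above.
# Function and scenario labels are illustrative, not an official API.

def asr_replication_supported(source: str, target: str) -> bool:
    """Site Recovery based replication only goes from Azure Stack HCI to Azure."""
    return source == "azure-stack-hci" and target == "azure"

# Supported: HCI to Azure, with or without System Center Virtual Machine Manager.
print(asr_replication_supported("azure-stack-hci", "azure"))   # True
# Not supported (answer C): recovering Azure VMs to Azure Stack HCI.
print(asr_replication_supported("azure", "azure-stack-hci"))   # False
```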
Most of it is unfortunately in German, some is in English, but the Azure Stack HCI show is in German. Tomorrow, German time 12 to 1 p.m., we will have the Azure Stack HCI show, so maybe you want to join us. And then it's about learning more. It's not only about learning how to plan and deploy Azure Stack HCI; we have some more content, and here you can see the link where you get the additional content. And we have some additional support if you have customers, or if you are a customer, and you want to test or deploy Azure Stack HCI. We have a team that supports you, and our moderator Flo Fox is part of this team as well. He asked me to talk about the Azure FastTrack team for a minute, because the Azure FastTrack team is a free offer from Microsoft. If you want to deploy Azure Stack HCI, you can search in your browser for Azure FastTrack and you will find a landing page with more details on how to nominate your customer for specific support from this team. All the members of the Azure FastTrack team have a lot of practical knowledge about Azure Stack HCI and can help you get your solution up and running quickly and seamlessly. So today we talked about plan and deploy Azure Stack HCI. Thank you Manfred; for me it was a lot of fun, and I hope for you too. Absolutely, thank you Sven. Yeah, a great and important topic. So the recommendation for the attendees is to go through the Learn Live module that's available via this link, and also do some preparation in a demo environment, like I did in a test environment. For testing and evaluation it's perfectly fine to use virtual machines, not for production, but for testing it's possible to evaluate Azure Stack HCI in virtual machines as well. Have you done those learning paths about Azure Stack HCI?
Yeah, I've done several of these learning paths, because most of the content is related to docs.microsoft.com, and as you know I always use docs.microsoft.com for my preparation and for getting my knowledge, and this is also covered by the Learn Live modules. Yeah, that's a very useful recommendation to use docs.microsoft.com. I get a lot of questions from my team members, from partners and from customers about every solution from Microsoft, and 100% of my answers are coming from docs.microsoft.com. So instead of asking me, it would be much easier to go to docs.microsoft.com and look for the correct answer. It's a very powerful library, and hopefully you use it as well. And then at the end we want to invite you to the next session of Learn Live. It's about plan and deploy Azure Arc-enabled servers at scale. If you are in the German time zone it's today at 7 p.m., and from the worldwide perspective it's 10 a.m. to 11:30 a.m. Pacific time. So have fun with this session as well, and I say thank you, goodbye, and hopefully we'll see each other at some other events, maybe Ignite. I heard that Ignite might be a live event or a hybrid event, and that would be a good opportunity to meet in person. So thanks a lot and bye bye. Thank you for your time. Enjoy Azure Stack HCI. Bye.