Welcome to What's New in Windows Server 2025 Failover Clustering. My name is Rob Hindman, and I'm a program manager in Windows Server. On the agenda today: what failover clustering is, the feature parity we're achieving in 2025, cluster OS rolling upgrade, workgroup cluster VM live migration, GPU-P VM live migration, AccelNet for highly available VMs, campus clusters, and finally our call to action.

So what is failover clustering? Failover clustering is a technique you can use to achieve high availability for almost any application or service running on Windows Server. A failover cluster is simply a collection of Windows Server machines running the failover clustering feature, configured with the right workload, identity, compute, storage, and networking to provide high availability for your applications. In the case of VMs, a failover cluster will automatically fail over a VM from one host to another for you. The great thing about failover clustering is that you get this high availability with zero data loss and zero data corruption: failover clustering maintains your state while avoiding both.

In Windows Server 2025, we're bringing over a number of features from Azure Stack HCI that our customers really like. The first is the feature update from Windows Update: you'll be able to upgrade a Windows Server 2022 failover cluster to a Windows Server 2025 failover cluster using the Cluster-Aware Updating (CAU) rolling upgrade plug-in. We're also bringing you single-node cluster support and stretch clustering across two sites, and Network ATC and Network HUD are coming to Windows Server 2025 as well.

That first feature is notable and important: you can upgrade your failover cluster without any downtime. We're excited because the Cluster-Aware Updating rolling upgrade plug-in lets you seamlessly upgrade your cluster with a single line of PowerShell (sketched just before the demo below).

Next, I'd like to talk about workgroup cluster VM live migration. We've supported workgroup clusters for a long time, but they didn't work with VM live migration. A lot of customers said, hey, wouldn't it be great if we could lower the amount of infrastructure needed for a workgroup cluster and still get VM live migration? Now we can bring that to you. Without Active Directory, you use local accounts on each node to configure the cluster, and the cluster then uses self-signed certificates with PKU2U authentication to handle everything needed to move a VM from one host node to another without Kerberos. We're really happy with this.

It's relatively easy to build a workgroup cluster. I built the one I'm going to show you in the demo in a moment, and it was very easy; the syntax is pretty much the same syntax you'd always use to build a failover cluster. So it's pretty much just configuring the host systems, then New-Cluster, and, since I'm building an S2D cluster, Enable-ClusterS2D. Then we create the VMs and watch a live migration. A sketch of that sequence follows.
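For reference, here's a minimal sketch of that build sequence. Account, cluster, node, and VM names are placeholders, and the registry and DNS-suffix steps are the usual workgroup-cluster prerequisites rather than anything shown on screen.

```powershell
# Run on every node: create a matching local administrator account.
$password = Read-Host -AsSecureString -Prompt "Password for clusteradmin"
New-LocalUser -Name "clusteradmin" -Password $password
Add-LocalGroupMember -Group "Administrators" -Member "clusteradmin"

# Allow remote administration with local accounts (required without a domain).
New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System `
    -Name LocalAccountTokenFilterPolicy -Value 1 -PropertyType DWord -Force
# Each node also needs a primary DNS suffix so the nodes can resolve each other.

# From one node: create the cluster with a DNS administrative access point,
# enable Storage Spaces Direct (alias: Enable-ClusterS2D), and later
# live-migrate a VM between the nodes.
New-Cluster -Name "WGCluster" -Node "Node13","Node15" -AdministrativeAccessPoint DNS
Enable-ClusterStorageSpacesDirect
Move-ClusterVirtualMachineRole -Name "VM1" -Node "Node13" -MigrationType Live
```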
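And here's the general shape of the CAU one-liner I mentioned for the rolling OS upgrade. This is a sketch assuming the Windows Update plug-in; the exact plug-in arguments that pull the 2022-to-2025 feature update are not shown here.

```powershell
# Cluster-Aware Updating run against the cluster: CAU drains, updates, and
# resumes each node in turn, so the workloads stay online throughout.
Invoke-CauRun -ClusterName "CL1" `
    -CauPluginName "Microsoft.WindowsUpdatePlugin" `
    -MaxFailedNodes 0 -MaxRetriesPerNode 3 `
    -RequireAllNodesOnline -Force
```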
So let's go ahead and do that now. Here in the demo is the cluster I just built. You can see it's a humble little two-node cluster with no Active Directory domain controller, so it's a workgroup cluster, and it's an S2D cluster as well. I have my virtual disk and a single VM running on that virtual disk. What I'm going to do is move that VM from machine 15 to machine 13.

What I want to emphasize in this demo, however, is that the way you connect to the VM is the important part. I very intentionally set this up so that I'm connected to the VM in two different ways: the Hyper-V Manager VM connection in the lower part of the screen, and a Remote Desktop connection. You can see that the Hyper-V Manager connection disconnected as soon as the machine was moved to the other host, but the Remote Desktop connection kept running. That's the point I want to emphasize: while VM live migration works great on a workgroup cluster, you always have to be careful about the credentials and the connection method you use to access the VM. Okay, there we go; the virtual machine is back on the host I'm connected to, and the Hyper-V Manager connection works fine again. The moral of the story: just be careful with your credentials when you're working with a workgroup cluster.

Next, I'd like to talk about GPU-P VM live migration for graphics and AI/ML workloads. We're very excited about bringing this technique to customers in Windows Server 2025. There's a lot of interest in both graphics rendering workloads and AI/ML compute workloads, which leverage GPU cards. GPU-P (GPU partitioning) is a technique that divides a GPU up so it can be shared by multiple VMs. In this example, I have two Hyper-V host machines with GPU cards in them. Since I have two VMs that use GPUs, I'm going to create two GPU partitions on each host machine, for a total of four GPU partitions across the two machines, and you'll see the VMs live migrate without any issues (the partition setup is sketched after this demo).

So let's watch this happen on a demo system. This is a normal two-node cluster built on fairly nice host machines using AMD EPYC CPUs. This is a fully loaded host running about 20 VMs, and two of those VMs are using GPU partitions. Let's connect to those VMs; again, I'm just going to use the Hyper-V Manager connection. Remember, these are fully loaded hosts, busy running a lot of other VMs. We're going to migrate both GPU VMs from host machine one to host machine three, and the punchline is that the VMs are not interrupted at all.

If we zip through the video to about this point, you can see a little bit of flicker during the blackout period: the VM connection flickers, but the VM itself keeps running, so the state of the application is maintained. If this were a computationally expensive AI training run, the great thing is that you would not lose the state and would not have to start over; you could move the VM and it would maintain its state as it moved from one host to the other. So there it goes; you saw the flicker. And that's the punchline: on two fully loaded host machines, we demonstrated live migration of two GPU-P VMs, and the workload ran continuously throughout.
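For reference, here's roughly how the partitions in a demo like this would be configured. This is a sketch assuming one partitionable GPU per host; the VM name is a placeholder, and the MMIO values are typical examples rather than the exact demo settings.

```powershell
# Host side (run on each node): carve the GPU into two partitions.
$gpu = Get-VMHostPartitionableGpu | Select-Object -First 1   # assumes one GPU
Set-VMHostPartitionableGpu -Name $gpu.Name -PartitionCount 2

# VM side: attach a GPU partition and configure the MMIO space GPU drivers need.
Add-VMGpuPartitionAdapter -VMName "GpuVM1"
Set-VM -VMName "GpuVM1" -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 1GB -HighMemoryMappedIoSpace 32GB
```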
All right. Next, I'd like to talk about AccelNet for highly available VMs. AccelNet (accelerated networking) is very popular in Azure, and we're bringing the technique to our on-premises customers. You can use it for on-premises VMs with very high networking requirements, because it reduces the processing burden on the host machines and gives you much faster networking through the bypass: the virtual NIC uses the capabilities of the physical NIC directly through SR-IOV, so the VM essentially gets more of the physical NIC's capability (the underlying mechanism is sketched at the end of this transcript).

Next, I want to talk about campus clusters. In Windows Server 2025, we'll be supporting S2D campus clusters. A campus cluster is a cluster whose nodes are separated into two different racks; often this is implemented in factories where the racks are in two different rooms. We use S2D replication to keep the state of all the applications running in the cluster consistent. We're implementing campus clusters in two phases. First, with the existing S2D resiliency we have today, you can achieve four copies of your data on a two-node system with a nested mirror, or three copies of your data with a three-way mirror. In Windows Server 2025, we hope to add a four-way mirror so you can also achieve four copies split between the two racks, specifically four copies on a four-node system with one copy landing on each node of the four-node cluster (a configuration sketch also follows at the end).

Okay, finally, our call to action: we would love your feedback on the features we're bringing to you in Windows Server 2025. We're bringing over features from Azure Stack HCI: the feature update from Windows Update, single-node clusters, and S2D stretch clusters. We're bringing cluster OS rolling upgrade to Windows Server 2025 using CAU, so you should be able to seamlessly upgrade to a Windows Server 2025 cluster without any downtime. We're bringing workgroup cluster support for VM live migration, and we're bringing VM live migration to clusters that use GPU-P. We're bringing AccelNet for VMs to Windows Server 2025, and we're bringing campus clusters. I'd like to encourage you to try these features out in Windows Server 2025 and let us know what your feedback is; we'd really like to know. So that's all I have for you. Thank you very much.
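As promised, two reference sketches. First, AccelNet: it rides on SR-IOV, where the VM's virtual NIC maps onto a virtual function of the physical NIC and bypasses the host's virtual switch data path. This sketch shows only the underlying SR-IOV enablement (the switch, adapter, and VM names are placeholders); Windows Server 2025 layers AccelNet management on top of this.

```powershell
# Create an external virtual switch with SR-IOV enabled (must be set at creation).
New-VMSwitch -Name "AccelSwitch" -NetAdapterName "pNIC1" -EnableIov $true

# Give the VM's virtual NIC an SR-IOV virtual function
# (IovWeight 0 disables the feature; 1-100 enables it).
Set-VMNetworkAdapter -VMName "FastVM" -IovWeight 100
```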
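Second, campus clusters: a sketch of describing the two racks as fault domains so S2D spreads data copies across them, plus the nested (four-copy) mirror available today on a two-node system. Rack, node, tier, and volume names and sizes are placeholders.

```powershell
# Describe the racks as fault domains and place each node in its rack.
New-ClusterFaultDomain -Name "Rack1" -Type Rack
New-ClusterFaultDomain -Name "Rack2" -Type Rack
Set-ClusterFaultDomain -Name "Node1" -Parent "Rack1"
Set-ClusterFaultDomain -Name "Node2" -Parent "Rack2"

# Nested mirror on a two-node S2D cluster: a tier that keeps four data copies,
# then a volume built on that tier.
New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName "NestedMirror" `
    -ResiliencySettingName Mirror -MediaType SSD -NumberOfDataCopies 4
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -StorageTierFriendlyNames "NestedMirror" -StorageTierSizes 500GB
```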