Hi everyone. Good morning, good afternoon, and good evening depending on where you're joining us from, and welcome to the Windows Server Summit session on host networking at the edge. Let me quickly introduce the team. Anubhan leads the Edge networking team, and all the work we will cover in this session falls under different members of his team. Basel is a PM on AccelNet, along with program areas like switch validation and NIC certification. The third one there is myself; I'm a PM for Network ATC and Network HUD. And lastly, Kyle is a senior PM on our team with a lot of experience in data plane technologies, and he currently focuses on SDN. With that, we jump into the agenda slide. In the initial part of this presentation, I will go over Network ATC and Network HUD, after which Basel will take over and cover the NIC ecosystem and accelerated networking sections of this presentation.

Before I dive into the details of Network ATC, let me begin by going over some of the common networking problems we observed. In the pre-Network ATC era, host networking was time-intensive, complicated, and prone to errors and misconfigurations. Having observed these problems for quite some time, we built Network ATC to resolve them and bring ease and peace of mind to all our customers deploying their host networks. Here we have a look at an average networking setup that a customer would have to work through. For each adapter, you have to set properties like RSS and VMMQ. You also have to set the MTU (jumbo packet) sizes across all your NICs and verify that all your adapters are symmetric. For virtual switches, you have to verify NIC teams, set up your storage vNICs, and set your priorities correctly. Finally, you also have to set and confirm all DCB properties for your storage networks, along with the specific properties needed for switched versus switchless systems.

On screen now, we have an example of just some of the commands you would have to run in your data center to set up this configuration (a fuller sketch follows below). This includes setting up your VM switch team, setting adapter advanced properties, and renaming adapters for consistency. Here we see a continued list of commands, namely Set-NetIPAddress for your desired IPs and setting your desired VLANs on your NICs and virtual NICs, among many others. You would then have to repeat this on every NIC, every virtual NIC, and every virtual switch on every node in your cluster, and eventually on every cluster in your fleet. And yet none of this gives you any peace of mind regarding support requirements, following Microsoft recommended best practices, or whether your configurations are drifting away from their intended goal state.

So how does Network ATC solve the complexity we just saw? Network ATC does this by using intent-based deployments. If you tell Network ATC how you want to use an adapter, it will translate, deploy, and manage the needed configuration across all nodes in the cluster. Network ATC reduces host networking deployment time, complexity, and errors. It deploys the latest Microsoft-validated and supported best practices, ensuring configuration consistency across your cluster. Finally, Network ATC also eliminates any configuration drift that occurs after deployment.
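To make the earlier point about manual configuration concrete, here is a rough sketch of the kind of per-node commands just described, to contrast with the single command that follows. It uses standard inbox cmdlets, but the adapter names, VLAN ID, IP address, and bandwidth value are placeholders rather than values from any demo environment:

# Create a SET (switch-embedded teaming) virtual switch over two physical adapters
New-VMSwitch -Name 'ConvergedSwitch' -NetAdapterName 'pNIC01','pNIC02' -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Add a host vNIC for storage and tag its VLAN
Add-VMNetworkAdapter -ManagementOS -Name 'vSMB01' -SwitchName 'ConvergedSwitch'
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'vSMB01' -Access -VlanId 711

# Enable jumbo frames and assign an IP address on the storage vNIC
Set-NetAdapterAdvancedProperty -Name 'vEthernet (vSMB01)' -RegistryKeyword '*JumboPacket' -RegistryValue 9014
New-NetIPAddress -InterfaceAlias 'vEthernet (vSMB01)' -IPAddress 10.71.1.11 -PrefixLength 24

# Configure DCB/PFC for RDMA storage traffic
New-NetQosPolicy -Name 'SMB' -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass -Name 'SMB' -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# ...and then repeat all of the above per adapter, per node, per cluster.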
As seen on screen, you can use one single command, the Add-NetIntent cmdlet, to deploy your intent-based networking. In case you need to override some of Network ATC's defaults, you can do so using the override options that Network ATC makes available to you. One quick thing to note here: Network ATC will not allow you to override a configuration in a way that lands you in an unsupported environment.

Now let's take a look at some key features of Network ATC. Adapter symmetry, or network symmetry, ensures your adapters are the same link speed, make, and model before we deploy them in the same intent. Here we have a demo illustrating adapter/network symmetry. In the 22H2 release of the Azure Stack HCI OS, Network ATC now checks your network symmetry. Network symmetry is critical to verify before deploying your cluster and your intent because it improves network stability across your cluster. Network symmetry is defined as all adapters having the exact same make, model, and speed. Jumping into the demo, we have a two-node cluster with nodes 511 and 512. Here we're deploying an intent with storage and compute using the adapters B2B01 and B2B02. As we notice on the screen, the intent request fails to submit. Going through the error message in red, we realize that this error occurred because we have one asymmetric adapter in the set of adapters we are trying to deploy with this intent. Looking at adapter B2B01 on host 511, we notice that this adapter has a different link speed and component ID from all the other adapters, which makes it asymmetric.

To figure out what is going on, we run the Get-NetAdapter command to check the details of all the adapters available to us. Right off the bat, we notice that pNIC01 refers to the QLogic adapter whereas B2B01 refers to the Chelsio adapter. We want these names interchanged so we can deploy both QLogic adapters in our intent. To do this, we use the Rename-NetAdapter cmdlet. Here I rename the B2B01 adapter to Unused1, and then we rename adapter pNIC01 to B2B01. Having done this, I run Get-NetAdapter again to make sure the list of adapters now looks exactly like we want it to. We can now observe that B2B01 and B2B02 have very similar descriptions and the exact same link speed. Given this, we anticipate that both adapters are symmetric and should successfully submit with the intent. Having confirmed this, we rerun the Add-NetIntent command with the same set of adapters on the same set of nodes and, voila, we have a successful deployment of the intent given the symmetric set of adapters we are deploying with it. The network symmetry feature in Network ATC checks for adapter symmetry not only on the local host but also across your intent, across your cluster, and across all the different nodes in your intent.
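Put together, the adapter-symmetry workflow from that demo looks roughly like the following. The adapter names mirror the demo, but the intent name is a placeholder, and the command lines should be read as a sketch rather than a capture of the session:

# Compare make, model, and link speed of every adapter on the node
Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed

# Swap the names so that both QLogic ports carry the names the intent will use
Rename-NetAdapter -Name 'B2B01' -NewName 'Unused1'
Rename-NetAdapter -Name 'pNIC01' -NewName 'B2B01'

# Re-submit the intent now that the adapters in it are symmetric
Add-NetIntent -Name 'Storage_Compute' -Storage -Compute -AdapterName 'B2B01','B2B02'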
So we just saw a demo of how adapter symmetry works with Network ATC. Now let's take a look at how Auto IP works. Auto IP for storage adapters is where Network ATC automatically assigns IP addresses for your storage pNICs or vNICs. Again, we have a short demo to show how Auto IP assigns IP addresses, subnets, and VLANs for your storage NICs.

Before I run the Add-NetIntent command, I go to node 511 and run an ipconfig command. We see that the storage adapters QLogic Storage 2 and QLogic Storage 1 have automatic private (APIPA) addresses before they have been added to an intent. We now switch over to node 512 and run ipconfig to confirm the exact same thing, and sure enough these adapters have APIPA addresses as well before being added to an intent. Now we run Get-NetIntent and Get-VMSwitch to confirm that we are starting from a clean slate and don't already have an intent or virtual switch deployed. Having confirmed this, we run the Add-NetIntent command to add QLogic Storage 1 and QLogic Storage 2 to the storage and compute intent. Looking at the output on the screen, we see that the storage VLAN specified for QLogic Storage 1 is 711, whereas that for QLogic Storage 2 is 712. Network ATC goes with the default values of 711 and 712 here since we did not pass in any storage overrides. To check the status of our intent submission, we run the Get-NetIntentStatus command. Network ATC is clearly still working in the background, since the output says provisioning and pending for each node. While Network ATC works in the background, we run the Get-VMSwitch and Get-VMNetworkAdapter commands as a sanity check to make sure Network ATC is deploying the things we would expect, and it all looks good. Now we run Get-NetIntentStatus again and see success and completed for each node in the intent, so we know Network ATC has finished deploying the intent and it has submitted successfully.

I clear the screen, navigate over to node 512, and run the ipconfig command. Looking at the IP addresses on the screen, we notice that the vNIC for QLogic Storage 1 is in the 10.71.1 subnet, whereas the vNIC for QLogic Storage 2 is in the 10.71.2 subnet. These subnets are consistent with the VLANs the vNICs are operating in, meaning the 10.71.1 subnet maps to VLAN 711 and the 10.71.2 subnet maps to VLAN 712, which adds a lot of ease and convenience for the end user. Network ATC has also run a duplicate address detection check in the background, so we know these are valid, accurate IPv4 addresses assigned to the storage vNICs. Here we have switched over to node 511 and run ipconfig. Looking at the IP addresses for the QLogic Storage 1 and QLogic Storage 2 vNICs, we notice that the addresses are IPv4 addresses, and they are consistent with their respective VLANs and with the IP addresses on node 512. We therefore have our storage vNICs accurately and consistently addressed throughout the intent, so the one last thing remaining is a quick data path test, and you can believe me that it works. We now ping the IP address of the QLogic Storage 1 vNIC from the other node, the ping successfully goes through, and voila. We have thus demonstrated the ease, convenience, and consistency of automatic IP addressing for storage adapters using Network ATC.
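For reference, the Auto IP flow needs nothing beyond the intent itself; the only knob most deployments touch is the storage VLAN list. A minimal sketch using the VLAN values from the demo — the -StorageVlans parameter and the vSMB alias filter follow common Network ATC conventions and should be treated as assumptions:

# Deploy a combined storage and compute intent; Network ATC picks the storage
# VLANs (711/712 by default) and auto-assigns the storage vNIC IPs and subnets
Add-NetIntent -Name 'Storage_Compute' -Storage -Compute -AdapterName 'QLogic Storage 1','QLogic Storage 2'

# Or pin the storage VLANs explicitly instead of taking the defaults
# (assumed parameter name; everything else stays automatic)
Add-NetIntent -Name 'Storage_Compute' -Storage -Compute -AdapterName 'QLogic Storage 1','QLogic Storage 2' -StorageVlans 711,712

# Watch provisioning finish, then check the addresses Network ATC handed out
Get-NetIntentStatus -Name 'Storage_Compute'
Get-NetIPAddress -AddressFamily IPv4 | Where-Object InterfaceAlias -like '*vSMB*'   # assumed vNIC name prefix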
As we come off the Auto IP demo, one important thing to mention is that we are also working on automatic IP addressing for switchless storage systems, and we will ship that as a new feature with Windows Server 2025. Finally, cluster network naming ensures consistency in the configuration and naming of all cluster networks configured by Network ATC. Here we have a demo going over the cluster network naming convention that Network ATC uses.

In the 22H2 release of the Azure Stack HCI OS, Network ATC makes a range of improvements to cluster networking. This video goes over cluster network naming, which is one of several of those improvements. Looking at the screen, I run the Get-NetIntent command to observe the intent we have deployed on our system. The name of the intent is storage compute, and we have two adapters, B2B01 and B2B02, deployed in the intent on each host. Having looked at this, I run the Get-VMNetworkAdapter and Get-NetIPAddress commands to check the vNIC setup and the respective IP addresses in our intent. After confirming the vNIC setup and their IP addressing, we run the Get-ClusterNetwork command to check out the cluster network naming feature. Looking at the output on the screen, we see four cluster networks, of which the top two are named and the bottom two are unnamed. Looking at the named cluster networks, the naming pattern is as follows: the name of the intent that created the cluster network, followed by the purpose of the cluster network, as well as the VLAN the cluster network is operating on. Moving on to the unnamed cluster networks, let us look at the second one. We observe that this cluster network's role is cluster and client, and we can reasonably infer that it is being used for management traffic. Since we did not ask Network ATC to take care of a management intent or management traffic, this cluster network shows up unnamed. Now, we still don't know what adapter or what cause is making the first unnamed cluster network pop up. To figure this out, we run the Get-ClusterNetworkInterface command. Looking at the output on the screen, we notice that we have an Unused1 adapter on node 511. If you recall from the network symmetry demo, this is the unused Chelsio adapter that has remained enabled. Since we are not using this adapter, let us disable it using the Disable-NetAdapter cmdlet. After disabling this adapter, we run the Get-ClusterNetwork command again, and now we have a much cleaner cluster network list.

Cluster network naming is thus a very logical and representative way of naming cluster networks. The name of a cluster network makes its purpose, and the name of the intent with which it was created, readily apparent, which in turn saves a lot of time. Having the VLAN information right in the name also helps in determining which adapters or vNICs are expected in the cluster network, which usually reduces debugging time and complexity and improves the end user's quality of life.

Moving on to some additional features, Network ATC now handles cluster settings, which include cluster network naming and live migration settings. Here we can take a look at the global cluster override screenshot and a list of all the properties that you can customize and that Network ATC now handles. Another highlight that Network ATC recently released is the ability to do quick and easy greenfield migrations. You need one command to replicate your lab configuration in your production environment down to the smallest detail, covering all overrides and all customizations.
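Going back to the cluster-network demo for a moment: the inspection and cleanup steps shown there use standard failover-clustering cmdlets, roughly as follows; the adapter name matches the demo, everything else is placeholder:

# List the cluster networks; the ATC-created ones carry the intent name, purpose, and VLAN
Get-ClusterNetwork | Format-Table Name, Role, Address

# See which adapter on which node sits behind an unnamed cluster network
Get-ClusterNetworkInterface | Format-Table Node, Name, Network

# The leftover Chelsio port is not part of any intent, so take it out of service
Disable-NetAdapter -Name 'Unused1' -Confirm:$false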
Here is how to use the Copy-NetIntent command to copy a tested, validated intent from one system to another with all its minute configuration details, like overrides and cluster network settings, staying exactly the same. For the sake of this demo, node 511 is our testing system and node 512 is our production system. I run hostname to make sure I am on the testing system. I then create a QoS override; we will later pass this as an override when we create the intent. This is to show how Copy-NetIntent takes care of all the finer details, like overrides, when copying over an intent. For this override we set the SMB bandwidth percentage to 70%. I now run the Add-NetIntent command with the previously created QoS override to create a combined compute and storage intent. I wait for a couple of minutes and then run Get-NetIntentStatus to make sure my intent has deployed successfully. We now go over to our production system and confirm that we don't have a pre-deployed intent. Here Get-NetIntent returns nothing, which confirms we don't have one. We hop back to our testing system and run the Copy-NetIntent command. Here we see the parameters needed for Copy-NetIntent, which are the name of the intent we are copying, along with the source computer and the destination cluster. We now hop over to our production system. After waiting for a bit, I run the Get-NetIntentStatus command to confirm that the intent has been copied over and deployed successfully to our production system. To confirm that our override was also copied successfully, we run Get-NetQosTrafficClass. The SMB bandwidth percentage is 70%, which matches the override we created and passed on our testing system. We reconfirm that our override was copied over by assigning our intent to a variable and inspecting its QoS policy override property. Here we again observe the bandwidth percentage field for SMB set at 70%.

And lastly, we added two new documentation pages. These are compiled from the frequently asked questions and common error messages we have seen over the past three years of ATC being in market for the HCI SKU. This will make deployments smoother, quicker, and more efficient in case you run into any issues while deploying your host networking setup.

Moving on to some more key features, we recently also released a Day-2 management UI for Network ATC. This UI lets you observe and manage your intents as well as any intent-based overrides. It also lets you manage cluster settings and live migration settings through the Windows Admin Center portal. The second highlight on this page is brownfield migration. The method of brownfield migration that we published does not need any downtime, and you can use all ATC-approved overrides to make the ATC-managed cluster match the cluster you had without Network ATC. Lastly, we are bringing detailed, granular event logging to Network ATC. This was implemented based on feedback we have received from all of you. Granular event logging will let you follow along with Network ATC's steps, which helps with quicker resolution of failures and less time spent in support queues.

Network ATC brings ease, consistency, and automation to your host networking configurations. ATC drift-corrects and remediates any inconsistencies or changes that move away from your intended goal state. Network ATC will always implement supported, Microsoft-recommended best practices, and you can access Network ATC cmdlets either through PowerShell or the Windows Admin Center.
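Before we move on to Network HUD, here is roughly what that greenfield-migration demo boils down to in PowerShell. The override cmdlet, property, and Copy-NetIntent parameter names below follow the Network ATC naming pattern described in the session but are written from memory, so treat the exact spellings as assumptions; node and intent names are placeholders:

# On the test system: build a QoS override that sets the SMB bandwidth reservation to 70%
$qosOverride = New-NetIntentQosPolicyOverrides            # assumed override-builder cmdlet
$qosOverride.BandwidthPercentage_SMB = 70                 # assumed property name

# Deploy the intent with the override attached
Add-NetIntent -Name 'Converged' -Compute -Storage -AdapterName 'pNIC01','pNIC02' -QosPolicyOverrides $qosOverride

# Copy the validated intent, overrides and all, to the production cluster
Copy-NetIntent -Name 'Converged' -SourceComputerName 'Node511' -DestinationClusterName 'ProdCluster'   # parameter names assumed

# On the production side: confirm the override came across
Get-NetQosTrafficClass
(Get-NetIntent -Name 'Converged').QosPolicyOverride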
While Network ATC brings ease, simplicity, and automation to your deployments, Network HUD will be your one-stop shop for operational and diagnostic issues. Any discrepancy between the host and the switch, any mismatch between your NIC properties and your Network ATC intent, and any outdated firmware in use on your system will be flagged by Network HUD. Network HUD is an always-aware diagnostic tool that looks over your system so you can fail early and fail inexpensively. Network HUD will be available to Windows Server customers from the Windows Server 2025 release, and it will ship in the form of an Arc extension. As mentioned earlier, Network HUD is a diagnostic tool that works hand in hand with Network ATC. It currently covers a range of issues, from adapter health and driver checks to missing ATC intents, misconfigured switch settings, and much more. Network HUD integrates with both Windows Admin Center and the Azure portal. Here we have two screenshots of how Network HUD faults appear on your screen: the one on the left is from Windows Admin Center and the one on the right is from the Azure portal.

Looking at some highlights from the previous content update: PCIe bandwidth oversubscription is an increasingly common problem where the link speeds of the ports on a card exceed the available PCIe bandwidth. Network HUD detects this issue early on and will alert you before the oversubscription leads to your cluster becoming unstable. I'm going to show you how Network HUD, a new feature in 22H2, can identify when you're unable to realize the full benefit of the hardware you've purchased due to a physical misconfiguration that limits the bandwidth of your network adapters. Here's a 22H2 Azure Stack HCI host with two virtual machines, VM01 and VM02. In the output from these PowerShell commands, you can see two VMs, each with one vmNIC attached to the same vSwitch. These two vNICs have been load balanced across the two physical NICs, pNIC11 and pNIC12, in the vSwitch team. This means that VM01 will use pNIC11, which should be capable of delivering around 100 Gbps worth of bandwidth, while VM02 will use pNIC12, which should also be capable of delivering around 100 Gbps.

Now let's take a look at the virtual machines to see how much traffic they can actually send. First we'll run a traffic test on each VM to ensure that each is capable of receiving around 100 Gbps. Here's VM02 receiving over 95 Gbps of traffic, and here's VM01 receiving a similar amount of traffic. Now let's see what happens when each VM attempts to receive this amount of traffic simultaneously. Remember that each VM is load balanced across physical adapters that are each capable of receiving over 95 Gbps. Now that VM02 has resumed sending traffic, there's an immediate drop in VM01 throughput. It's clear that the VMs are contending with one another and are bottlenecked on some resource. In this case, it's because pNIC11 and pNIC12 share a PCIe slot, slot 7, which does not have enough PCIe bandwidth to achieve maximum throughput on both ports simultaneously. To verify this, we'll log into the BMC for the server to see the PCIe settings for the device in slot 7. While slot 3 and slot 6 are both x16 slots, slot 7 is where the dual-port pNIC11 and pNIC12 reside. The adapters are bottlenecked by the PCIe bandwidth available to slot 7.
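For a rough sense of the arithmetic behind that bottleneck (the PCIe generation and lane count here are assumptions about the demo hardware; the calculation is the point):

# Usable PCIe bandwidth is roughly lanes x per-lane rate (~7.88 Gbps/lane for PCIe Gen3 after encoding)
$gen3PerLaneGbps = 7.88
$slotLanes       = 8                                   # assumed: the dual-port card sits in an x8 slot
$slotGbps        = $slotLanes * $gen3PerLaneGbps       # ~63 Gbps available through the slot

# Two 100 GbE ports on that one card can ask for up to 200 Gbps at the same time
$neededGbps = 2 * 100

"Slot supplies ~{0:N0} Gbps, ports can demand {1} Gbps -> oversubscribed" -f $slotGbps, $neededGbps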
Now, with the November 2022 Network HUD content update, Network HUD will inform you of this configuration issue. The Network HUD health faults integrate with the cluster health service, so there's nothing new you need to configure. As a result, you can use the rich experience you're already familiar with, enabled through Insights in the Azure portal. By clicking on the fault, we get a list of the details needed to determine next steps. For example, the faulting resource ID indicates the node that the fault occurred on. The description indicates the adapter that is affected, the Network ATC intents that have been affected, and the slot that is experiencing the problem. In this case, Network HUD is unable to automatically remediate the issue because it requires a hardware change, so it provides a recommendation for how you can solve the problem.

Network HUD also covers a flapping NIC scenario. A flapping NIC is an increasingly common occurrence, caused by the physical link disconnecting or the NIC being unstable, and it can manifest to users as higher latency, intermittent connectivity, or inconsistent application performance. Network HUD will always be aware of any abnormalities in your NIC performance and will alert you to any flapping NICs in your deployment. Here's a Windows Server 2022 virtual machine running on top of an Azure Stack HCI host. Task Manager shows the workload on this server is receiving a steady amount of traffic when, all of a sudden, a significant drop in performance is experienced before it returns to its expected levels. Some time later, additional drops in performance are experienced, but the application recovers and continues sending traffic. The inconsistent performance issues continue, only now the application is getting starved and is unable to achieve its previous performance levels. If your application is sensitive, you may notice performance degradation or even crashes of the application.

This situation can be very difficult to diagnose: while the symptom manifests inside the VM or container application, this behavior can be caused by a faulty switch port, cable, or physical NIC connected to the vSwitch team on the host, which causes the adapter to disconnect or reset. Each time the physical adapter disconnects, the virtual machine fails over to the remaining adapters in the virtual switch team, causing a drop in performance. Some time later the adapter reconnects, and the virtual switch load balances the workloads across the available adapters in the team. If the virtual machine lands on an adapter that has other virtual machines or containers competing for the adapter's bandwidth, you may find that the application is unable to achieve the desired performance levels. Worse still, an adapter experiencing this issue could also destabilize the cluster, for example if it's used for storage traffic.

Now, with Network HUD, we can detect and quarantine the faulty link, stabilizing the cluster and application performance. You can see that the adapter is listed in this Network ATC intent. As a result, Network HUD knows this adapter is in use and should be serving virtual machines, as identified by its compute intent type. Network HUD identifies that this adapter is unstable, as it's continuously disconnecting. The event log shows that the specific adapter, part of the VM network intent, has disconnected several times. The next message shows that the adapter has reached a disconnection threshold and that Network HUD considers this device unstable. Since there are several adapters in this intent, Network HUD can take action to stabilize the cluster, as shown in the last message. In this situation, stabilization means removing the adapter from service, since the only alternative would be to replace the faulty cable, switch port, or NIC.
This ensures that virtual machines and containers are no longer placed on the adapter that keeps disconnecting and causing a failover for the workloads, affecting their performance. In the output of the Get-NetIntent command, you can see that Network ATC has taken this action, as pNIC01 no longer appears in the list of Network ATC adapters for this intent. As a result, your application's network performance stabilizes. In addition to the event log, Network HUD adds health faults to inform you of unstable adapters, which can be viewed in the Azure portal. As a result, you can use the rich alerting experience you're already familiar with to be notified when Network HUD identifies an issue, enabled through Insights in the Azure portal. Lastly, once you've resolved the issue with the adapter, by either replacing the NIC or cable or moving to a new switch port, you can add the adapter back into service in the Network ATC intent. To do this, use the Update-NetIntentAdapter command and specify all the adapters you want in the intent; a rough sketch follows at the end of this section.

Network drivers can be a real tedious challenge to manage. Network HUD will also check for inbox and out-of-date drivers for you, to save time checking for drivers and troubleshooting in case of any failure.

We are planning to release a new set of scenarios and HUD functionality in a few weeks. Some of you may have already seen this, but here's a quick recap of what is coming. Network HUD will now detect misconfigured VLANs in your deployment, and these misconfigurations can be across your cluster, on the same node across a teamed set of adapters, or a VLAN mismatch for your workloads. It uses LLDP packets to confirm the VLANs are configured correctly and will alert you with an error if that is not the case. Network HUD will also bring functionality to detect any missing Network ATC intent on a host or a set of hosts in the cluster. This will alert you in case an intent fails or is missing and will prevent any unnecessary drift from the ATC intents. Lastly, Network HUD also checks for inconsistent PFC configurations. This prevents increased latency or S2D crashes, and Network HUD will use LLDP packets to make sure the priorities configured on the switch match the priorities configured on the host. Here we have a screenshot of a sample LLDP packet that Network HUD will use to confirm consistency of priorities across your host and your switch.

And with that, we're at the end of the Network HUD section of this presentation. As I said, Network HUD will be coming to Windows Server with the 2025 release. Network HUD produces health faults when it observes any inconsistencies across your host networking setup, whether between your switch and your ATC intents or between your NICs and your ATC intents. Network HUD enables you to fail inexpensively and fail early so that you don't have to spend a lot of time in support queues or lose time when critical applications fail. Network HUD will also bring in Azure Monitor for alerting, so stay tuned for our announcements.
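As promised above, returning to the flapping-NIC scenario for a moment: once the faulty cable, switch port, or NIC has been replaced, the adapter goes back into the intent roughly as follows. The cmdlet is the one named in the session, but the parameter names and the intent name are assumptions:

# Put the repaired adapter back into the intent by listing every adapter
# that should now be part of it (parameter names assumed)
Update-NetIntentAdapter -Name 'Compute' -AdapterName 'pNIC01','pNIC02'

# Confirm the adapter shows up in the intent again and that provisioning completes
Get-NetIntent -Name 'Compute'
Get-NetIntentStatus -Name 'Compute'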
With that, I hand it over to Basel to go over the NIC certification and accelerated networking sections.

Thanks, Parm. Hi folks, my name is Basel Koblawi, and today I'll be providing an overview of our NIC certification program before diving into one of the new upcoming features for Windows Server 2025: accelerated networking. So to start off, as we know, software is only as strong as the hardware that supports it. This is why we put together our NIC certification program for Windows Server. Our goal with this program is to ensure the compatibility and reliability of network cards for Windows Server deployments and solutions. These NICs are certified and organized by the traffic types they carry. As you can see, this mirrors Network ATC; there is consistency in the way we set up these hardware programs as well as in how we set up features to leverage what came before. To quickly highlight the three traffic types: we have management, which can be traffic such as Remote Desktop, PowerShell remoting, and Active Directory, just to name a few; we have compute, which is used to carry virtualized network traffic; and then there's storage, which is used for east-west traffic or anything using SMB.

In order to achieve this goal, we set out to work with our hardware vendors to ensure these devices are validated and certified in a really easy and smooth process. For NICs, this is done using the Hardware Lab Kit, or HLK for short, and I've included a screenshot of that on the left so you can see what the studio looks like for our vendors. Once vendors validate their devices, they appear in the catalog with the listed certifications, as seen in the image on the right. In the case of this NIC, we can see that it was certified for management and, in this case, storage; both storage and compute have a premium additional qualification, and for more information on that we can certainly share more detail, or the catalog itself will have it all listed. So by following this easy-to-use tool and by allowing vendors to validate and certify devices, we can now see how having this NIC certification enables Windows Server to support the latest features. In this case, the feature I'll be showing next takes advantage of compute-certified NICs, and with that I'm excited to announce one of the latest features coming to Windows Server 2025: accelerated networking.

So what exactly is accelerated networking? To start, it is the Azure branding for SR-IOV. SR-IOV stands for single root input/output virtualization. This is the default configuration for VMs in the cloud, and SR-IOV as a technology has been supported on Windows Server as a platform since about Windows Server 2012. SR-IOV utilizes virtual functions and queue pairs, both finite resources on a NIC, to bypass the host kernel when delivering network traffic to the VM. You can see in the example on the right how, with accelerated networking, we're actually able to jump over that host kernel.
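On Windows Server today, those SR-IOV building blocks are exposed through standard Hyper-V cmdlets. A minimal sketch with placeholder switch, adapter, and VM names, illustrating what it means for a VM to be backed by a virtual function:

# SR-IOV has to be enabled when the virtual switch is created
New-VMSwitch -Name 'AccelSwitch' -NetAdapterName 'pNIC01' -EnableIov $true

# Give the VM's network adapter an SR-IOV weight (and, optionally, request queue pairs)
# so it is served by a virtual function instead of the software switch path
Set-VMNetworkAdapter -VMName 'VM01' -IovWeight 100 -IovQueuePairsRequested 2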
Now, when using SR-IOV networking on Azure, host configuration, live migration, management, and failure alerting are all handled by Microsoft, since as a user you're not handling or working with the hardware; that's tied to Azure. At the edge, this is a different use case, and this is where some of the pain points I'll be touching on in a bit really come through. But first, let's understand what benefits we can expect to see from accelerated networking. To start, for virtual machines we can expect to see reduced host CPU processing by bypassing the kernel. We also see reduced network latency, so you can expect lower response times, and the consistency of that response time, known as network jitter, is reduced as well. On the host, as I mentioned, we're getting physical core offload by bypassing that kernel, and because of that, more VMs or containers can be run on the same hardware investment. So as a user, that's increased density for your container and VM workloads, increased performance for those virtual machines, and you can actually scale down your hardware investment due to that increase in density.

Now let's take a look at the current state without accelerated networking. If you were to set up SR-IOV at the edge, you have to consider your host operating system: is it supported? You have to ensure that your network adapters not only have SR-IOV drivers installed, but that those SR-IOV adapters are enabled and configured correctly. You have to ensure host hyperthreading settings are consistent. You have to make sure you have a minimum vCPU count per VM depending on the network throughput. Additionally, for the VMs, you have to worry about SR-IOV weight and queue pairs, placing the VMs, and ensuring that the host allocates resources accordingly. That's just to name a few, and on top of all that, all of these need to be monitored once SR-IOV is enabled as well. If any one of these settings changes, you don't get notified, and it takes a while to troubleshoot through all of that.

So how are we solving these pain points? We are delivering a cloud management solution through Arc that allows users to enable SR-IOV at the edge. This feature is going to be available both in Windows Admin Center and in PowerShell. It allows you to check your system prerequisites before enabling, and once the feature is enabled, additional checks are made. You can enable and disable AccelNet both on the cluster and on a per-VM basis. Once enabled, you can also manage the performance settings and change the network throughput that goes to a VM. And lastly, we're able to leverage Network HUD to monitor the health of your accelerated networking deployment, notifying you of any faults or warnings you might need when it comes to these configurations.
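Before the walkthrough, note that most of the host-side prerequisites listed above can already be checked with inbox cmdlets; a minimal sketch, using placeholder adapter and VM names:

# Does the host (BIOS/chipset plus Hyper-V) support SR-IOV, and if not, why not?
Get-VMHost | Select-Object IovSupport, IovSupportReasons

# Do the physical adapters have SR-IOV-capable drivers, and is SR-IOV enabled on them?
Get-NetAdapterSriov -Name 'pNIC01','pNIC02'

# Was the virtual switch created with IOV enabled?
Get-VMSwitch | Select-Object Name, IovEnabled

# Per VM: is the adapter asking for a virtual function, and is it healthy?
Get-VMNetworkAdapter -VMName 'VM01' | Select-Object Name, IovWeight, Status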
So with all that being said, let's see this in action. We have a demo of a two-node cluster where we'll see how you can enable accelerated networking at the edge. I'll be showing you one of the newest features coming to Windows Server 2025: accelerated networking. This feature brings SR-IOV management to the edge and can be accessed through PowerShell or Windows Admin Center, the latter of which I'll be showing today.

To start things off, we select Accelerated networking from the left-hand menu. This feature can be accessed through the cluster view or through an individual server that is part of a cluster. If the initial prerequisites are met, you will be greeted with the setup screen. Once Setup is selected, a side pane appears allowing us to enter two values. The first is the intent we wish to enable accelerated networking on; we will call this 'Intent1'. This must be a compute intent configured by Network ATC. Next, we can select the node reserve. This is the percentage of our nodes we would like to reserve in case of failover, to ensure there are enough resources to keep accelerated networking functional. The default option here is 50%, and since we have two nodes, we will leave this default, reserving just one node. Once we hit save, we can see that accelerated networking is enabled on the cluster with the settings we chose. On this page we also have an option to disable the feature.

Our next step is to choose a VM we want accelerated networking on. Here we have four VMs spread across the two nodes. We select a VM and navigate to its settings, where we can select Accelerated networking on the left-hand side. As we can see, in this case the VM is not connected to an adapter that supports accelerated networking, so we'll need to make that change under the network setting. In the drop-down we have our vSwitch that is connected to our enabled intent, Intent1. We select that and hit save. We then return to the accelerated networking screen, where we can see that the message has gone. We can select On and determine what performance setting we want. These performance settings affect the maximum throughput a VM can achieve. Since SR-IOV uses finite resources on a NIC, we can vary our performance settings across VMs depending on the workloads we expect to run. We now hit save and we're done. Accelerated networking has been enabled on this VM, and we can now take full advantage of the benefits of SR-IOV. Additionally, Network HUD is notified of this enablement and can provide us with ongoing monitoring and diagnostics.

So with that, we're ready to wrap up today's session. To recap, Parm dove into Network ATC, which brings ease, consistency, and automation to network deployments. Network HUD can then be used to help monitor and diagnose your deployment health. And lastly, by leveraging these two critical features, accelerated networking brings SR-IOV management to the edge, providing users with a simple-to-use management interface for improved network performance. We just want to say thank you to all those who tuned in, and we look forward to sharing more information with you all soon.