Here is a demo of running Windows containers on top of OpenShift. In this demo, we want to start simple and just hit the basics. This is going to involve the following steps. First, we're going to create a Windows instance on Azure. Second, we're going to join that Windows instance to an OpenShift cluster also running on Azure. Third, we will go ahead and deploy a real-life workload on top of that Windows node. Here is the architecture we came up with. It consists of an Ansible playbook that initializes the prep of the Windows VM. The core engine that's really driving the whole setup is the Windows Machine Config Bootstrapper, or what we call WMCB, which sets up the kubelet to run as a Windows service, preps the CNI, lays out kube-proxy, sets up the hybrid overlay, and does any other networking plumbing that's needed so that this Windows instance can join the OpenShift cluster. This also runs an OVN hybrid overlay instead of the default OpenShift SDN, because we want to be able to carve out a portion of the network for Windows. So let's start with the cluster that's in place. We already have an OpenShift cluster in place, which we can see now; there isn't really any application deployed on it, as you can see. If we say oc get nodes, we will see a cluster with three master and three worker nodes. And if you want to see what's running on the worker nodes and the master nodes, essentially it's RHCOS (Red Hat Enterprise Linux CoreOS) with a kernel version of 4.18. So it's a completely Linux-based cluster, and soon we're going to join a Windows node to it. So first up, let's actually go ahead and create the Windows instance on Azure. If you go to Azure and create a Windows Server VM, make sure you select the Windows Server 2019 Datacenter with Containers image, make sure you select the right resource group, make sure you give it a proper name, select a region where you have capacity, and make sure you select the right size. 
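The cluster-inspection step narrated above looks roughly like this with the `oc` CLI; exact output will depend on your cluster:

```shell
# Confirm the starting state: an all-Linux cluster with no user workloads.
oc get nodes -o wide           # three masters, three workers, OS image and kernel shown
oc get pods --all-namespaces   # only infrastructure pods at this point
```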
So in this case, we're going to select a D2s v3 for the sake of this demo. Make sure you set up the right user credentials. Inbound ports: none for now. We'll go ahead and accept all the defaults on the disk setup. On the networking setup, make sure that you select the same VNet as the OpenShift cluster. Also make sure that you select the subnet of the worker nodes and not the master nodes. We can let it have a new public IP. We don't really need to set up a new NSG, because we're just going to reuse the NSG of the OpenShift cluster. We do need to set up load balancing, so ensure you select Azure load balancer and then select the right load balancer. We can turn off monitoring and diagnostics because that's really not the objective of this demo, so we can go ahead and turn that off. In the tags section, make sure you specify the right tags that are needed for this VM. And then you always have the option of downloading the Azure Resource Manager template for future automation, or you can go ahead and create it. In this case, I already have the Windows VM created, so let's just go and review that. Once you click on the Windows node that has been created on Azure, make sure you note the public IP address and also the private IP address. As you can see, it's been created in the same VNet and subnet as the worker nodes. It's also in the same resource group as the OpenShift cluster. You can go and examine all the resources created, in particular the network security group for the node. Make sure that you have all the right inbound security rules in place to allow traffic into the appropriate nodes. So right now the Windows node has been created in Azure. The next step is to bootstrap it. With the help of the Ansible script, we can join this particular Windows instance to the OpenShift cluster. And before we do that, there is one minor thing we have to do inside the Windows node. 
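As an alternative to clicking through the portal, the same VM could be sketched with the Azure CLI. The resource group, VNet, subnet, and NSG names below are placeholders you would replace with your cluster's actual values:

```shell
# Sketch only: create the Windows Server 2019 "with Containers" VM inside the
# OpenShift cluster's resource group, VNet, worker subnet, and existing NSG.
az vm create \
  --resource-group my-ocp-rg \
  --name winworker-1 \
  --image MicrosoftWindowsServer:WindowsServer:2019-Datacenter-with-Containers:latest \
  --size Standard_D2s_v3 \
  --vnet-name my-ocp-vnet \
  --subnet worker-subnet \
  --nsg my-ocp-node-nsg \
  --admin-username winadmin \
  --admin-password '<your-password>'
```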
We actually have to run a couple of scripts to enable the Ansible connection. The first script will enable remote Ansible connections to be accepted. Next, we need to open up TCP port 10250 so that when we say oc logs, we can get the logs from containers on this Windows node. So make sure you run those two scripts. And once those two scripts have run, you can come back and run the Ansible playbook, which will make sure that, through the Ansible connection, the Windows node is joined to the OpenShift cluster. Go ahead and set that up. This should take a few minutes, and after the script completes, you should have an OpenShift cluster that has a Windows node added to it. So far we have created the Windows instance and we have successfully joined it to an OpenShift cluster. The next step is to actually deploy an application, some real-life workload, on the Windows node. We're gonna take a Windows container and we're gonna schedule it on this Windows node. Before we do that, we need to know the concept of taints. Node affinity is a property of pods that attracts them to a set of nodes, and taints are exactly the opposite: they allow a node to repel a set of pods. So if you issue this command, we can actually get all the nodes and all their taints. As you can see, the three master nodes have some taints, because you obviously don't wanna schedule any workloads on them. The three Linux-based worker nodes don't have any taints, but the Windows node has a taint of os=Windows. And unless a corresponding container has a toleration of os=Windows, it will not be scheduled on this particular node. So let's look at this particular web container, which is a Windows-based application that we're gonna schedule on this Windows node. You can see it has a toleration of os=Windows, which means it'll be scheduled on that Windows node. So let's go ahead and deploy this Windows-based web container. 
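The taint/toleration pairing described above looks roughly like this in a pod spec. The pod name and container image are placeholders for whatever Windows web server workload you are deploying:

```yaml
# The Windows node carries a taint like os=Windows:NoSchedule.
# This pod tolerates that taint and selects a Windows node explicitly.
apiVersion: v1
kind: Pod
metadata:
  name: win-webserver
spec:
  tolerations:
  - key: "os"
    operator: "Equal"
    value: "Windows"
    effect: "NoSchedule"
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: win-webserver
    image: mcr.microsoft.com/windows/servercore:ltsc2019  # placeholder image
```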
So we'll say oc create -f. That should go ahead and deploy this Windows container on the Windows node. If I say oc get nodes, you see that I have my Windows node, and if I say oc get pods, I should have a newly created pod for the Windows web server. If I say oc describe pod and give the name of the pod, you will clearly see that it's running on the Windows node. So in this case, the Windows web server container is scheduled to run on the Windows node, right? This deployment also exposed this pod as a service, which means if you say oc get services, we should be able to get the external IP address for this Windows-based application. Copy that, let's hit it in the browser, and there we go. We're able to access the application, which is a Windows-based web container running inside a Windows node. That shows an example of north-south traffic. Next step: I don't wanna stop here. I wanna actually show you an example of how you can also deploy a Linux container side by side with the Windows container. Let's take the example of, say, an Nginx container. I have an Nginx deployment here. It's gonna create one replica, it's gonna be exposed on port 80, and it is an Nginx-based container. So if I say oc create -f, this is gonna schedule and run these Linux-based Nginx containers on the Linux nodes. If I say oc get pods, you'll see a pod created for Nginx. If I say oc describe pod with the name of the Nginx pod, you'll clearly see that the Nginx pod has been scheduled to run on one of the Linux worker nodes. We can go ahead and expose this Nginx-based pod as a service, so we can hit that as well. So let's say kubectl expose deployment, and let's expose it as a type of load balancer. So that's exposed now. If I say oc get pods and oc get services, I should get an external IP address for my Nginx service. Give it a couple of seconds; the external IP is still coming up. Try again. 
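The deployment sequence just walked through can be summarized as the following commands; the manifest filenames and the deployment name are assumptions based on the narration:

```shell
# Windows workload: the toleration os=Windows lands it on the Windows node.
oc create -f win-webserver.yaml
oc get pods -o wide    # confirm the pod is running on the Windows node
oc get services        # note the external IP of the Windows web service

# Linux workload, side by side on the same cluster.
oc create -f nginx-deployment.yaml
kubectl expose deployment nginx --type=LoadBalancer --port=80
oc get services        # the external IP may take a minute to appear
```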
As you can see, this is now available with an external IP address, and if I hit it from a browser, I should be able to see my Nginx container come up. So wrapping up, we deployed both a Windows container and a Linux container on the same OCP cluster. The Windows container got scheduled on the Windows node and the Linux container got scheduled on the Linux worker nodes. And all of this is managed by the same OCP control plane. So if you have a microservices application, part of it running on Windows and part of it running on Linux, this is an excellent example of how you can deploy the entire application on the OpenShift Container Platform. And that brings us to the end of this demo.