Hi, my name is Chris Hernandez from the Cloud Platforms Business Unit at Red Hat. In this video, I'm going to run a Windows container on OpenShift 4 using an OpenShift-managed Windows node. It's important to note that running Windows containers on OpenShift 4 is currently a developer preview and won't be fully supported until it reaches general availability (GA).

Before we get started, let's make sure we have the prerequisites out of the way. You want to be running Ansible 2.9 with the WinRM Python module installed, and an RDP client is also helpful. The AWS CLI is required as well, with your credentials stored under ~/.aws/credentials. The oc CLI needs to be version 4.4, and the corresponding openshift-install CLI also needs to be at 4.4.

openshift-install keeps all of its installation artifacts in a directory, so we'll create a directory called cluster for this purpose. Next we need an install-config file, so we'll run openshift-install create install-config and specify that directory. This starts a wizard where we choose an SSH key; I'm installing on AWS, so I'll choose that as the platform, along with the us-east-1 region. I'll choose a domain that I own on AWS and give the cluster a name, wc in this case. The pull secret I have in my copy buffer can be obtained at try.openshift.com.

Once this is done, there will be an install-config.yaml file in that cluster directory, and we need to make a change to it: replace the default OpenShiftSDN network type with OVNKubernetes. Verify the change is made; as you can see, my networkType is now set to OVNKubernetes. Next, we generate the manifests for the installation by running openshift-install create manifests, again specifying the directory. Once that's done, we make a copy of the generated network configuration manifest and edit it. In this file, we need to make a few changes.
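The install-config steps above can be sketched as follows. The directory name cluster matches the video; the minimal YAML written here is only a stand-in for the file the wizard actually generates (the real install-config.yaml has many more fields), so the focus is the networkType edit:

```shell
# Create the artifacts directory and (interactively) generate the install config.
mkdir -p cluster
# openshift-install create install-config --dir=cluster   # wizard: SSH key, platform, region, domain, name, pull secret

# Stand-in for the wizard's output -- the real install-config.yaml is much larger.
cat > cluster/install-config.yaml <<'EOF'
networking:
  networkType: OpenShiftSDN
EOF

# Swap the default SDN for OVNKubernetes and verify the change.
sed -i 's/networkType: OpenShiftSDN/networkType: OVNKubernetes/' cluster/install-config.yaml
grep networkType cluster/install-config.yaml

# Generate the manifests from the edited config.
# openshift-install create manifests --dir=cluster
```

The openshift-install commands are left commented since they need real credentials and a pull secret; the sed edit is the part that matters here.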
You need to make sure the apiVersion is set to operator.openshift.io/v1, and then set the OVNKubernetes settings. Take note here that the CIDR for my OVN hybrid network does not overlap my cluster network. Now we're ready to begin the install: run openshift-install create cluster, specifying the directory where the manifests are. The install can take some time, so I'll pause here and return when it's done.

Once the cluster is installed, run the export KUBECONFIG command that the installer displays; after you run it, you're logged into the cluster. You can verify this by running oc get nodes, which displays the nodes; note that there are three masters and three workers. Also run oc version to confirm you're running 4.4, and verify that the network type was set to OVNKubernetes by running oc get network.operator cluster -o yaml.

Now we need to install a Windows node. For testing purposes, we'll use the wni CLI tool provided by the OpenShift engineering team. Please note that this tool is for testing only and is not suitable for provisioning production-grade Windows servers. Once I have it downloaded, I chmod it to make it executable, and then run the wni command to create my Windows node. Before I do this, though, I need to create a directory where the artifacts will be stored. With that directory created, I can go ahead and create the Windows node. The installation of the Windows node can take some time, so I'll pause here and come back when it's done.

Now that the Windows server has finished installing, you'll see in the output the administrator username and the password to use to connect to it. We'll use this information to create an Ansible inventory file, which a playbook will then use to bootstrap the node.
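The network-manifest edit described at the start of this step can be sketched like this. The file name and the 10.132.0.0/14 hybrid CIDR are illustrative assumptions; any block that does not overlap your cluster network will do:

```shell
# Copy of the generated network config, edited to enable hybrid overlay
# networking for Windows nodes. Run after `openshift-install create manifests`.
# The file name and CIDR below are illustrative.
mkdir -p cluster/manifests
cat > cluster/manifests/cluster-network-03-config.yml <<'EOF'
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
EOF

# Then kick off the install:
# openshift-install create cluster --dir=cluster
```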
The information needed for this inventory file is the IP address of the node, the Ansible password (which is the administrator password), and the username, Administrator. Note that you may need to single-quote the password. Another value to keep in mind is the cluster address, which is just your cluster ID plus your domain name. All of the information you need was in the output of the wni command.

Now we run Ansible with the win_ping module to verify connectivity; once you see the green "pong" success message, you know you have a good connection. Next, we git clone the installation playbook. Once the playbook is cloned, you can run the ansible-playbook command to bootstrap the machine; this runs some basic checks before the installation starts. While we're waiting for this, I have launched an RDP session and logged in to my Windows server. Here you can run the winver command to see that we're running Windows Server 2019, and you can launch PowerShell from the Start menu to verify that Docker is running. Back in the installation window, you can see this may take a while, so I'll pause here and return when the playbook has finished bootstrapping.

Once the playbook is finished running, the Windows node is bootstrapped and joined to the cluster. You can see the Windows node by running oc get nodes -o wide; we now have a new node with the OS image Windows Server 2019 Datacenter. We can also find this node with a label selector: run the same oc get nodes command with -l, specifying the label kubernetes.io/os=windows.

We're now ready to deploy a sample application: a Windows container with a web server in it. Once we deploy it, we can monitor the rollout with the oc rollout command, and once the application is finished rolling out, we can inspect it by switching over to our RDP session.
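The inventory and connectivity check above can be sketched as below. The IP address, password, and cluster address are placeholders for the values printed by wni, and the variable names in the [win:vars] section are assumptions based on a typical WinRM inventory; the playbook repository URL and path are left as placeholders since they aren't named in the video:

```shell
# Placeholder values -- substitute what the wni output gave you.
cat > hosts <<'EOF'
[win]
10.0.55.27 ansible_password='REPLACE_WITH_ADMIN_PASSWORD'

[win:vars]
ansible_user=Administrator
cluster_address=wc.example.com
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
EOF

# Verify WinRM connectivity; success prints a green "pong".
# ansible win -i hosts -m win_ping

# Clone the installation playbook and bootstrap the node:
# git clone <installation-playbook-repo>
# ansible-playbook -i hosts <path-to-bootstrap-playbook>
```

Single-quoting the password in the inventory, as the video mentions, keeps special characters from being interpreted by Ansible's parser.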
In our RDP session, we can run docker ps to see the container running. Note that the container name has a k8s_ prefix, signifying that it's managed by Kubernetes. When we open Task Manager, we can see a few Kubernetes-specific processes running: the kubelet and kube-proxy, as well as the hybrid-overlay networking process.

Switching back to the CLI, we can see that this application also deployed a service, which we can get by running oc get svc. Note that the win-webserver service is of type LoadBalancer, and a LoadBalancer service creates an AWS ELB with a DNS name. DNS can take some time to propagate; in my case, it took about five minutes. Once DNS resolves, we can curl the endpoint: run oc get svc to get the hostname, then run curl against it. The application is up and running.

Thank you for watching this video about Windows containers, and I invite you to try out Windows containers on the OpenShift 4 dev preview. Thank you.
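The final check can be sketched as below. Since oc isn't available here, the JSON is a stand-in for what oc get svc win-webserver -o json would return, and the ELB hostname is made up:

```shell
# Stand-in for the Service object returned by:
#   oc get svc win-webserver -o json
cat > svc.json <<'EOF'
{"status":{"loadBalancer":{"ingress":[{"hostname":"a1b2c3-1234.us-east-1.elb.amazonaws.com"}]}}}
EOF

# Pull out the ELB hostname. Against a live cluster you could instead use:
#   oc get svc win-webserver -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
HOST=$(sed -n 's/.*"hostname":"\([^"]*\)".*/\1/p' svc.json)
echo "$HOST"

# Once DNS has propagated (about five minutes in the video), curl the endpoint:
# curl "http://$HOST"
```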