Welcome to the session, Kubernetes on Windows: A Journey. My name is Jerry Lozano. I am a software developer and I work for RxM, an enterprise cloud native development training and consulting firm. I carry the title of senior consultant, but I am a software developer who specializes in system-level and operating-system development. My work over the years has led me through Unix, Linux, and Windows. I've developed device drivers for all of those operating systems, and I have focused on application frameworks and related software. Architecting applications that are microservice oriented and cloud hosted was a somewhat natural migration for my interests and skill set. So thank you for attending.

The title of this presentation is a little misleading, but we seem to be attracted to intriguing titles, so here we are: Kubernetes on Windows. Kubernetes is designed to run Linux containers in an orchestrated environment. K8s is built upon Linux concepts such as cgroups and iptables, and the project itself is hosted by the CNCF, which is part of the Linux Foundation. So the obvious questions to start this presentation: when did Windows enter the world of K8s, and why does anyone, or should anyone, care about Windows in the world of K8s? Both are very good questions.

So does Kubernetes even run on Windows? Well, the control plane of Kubernetes does not, and it may never run outside of Linux. The official Kubernetes documentation states that there are no plans to support a Windows-only cluster. But for some time now, since 1.14, K8s has permitted mixed clusters with Windows worker nodes running side by side with Linux worker nodes. Today, a mixed cluster requires Kubernetes 1.17 or later. I understand that for many people attending today the idea of a mixed cluster can be unsettling, but the idea of including Windows worker node environments is powerful and makes perfect sense for many scenarios.

So why Windows? The data shows that for the popular and important sites, Windows is doing quite well. W3Techs shows that Windows is used on more of the top 1,000 websites than is Unix or Linux. Statista shows that over 70% of the global server market is Windows based. But look, we're not here to argue about statistics or the different ways to look at market penetration. The point here is clear: there is a ton of code on Windows that faces the same challenges that any application moving forward faces. That includes scalability, availability (including robustness and data integrity), manageability, and all the features that K8s enables for enterprise applications.

Here is another perspective. Containers, or pods in the case of K8s, rely upon the most suitable OS for the job, for the microservice at hand. Containers are a virtual OS, after all. Our container base images, Ubuntu compared with Fedora for example, are distros with differences that affect the microservice. Containers isolate those differences or features, and they allow for side-by-side hosting of services built upon different OS bases. If a K8s worker node running on Linux could host a Windows container, a container with the Windows API, the Windows frameworks, the libraries, the languages, and everything else that makes an app dependent upon Windows, then we wouldn't need this talk now, would we? But that's not the case. We need Windows worker nodes to host Windows containers. So, here we are.
Like most things in our industry, maybe most things in life, the actual path to hosting a mixed cluster may not be as straightforward as we might hope, or as the documentation might imply. At a high level, the problems we face fall into two categories: those that come from the environment where we choose to run the two operating systems (I'm thinking about AWS here and building the mixed cluster there; AWS networking options present problems of their own when you work with a mixed cluster), and those that come from putting two different operating systems on the same cluster, the same network, the same orchestration scheme, in other words the Kubernetes portion of the problem. This presentation is a chronicle of that journey, the journey of building a Kubernetes cluster with Windows worker nodes.

To describe the journey, we're going to build a representative, if simple, example of a mixed cluster, set up on AWS. Here's what we have set up for the purposes of this discussion: an Ubuntu server running the K8s control plane, one Ubuntu worker node, and finally one Windows Server worker node. Three machines in the cluster. The Linux worker node might host an NGINX webpage offering up something for sale, widgets for sale. The Windows node, on the other hand, is going to host a microservice written in C#, dependent upon the .NET Framework and ASP.NET. The purpose of that microservice will be to authorize the credit card number and amount supplied by the user of the webpage on the Linux node.

Here is a simple diagram of our example. We set up an Ubuntu server running the K8s control plane, and it will be managing two machines in the cluster. One will be a K8s worker node running Windows Server, which in turn will be hosting the credit card authorization microservice. The other worker node will be a traditional Linux node. It happens to be running Ubuntu, and that is where we will host NGINX, the web hosting software.

Along the path of our journey, we encountered several noteworthy problems, and we want to talk about each one. The first problem is setting up the Windows server. Depending on the provider or the environment you are using, this problem will vary in difficulty and cost. But here is what you must know. First, K8s requires that Windows worker nodes run Windows Server 2019. We assume that this is one of those "or later" requirements, but for now the documentation says Windows Server 2019. And there are several things to note here. Most of us use SSH to connect to a new node for first-time setup, debugging, monitoring, or whatever, to reach our Linux console. But Windows Server starts with a graphical environment, so on AWS we need to do our initial connection using RDP. From a Windows box, we would use Remote Desktop to connect easily. Other concerns in setting up Windows Server involve licensing. We don't go into that much here on the slide, but make sure your licensing is valid when using Windows Server, especially in production. You don't need a trial license expiring 30 days after you deploy; that would be disastrous. This screenshot shows the Windows Server 2019 worker node with the About box open, and it just confirms the edition we chose to use for our worker node.
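If you would rather confirm that from the command line than from the About box, a quick PowerShell query does the same job. This is a minimal sketch; the property names shown are as I recall them, so check them on your own server.

```powershell
# Confirm the Windows Server edition, version, and build on the new node
# from PowerShell instead of the About box.
Get-ComputerInfo -Property WindowsProductName, OsVersion, OsBuildNumber
```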
By the way, AWS lets you create a Windows Server virtual machine on a micro-sized instance. This does work, but you probably won't be satisfied with the resulting performance as a worker node, not in production. The Windows kubelet service plus even a single microservice may completely consume the resources of a micro-sized VM. If you need to set up on a small-sized node, you probably want to consider an edition of Windows Server that's smaller than Datacenter, which is what I chose here.

Okay, so the next problem, problem number two on this journey, is in many ways the heart of the challenge we're describing today: adding a Windows node to the cluster. Surprisingly, many of the steps involved here felt familiar. They were the same as, or similar to, adding any worker node to a K8s cluster, and so you will see that we use the identical kubeadm join command that we use when we join a Linux node to the cluster. But first, there are currently some restrictions on how the cluster network has to be configured for Windows nodes to join. For example, we only have two choices for the network: Flannel, or an L2 bridge / host-gateway mode. We have to choose one or the other right now. If you are using Flannel, the VXLAN network identifier (the VNI) must be set to 4096 and you have to use a fixed port of 4789, or the Windows worker node just won't work within the cluster. The L2 bridge / host-gateway mode has its own set of requirements. But remember, regardless, these are current restrictions and could, and probably will, change over time. I would strongly advise reading the current documentation and understanding Flannel before embarking on your own journey to set up a mixed cluster. The slide shows how we add the VNI and port specification to the kube-flannel YAML file on the control plane.

Okay, we also have to add Kubernetes support to the Windows worker node. You might be surprised to learn that Windows has its own command line tool to install Windows features, such as container or Docker support: you just use the Install-WindowsFeature command from PowerShell. PowerShell is Windows' default command line tool; by the way, scripts for PowerShell typically end with the .ps1 extension. And there is a GitHub repo for the latest tooling to install and set up K8s on Windows. It comes down to a PowerShell script, PrepareNode.ps1. Once it gets executed, which you see here, we end up with very familiar kubelet and kubeadm executables, .exe's on Windows. Some paths are even familiar. Look at /var/log; okay, it's under that C: drive, and nobody likes the backslashes (although PowerShell will accept either backslashes or forward slashes, but that's another story), and we don't like the C: part there, but it's still pretty close to a familiar path. Then, as you can see on the right side here, we use the exact same kubeadm join command, with the token that was generated by our control plane, to get our Windows worker node to join.

This process works. And how do we know that it worked? Easy enough to check: we use kubectl get nodes, and we see that both the Linux node (there's our Ubuntu worker node) and the Windows worker node (there's our Windows Server 2019 Datacenter edition) are up and running, each with its own internal IP address. They're ready for use.
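To make that Flannel requirement concrete, here is roughly what the backend portion of the kube-flannel ConfigMap's net-conf.json ends up looking like once the VNI and port are pinned. The 10.244.0.0/16 network shown is only the common default and is an assumption here; it should match whatever pod CIDR you passed to kubeadm init.

```yaml
# Excerpt from the kube-flannel ConfigMap on the control plane (illustrative).
# VNI 4096 and port 4789 are the values Windows worker nodes currently require.
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan",
      "VNI": 4096,
      "Port": 4789
    }
  }
```

Apply the edited kube-flannel YAML before the Windows node tries to join; once it is in place, the PrepareNode.ps1 and kubeadm join steps described above are all the Windows side needs.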
I skipped over the process of adding the Linux worker node to the cluster; we understand that's the familiar part of this journey, so I didn't go into it. Our next step here has to be to deploy our pod, and that means running the credit card auth service on the Windows node, which means building a containerized microservice for Windows. We have to build a Windows container image. Exactly how are we going to install that service inside of our container? At this point, we can consider two possibilities, I think. One would be a standard Windows installer file, known as an MSI file. When you execute an MSI file, it's like an RPM file: it installs the application, making changes to the system; it mutates the system to host the application. So Windows uses an MSI file, and when we use one for a web service, it installs a virtual directory under IIS. IIS is Microsoft's web hosting service; it stands for Internet Information Services, and it's typically listening on ports 80 and 443. A virtual directory is just a route to the desired resource, a part of the URI. Alternatively, we could just copy the needed files. Our service is going to end in a .svc extension, so we could just copy that file and any other necessary files, the DLLs for example, to the wwwroot directory, which is the IIS default directory of resources. Either of these techniques would work when we build our container image.

All right, now let's take a look at our Windows microservice. We thought about implementing this as a Windows service, but the implementation we're going to use here is just an example of the real Windows code that would be used in a real K8s application. As we explained earlier, this is written in C#, and we chose to implement it as a web service; we're going to access it as a RESTful service. First, notice the interface contract, ICCAuthService. It shows a simple WebGet: we're going to use a GET, not a POST, to invoke the Authorize method. The Authorize method will receive two arguments: a pretty long credit card number, a 64-bit unsigned integer, and an amount to charge, passed in as a .NET decimal data type. The actual data types don't matter too much, because we know that we will be passing data as strings over a RESTful interface. How we convert the numbers and use them inside of our function will determine whether or not we conform to the data types of the interface.

But then we have to implement Authorize; Authorize here is just an interface method. So here is the implementation, and it is contrived just to authorize a credit card number. As you can see, if the credit card number ends in 9, or if the amount is over $1,000, then the authorization request is declined. Otherwise, we authorize the charge and generate a random four-digit auth code between 1000 and 9999. It's just a contrived implementation. Programmers who are familiar with C# and Windows web services would be very comfortable seeing this kind of code. This simple web service was built and tested using Microsoft Visual Studio on a standard Windows development machine. But now we have to containerize the service. We need a Docker image. We're going to use the docker image build command on Windows, and we're going to feed it a Dockerfile that in many ways is something we are familiar with.
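Before we get to that Dockerfile, here is a minimal sketch of what a WCF-style contract and implementation along the lines just described might look like. The type and method names, the UriTemplate, and the return values are my assumptions, not the code from the slides.

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Web;

// Sketch of the service contract described in the talk (names are assumed).
[ServiceContract]
public interface ICCAuthService
{
    [OperationContract]
    [WebGet(UriTemplate = "authorize?card={cardNumber}&amount={amount}")]
    string Authorize(ulong cardNumber, decimal amount);
}

// Contrived implementation: decline when the card number ends in 9 or the
// charge exceeds $1,000; otherwise return a random four-digit auth code.
public class CCAuthService : ICCAuthService
{
    private static readonly Random Rng = new Random();

    public string Authorize(ulong cardNumber, decimal amount)
    {
        if (cardNumber % 10 == 9 || amount > 1000m)
        {
            return "DECLINED";
        }
        return Rng.Next(1000, 10000).ToString(); // 1000..9999 inclusive
    }
}
```

Hosting this as a .svc under IIS also needs the usual webHttpBinding endpoint with the webHttp behavior in Web.config, which is omitted here.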
We start with a base image; that image, as shown on the slide, is microsoft/iis. This gives us a Windows Server image with IIS already installed. There are several base images supplied by Microsoft, and you can imagine these would be available elsewhere, like on Azure. That was supposed to be funny, by the way; of course they're available on Azure. These base images are not necessarily configured the way you would like. For example, the ASP.NET Framework isn't installed on microsoft/iis, which might be a little surprising, and web service files like our CCAuth .svc file won't be processed, not by default. So our Dockerfile has to enable these features, and we have to install any missing prerequisites for the service. You can see how we chose our base image and how we installed the missing features and necessary options in this example here. We once again use Install-WindowsFeature, this time to install ASP.NET. We also want the web management compatibility feature so that our MSI file can install; that's why that prerequisite was installed. And we have the HTTP activation feature turned on so that our service file, our .svc file, will be properly processed.

Then all we have to do is install the application itself, our little CCAuth service, which can be done with an MSI file as I discussed. How are we going to do that? I built my MSI file on my Windows machine, but now I've got to get it into my Docker image. First things first: copy the MSI file to an accessible path within our build environment, and then run msiexec. If we do that, if we run msiexec against our CCAuth.msi file, then we've got it. We can just do a docker image build, as I'm showing on this slide, and lo and behold, we expect to get an image tagged ccauth present on my Windows machine. This is almost too easy, in some ways. Of course, what's missing from this slide is that we should test the container. We should do a docker container run and then maybe use curl or something like that to ensure that our RESTful interface, our call to Authorize, actually works the way we think it should from within the container. That step was actually done when I built this example, but it's pretty obvious that that's what you would do.
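Putting those pieces together, the Dockerfile ends up looking roughly like the sketch below. The exact Windows feature names, file names, and install path here are assumptions rather than a copy of the slide; check the feature names against Get-WindowsFeature on your own server.

```dockerfile
# Sketch of a Dockerfile along the lines described (names and paths assumed).
FROM microsoft/iis

# ASP.NET, IIS management compatibility (needed by the MSI installer),
# and WCF HTTP activation so the .svc file is processed.
RUN powershell -Command "Install-WindowsFeature Web-Asp-Net45, Web-Mgmt-Compat, NET-WCF-HTTP-Activation45"

# Copy the installer into the image and run it silently;
# Start-Process -Wait keeps the build step from finishing before msiexec does.
COPY CCAuth.msi /install/CCAuth.msi
RUN powershell -Command "Start-Process msiexec.exe -ArgumentList '/i','C:\install\CCAuth.msi','/qn' -Wait"

EXPOSE 80
```

From there, docker image build -t ccauth . on the Windows machine produces the tagged image, and a quick docker container run plus curl against the Authorize URL is the sanity check mentioned above.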
Okay, we're almost done, but we need to make sure that the control plane launches our service only on Windows worker nodes. Remember, even in our simple example we have two worker nodes: one is Linux, which has no chance of hosting this container, and the other is the Windows worker node, where we need to make sure this happens. We can use selectors, Kubernetes node selectors, to solve this problem. In fact, there is a built-in label, kubernetes.io/os, and it will be set to either linux or windows on a given node. So our pod configuration file should use kubernetes.io/os to select Windows. You can see the CCAuth YAML file, or at least an excerpt from it, where I use that node selector with a value of windows to make sure that our ccauth container only runs on Windows worker nodes. We're basically gaining affinity from this value, so it's an important step.

With that, I should think we're ready to launch the pod. And how am I going to do that? Well, this should be very familiar: kubectl run ccauth, and we'll specify our image, which we built earlier, as ccauth. With any luck, it'll come back and say pod ccauth created. We could use kubectl get pods to confirm that everything is running, along with any of the usual commands. Want to shut it down with kubectl? You can use delete pod ccauth, and down it goes. This should be a very familiar pattern at this point.

Okay, so good for us. We got this up and running, and our entire application can work. We now have a RESTful interface where the implementation of the microservice is running on a Windows worker node. But we should point out that some current restrictions exist in this story. First, host networking mode is not available for Windows pods. And at the moment, service VIPs, the virtual IP addresses assigned to a service rather than to the implementation of that service, cannot be used by Windows worker nodes. This is a pretty severe restriction when dynamic scaling is occurring in large applications; hopefully it will get lifted soon. I'd also point out that at the moment a single service backed by Windows pods is limited to 64 pods, so the need for service VIPs is reduced. Both restrictions need to go, that's pretty clear. You can't currently use IPv6 within a cluster that includes Windows worker nodes. And secrets aren't yet fully integrated with Kubernetes Secrets on Windows nodes. There are more limitations, actually many others; I'm going to give you a link here to the current state of affairs. No one is saying this is a perfect implementation in a completed state at the moment. But, and I think this is important, working with a mixed cluster is simply not that different from what we are familiar with.

I'd like to end this talk by pointing out that there are, at least in my mind, two major reasons to want to mix Linux and Windows in a K8s cluster. One, there may be a significant installed base of code that needs to enter the K8s world. You can't dismiss this point with "oh, just port it all forward onto Linux." You know, those Windows messages could all be Linux signals, right? Again, that was meant to be funny, but it brings up a serious point: people who don't have to do the job always seem to be the ones claiming that "all you have to do is" whatever. Again, there is a lot of existing code that can and should be running in a containerized and orchestrated environment. The second reason to use a mixed cluster, I think, is also valid. Windows and Linux offer different development and execution environments, and diversity, like in life, is a very good thing. Windows offers unique tools, libraries, frameworks, everything that has made it a commercial and production success in the marketplace. New applications, I would argue, should be allowed to choose their execution environment, and a mixed Kubernetes cluster permits that freedom.

In any event, I hope you enjoyed hearing about this journey. There are obviously many more events that occurred on this journey, and that will occur anytime you build a mixed cluster solution, but hopefully you found some of this talk useful. I hope to meet others here who have experienced a similar journey; I'd like to hear from you. But again, thank you for attending.