I'm just testing my mic. Can everybody hear me? I guess so. So welcome, folks. Thanks for taking the time to attend this presentation. I'm going to be talking about Windows in an OpenShift world; it's about enabling Windows workloads in OpenShift. All right, let me see how this works. Oh, awesome.

So before we get into this deep-dive type presentation, I'd like to see a show of hands: how many people have used a Linux container? It's almost 100%. Now, how many people have used a Windows container? I'm quite surprised, but then again, it's Antonio. So in many ways, Windows containers are new to the game, even though Microsoft has been working on this for a good five to six years.

So let's start with a quick history of Windows containers. There are some differences when you compare them with Linux containers. There are a couple of ways you can run a Windows container, and that differentiation is based on isolation modes. The first mode people typically think about when they think about containers is process isolation. Process isolation means that you have namespaces, and then you use those namespaces to protect containers from each other, for multi-tenancy purposes, et cetera. However, Windows has no construct called namespaces. So because of that, they came up with this construct which they call a silo. It's very close to namespaces, but doesn't work exactly the way a namespace does. So that takes care of namespacing, but what do you do about controlling how much resource a particular container is using? To handle this scenario, Windows already had a construct called job objects, and a job object can be used to control a set of Windows processes. So they borrowed that job object concept and decided to use it for their containers.

The other thing with process isolation is, of course, as you all know with containers, the kernel is shared. So when the container is running, it is actually sharing the kernel that's on the host. However, a key difference here is that Microsoft gives you no security guarantees if you use containers with process isolation in a multi-tenant environment. On top of that, there's an added restriction in this world: your host and container kernels have to match. So if you created your container on Windows Server 2019, you can only run it on Windows Server 2019 if you're using process containers. If it was built for Server 2016, you have to run it on Server 2016.

To get over these sorts of restrictions, what Microsoft said is: we'll help you, and we'll give you security guarantees, if you use Hyper-V isolation. So what is Hyper-V isolation? Hyper-V isolation uses virtualization; they basically run the container inside a very highly optimized virtual machine. What do I mean by a very highly optimized virtual machine? Instead of booting the virtual machine from scratch, where you start in real mode, go to protected mode (32-bit), then transition into long mode, and then deal with a whole slew of drivers for keyboard, video, mouse and all the other fun stuff, they said, let's try something different. Let's boot the kernel at a spot where it's already running in long mode, in 64-bit mode.
So that saves a bunch of time. And then what they did is they pared down the device model, so you don't have all the devices you would have in a regular VM in this little, highly optimized virtual machine that they're running. That's a huge advantage, because what we're used to with containers is: if I do a podman run on a container, it comes up immediately. They were trying to get as close to that as they could, even though they were using virtualization, so that's the reason for all this optimization. And with that you get another advantage: your host kernel and your container kernel no longer need to match, because, hey, you have a VM here, so you can just run everything within that VM and a mismatch is fine. But there are still some restrictions. Say you have a container that was built for Server 2019. You can't take that and run it on a Windows Server 2016 host. Microsoft says, no, we're not going to allow that. So those are some restrictions that folks are going to have to live with.

So I've been saying "container kernel" and "images", and you must all be wondering: what is going on here? Why am I even talking about a kernel when it comes to a container image? To explain that, let's first start with what Microsoft decided to do to help their application developers containerize their software. They said, we'll give you a set of base images, because what happens with Windows applications is that they depend on a whole bunch of user-mode APIs. Everything that you require to run your application is not provided by the kernel, the way it happens with the monolithic Linux kernel. The Windows kernel is split into pieces: you have the kernel that runs in the most privileged mode, and then a lot of the other services are actually provided by user-mode DLLs. So they said, we'll give you these base images, which are the building blocks for all your containers. You have to start from one of these base images.

So how do you pick one of these base images? It really depends on your application. Say you have a traditional .NET application: you're going to say, I'll use Windows Server Core. Say you have a more complex application that requires the full set of Windows APIs that's out there: then you would pick the Windows version of the base image. And they also have something called IoT Core; if you're writing an IoT app, you typically use that.

The reason for all this is, as I was mentioning, these DLLs that run in user mode have a very tight coupling with the kernel. And to make things worse, that ABI is not public and it's not stable. There's a lot of reverse engineering that happens to discover those ABIs, but if you take advantage of that and decide to use it, Microsoft says they give you no guarantees that they'll honor those ABIs, because you're not supposed to use them. So because of this tight coupling between those user-mode libraries and the kernel, you have to be careful about which image you pick when you're containerizing your application. The other reason for having a kernel in there is Hyper-V isolation: you're running in a VM, and if you want to run a VM, you have to boot from a kernel. So the end result is that your container image not only contains your application and any libraries it needs, it also needs the kernel in there. It sort of comes with the territory if you're running Windows applications.
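To make that a bit more concrete, here's a rough sketch of what pulling and running one of those base images looks like with Docker on a Windows host. The image name and tag are the real Microsoft Container Registry ones, but treat the exact commands as illustrative:

```
REM Pull one of the base images (note you have to give an explicit tag):
docker pull mcr.microsoft.com/windows/servercore:ltsc2019

REM Process isolation: shares the host kernel, so host and container versions must match.
docker run --rm --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver

REM Hyper-V isolation: runs the container inside that pared-down utility VM,
REM so the kernel versions no longer have to match exactly.
docker run --rm --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver
```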
Now that I have the kernel in there, plus these user-mode libraries that my application depends on, you can take the most basic Windows image out there and you're going to find it has a pretty large disk footprint. And this can actually cause problems in the Kubernetes space, because what happens is you have the kubelet running inside a Windows node, and you try to launch an application that's pulling down a 10 GB container image. After a while the kubelet will say, hey, you know what, I've been waiting too long, I think something's going wrong, I'm going to stop trying. So you have to do something smart: you either increase the timeout the kubelet will wait in these scenarios, or you pre-pull those container images up front. The other quirk with Windows container images is that they don't support the latest tag. So you have to be specific: if you want to use Windows Server Core, you have to specify a tag (I think the tag is ltsc2019) which indicates that this is a 2019 instance of Windows Server Core, and there are similar ones if you want to run Windows Server 2016 containers. So yeah, that's another difference you'd have to get used to in this world.

All right, so now let's get on to a more targeted talk about what we are doing with Windows workloads in OpenShift. The question many people ask me is: why are you doing this? We're Red Hat, so why do we care about Windows, and what's this whole relationship with Microsoft? Well, the answer is customers. Customers have basically come and told us: hey, we have large Windows applications, large .NET applications, that we would love to run in OpenShift, because we like what you did with our Linux applications. We were able to manage and orchestrate all of that using OpenShift and we love it; we want to do the same for our Windows applications. So that's why the Windows container team was formed within OpenShift, and we have started going down this path of figuring out the best way of enabling Windows workloads within OpenShift.

Our initial target was to start very simple. We wanted to hit just the basics, just enough so that we could say we're getting our feet wet in this whole new environment. So the first thing is: you take a Windows Server instance that's running somewhere, and then you say, I want to add this to my OpenShift cluster. This is the model we call bring-your-own-host, which means the customer is basically in charge of creating this Windows VM. They'll create the Windows VM, they'll install the container runtime in it, they'll typically attach it to the cluster VPC or the cluster network, and then we can add it to the cluster. And to start things off, we decided to only support process isolation, because if you want to do Hyper-V isolation, you need virtualization support, and in many clouds that means you need nested virtualization. We did not want to go down that path yet, so to kick things off we are only going to deal with process isolation in the short term. And then of course, once you have the node attached to an OpenShift cluster, we want to deploy a workload to the Windows worker node. And the way we do this: well, how do you differentiate between nodes, right? You have Linux workers, you have Windows workers.
What we do is, when we bring up the node, we apply a taint to that node, and then we tell people who want to run a pod or a deployment on that node: in your pod spec, you say that you can tolerate this taint, so that you get targeted scheduling onto those Windows nodes only. So you don't have this confusion of a Linux workload trying to land on a Windows node and dying. The other key difference here is you cannot use the default OpenShift SDN network type if you want to run Windows workloads in that environment, because in that case we are actually carving out a piece of the network for Windows communication. And this was done not just by Red Hat; we have a partnership with Microsoft, so the OpenShift SDN team worked closely with Microsoft to get all these pieces in place. Where the Windows container team comes in is that we don't really develop the networking side, we just glue those pieces together. And then of course you want to be able to route traffic between pods, both between Linux and Windows pods and between Windows and Windows pods, and of course route traffic to application users who actually want to use the service that's running on the Windows node.

So this is our architecture. Since we wanted to start really simple, what we decided to do is write an Ansible playbook that drives the preparation of this Windows VM. The way it does that is: there's a bunch of binaries that are needed for this process. It's going to go and pull those binaries from a release location, copy those binaries to the Windows VM, and then launch them in whatever manner is required. One of these binaries that gets copied over is what we call the Windows Machine Config Bootstrapper. It's a one-time-use binary, in the sense that it's not a daemon or something that runs in the background; you execute it once, it comes up, it does some configuration, and then it goes away. And what this model allows us to do is, you can look at this and say, okay, I can just take that playbook and convert it into an operator if I want to in the future. But for the sake of simplicity, and because there are so many unknowns in this space, we decided to take this approach.

So let's now talk a little bit about some of those pieces. One of these pieces is the Windows Machine Config Bootstrapper. This is the binary that the playbook copies onto the Windows node, and then it does some stuff. So what does it do? It's going to take the worker Ignition. The worker Ignition, you can think of it as a packaged file that contains some information pertinent to that cluster. One of the things we extract out of that worker Ignition package is the kubelet configuration. I do want to call out that we're just doing this for the time being; moving forward, we're going to construct our own kubelet configuration that's very specific to Windows. We also get kubeconfigs from the worker Ignition, and we actually get a couple of them. We get what's called a bootstrap kubeconfig to bootstrap the node, and then we get the main cluster kubeconfig, because some other pieces of software that we have inside that node depend on those kubeconfigs. And of course we need certs, so we pull those out too. And then what the WMCB does is it configures and runs the kubelet as a Windows service.
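Just to make "runs the kubelet as a Windows service" a little more concrete, here's a minimal sketch of what registering a binary as a Windows service looks like. The WMCB does this programmatically; the paths, the service name, and the exact set of flags here are made up for illustration (though the kubelet does have --windows-service and --register-with-taints flags):

```
REM Register the kubelet binary as a Windows service (illustrative paths and flags only).
sc.exe create kubelet start= auto binPath= "C:\k\kubelet.exe --windows-service --config=C:\k\kubelet.conf --bootstrap-kubeconfig=C:\k\bootstrap-kubeconfig --kubeconfig=C:\k\kubeconfig --register-with-taints=os=Windows:NoSchedule"

REM Start it and check that it came up.
sc.exe start kubelet
sc.exe query kubelet
```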
Configuring and running the kubelet as a Windows service is sort of the first step in saying, hey, I'm now joining the cluster and you can start deploying applications to me. And as part of this configuration, we tell the kubelet: apply these taints to the node you're going to bring up. If you remember, a few slides back I said we apply those taints because it's the taint that's going to tell us, okay, this is a Windows node. So when you write your pod spec or your deployment spec, you're going to say, hey, you should be able to tolerate this taint, and that will cause your application to land on the right node. The other piece of work that the Windows Machine Config Bootstrapper does is configure the Container Network Interface, the CNI, and that's required for networking.

How all of this is tied together is the Windows scale-up playbook. Like I mentioned before, it downloads all the required binaries from the release location, the WMCB among them. The way the Ansible playbook is written, part of it executes on the Ansible host, mainly to download some of these binaries, but part of it actually runs on the Windows VM. Things like getting the worker Ignition you can't do from outside the cluster; you have to do it from something within the cluster, so we grab the worker Ignition file on the Windows node itself. We also pull the kubelet from an upstream location and copy that over. And then we also copy this binary called the hybrid overlay. The hybrid overlay is what configures the plumbing that's needed for networking to work for Windows within that cluster. This is a piece of software that was actually written by someone from Microsoft, so shout out to Chaslin, who helped us a lot here. And this was done in conjunction with folks from our OpenShift SDN team, so shout out to Dan Williams and Jacob for helping out with this. Once the hybrid overlay runs, it means the basic HNS, the Host Networking Service piece in Windows, is ready, and you have your OpenShift networks created within that Windows node. At that moment we can go to the next step of asking the kubelet to be configured to work with CNI, which I'll cover right after this. The other binary that we download is kube-proxy; I'll talk about that in the next slide. And of course, when I was talking about network configuration, there's the CNI package that's needed.

So the playbook collects all these files and then copies them over to the Windows node. Once the files are copied, it starts to remotely execute them. It first says, okay, hey, WMCB, go configure the kubelet. It then launches the hybrid overlay. The hybrid overlay talks to HNS and does all the network plumbing that's required; as I said, the OpenShift HNS networks are created at this point. And then we have a second step with the WMCB, the Windows Machine Config Bootstrapper, which runs again to configure CNI, the Container Network Interface. For this, we essentially relaunch the kubelet with added options for configuring CNI plugins. And then kube-proxy is run. Kube-proxy exists on the Linux side too, but there it's sort of an optional component; while it's optional with Linux, with Windows it's required. If you do not run kube-proxy on your Windows node, your application will not be able to talk to the outside world. So say you create a Kubernetes service that's running behind a load balancer.
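That's exactly the shape of the demo workload coming up later, so here's a hedged sketch of what such a service plus a Windows deployment might look like. The image tag is the real Server Core one; the names, port, command, and the exact taint key and value are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: win-webserver              # illustrative name
spec:
  type: LoadBalancer               # exposed through a cloud load balancer
  selector:
    app: win-webserver
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-webserver
  template:
    metadata:
      labels:
        app: win-webserver
    spec:
      nodeSelector:
        kubernetes.io/os: windows            # only schedule onto Windows nodes
      tolerations:
      - key: "os"                            # matches the taint applied to the Windows node
        value: "Windows"
        effect: "NoSchedule"
      containers:
      - name: win-webserver
        image: mcr.microsoft.com/windows/servercore:ltsc2019
        command: ["powershell.exe", "-Command", "while ($true) { Start-Sleep 10 }"]  # stand-in for the real app
```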
If you want that load balancer to be reachable from the outside world, you need this kube-proxy piece running there, because it's the piece that maintains all the network rules that allow off-cluster communication. The other thing that happens when the WMCB is configuring the kubelet in that first step is that the node starts to generate CSRs, certificate signing requests, and somebody needs to approve those requests so that the cluster can say, okay, you are a valid node and you can join the cluster. So the scale-up playbook also takes care of approving those certs.

All right, so now comes the exciting, and for me a little scary, part of the talk: the demo. I've been praying to the network gods, the demo gods; they have not been happy with me this morning. I was trying this out and the network here is a little flaky. I do have a backup video to show, but it's not that exciting. So I'm going to give it a shot, fingers crossed, and we'll see if it works. And I'm going to sit for this; there's no way I can stand to do this, sorry folks.

All right. So what I did, to make the network's life easier, is bring up an instance on AWS where I'm going to run the playbook. The reason for doing this is the network is so slow: I need to download these binaries and copy them over, and most of the times I tried it over the local network it was timing out. So before we go down this path, let me show you something about the cluster. The cluster that I'm running is the latest and greatest OpenShift nightly, version 4.4. You can see the list of nodes that I have at the moment: we have a bunch of worker and master nodes, no Windows nodes yet. What else can I show you? I also want to show you the hosts file that I'm going to pass to the Ansible playbook. This basically lists the address of the Windows node and the admin password. You're free to copy the admin password, because I'm going to kill this whole cluster the minute this talk is done.

So we have all those pieces ready, and the next step is going to be running this playbook. So let's go ahead and do that. What I have here is: I'm passing the hosts file to the main playbook, which we've called main.yaml, and it has a list of tasks in there. One of the things we pass to it is the location of the WMCB that we want to use. There are two ways this playbook can run. One is, if you have Go configured, you can execute the playbook in such a way that it will actually pull the Windows Machine Config repository, do the build for you, and then run. But we don't want all customers to have to do that, so you can also pass a release location, and it'll pull the binary down and use that instead. So let's kick off this run. And as you can see, one of the things it's doing now is downloading the binary. This is going to take, I don't know, hopefully five or ten minutes. So what I'm going to do in the meanwhile is keep going with my presentation, and then we can come back to this.

All right, so how do I get back to my presentation? Here it is. I also should talk about what I'm trying to show in the demo, right? I'm running an OpenShift 4.4 cluster, like I showed you. It's configured with OVN hybrid networking. I already have a Windows Server 2019 instance running on AWS, and that's the hosts file I showed you; it had information about that particular instance, and that instance was connected to the cluster's VPC.
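For reference, the hosts file and the playbook invocation look roughly like this. The address, credentials, group name, and connection settings are placeholders, and the exact playbook path and variable names differ in the real repo:

```
# hosts -- Ansible inventory for the Windows instance (all values are placeholders)
[win]
10.0.32.15 ansible_password='<windows-admin-password>'

[win:vars]
ansible_user=Administrator
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
```

And then the run itself is just something along the lines of ansible-playbook -v -i hosts main.yaml, with an extra variable pointing at the WMCB release you want it to download instead of building from source.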
Now, in the background, what we've been seeing is that instance being added to the cluster by running the WSU, the Windows scale-up, playbook. Hopefully it'll succeed, and if it succeeds, we'll get to play some Pac-Man. So it's a short and simple demo, hopefully short and simple, and hopefully it'll work. And if the demo doesn't work, I do have a link to a video that'll show you the whole thing, but that's not as exciting.

While the demo is running, I do want to talk about what the team is thinking about for next steps. We do want to move to an operator model. We want to move away from just running the Ansible playbook, and we want to make it much easier for customers to get to running Windows workloads. We do not want them to go through the pain of creating a Windows VM and installing the required runtime in it; we want to take over most of those pieces, and the easiest way to do that is to move to the operator model. It will streamline that whole process for us. At the same time, what we have at the moment is just basic functionality as far as running Windows workloads goes, as I described before. So in addition to that: OpenShift has amazing logging functionality, and we want to plug into that so that Windows and Linux are on the same plane as far as OpenShift goes. The same goes for monitoring. We have a monitoring framework that's part of the cluster, and we want to plug into that so that no one can say, oh, I'm not able to get certain features that are present for Linux but not for Windows. The other thing we want to do is container-native storage; we want to add support for that. And the way we want to do that is: Red Hat has a container-native storage team that takes care of storage. They would do some of those pieces, we would be the glue, and hopefully we'll get some traction from either Microsoft or somewhere upstream to help with some of the bits in Windows that are needed here.

So this is what the operator architecture is going to look like as we move forward. We're going to have what we'll most likely call the Windows Machine Config Operator, and that operator is going to watch a couple of resources. One: we will create a Windows operator CRD. A CRD is a custom resource definition; it's a way to extend the Kubernetes API for different applications. So we will create our own CRD, which we'll call Windows operator, and at the moment we think it's basically just going to describe the release binaries that need to be used, the ones you need to go and pull from external sources. Then it's also going to watch for machine objects. Machine objects are something very OpenShift-specific; a machine is an object that describes worker nodes and master nodes. We would add something to that machine object that indicates this is a Windows worker, and then the operator is going to sit and watch for that particular object to see whether it has been created or not. The reason for doing that is, the minute that object gets created, we know a Windows VM has appeared on the cluster and it wants to join the cluster.
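To give a rough idea of the kind of object the operator would watch for, here's a hedged sketch of an OpenShift machine object carrying some marker that says "this is a Windows worker." The label name and the rest of the details here are illustrative, since this part of the design isn't settled yet:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  name: windows-worker-us-east-1a-abc12      # illustrative name
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/os-id: Windows      # hypothetical marker for "this is a Windows worker"
spec:
  providerSpec:
    value: {}   # cloud-specific details (AMI, instance type, subnet, credentials) would go here
```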
In addition to the object being created, it's also going to give us a way to get the admin password of that Windows VM, because it's not that easy to use keys to get onto a Windows node. So we'll most likely do some further work on the machine API that's part of OpenShift to give us access to that password as a secret, in Kubernetes terms a secret CR. And so we'll use that secret, and then we're pretty much going to do exactly what we did using the Ansible playbook: we'll copy the binaries over and run the WMCB. This is why I was saying that if you look at the original picture, you could take that Ansible playbook, drop this operator in its place, and everything should still work. So that's the model we're shooting for.

I have actually rushed through my presentation, so let's go back to the demo and see if it's still working. Awesome, it did not work. The place where it failed is that it was trying to launch the hybrid overlay, and for some reason there was an issue there. So let me quickly run through what would have happened with the demo. What I'm showing here is the cluster version, like I showed before, and the list of nodes. And actually, I did miss showing you one thing earlier: this is the network configuration, and you can see the type is OVN-Kubernetes. We also have to change the network operator configuration to make it a hybrid OVN-Kubernetes setup, and this needs to be done up front, before adding the Windows node. These are the nodes that already exist. This, again, like I mentioned before, is the hosts file that you pass to the Ansible playbook. And again, we kick off and run the Ansible playbook here.

So I'll quickly go through what the playbook is doing. Like I mentioned before, it downloads the WMCB from a release location. It then also picks up the Kubernetes node binaries package, and from within that it takes kubelet.exe and downloads it locally. It also grabs the kube-proxy binary that's required, and then it goes and fetches the CNI plugins; that's basically a tar.gz file which is then unpacked and moved over. And then it starts to copy these required files onto the Windows node. That's the thing that takes a bunch of time, so I can speed that up a little. Here it's trying to get the Ignition file, and once it gets the Ignition file, it also needs to figure out what Kubernetes version the cluster is using so that it pulls the right kubelet binary. Once it finishes doing that, it's going to pull the hybrid overlay and make sure the hybrid overlay's SHA matches, and then the first step is to run the bootstrapper. Once the first bootstrapper operation is done by the WMCB, a CSR gets generated. So what the playbook does at that point is wait for some time for the CSR to show up in the cluster, and the minute it finds that the CSR is present, it approves it. Then it waits for the next step: once the bootstrap CSR has been approved, another CSR gets generated, which is the node CSR, and it approves that too. These are key steps for authenticating that this node is allowed to join the cluster. Once the CSRs have been approved, the first part of bootstrapping this Windows instance is done, and the rest of the work that happens beyond this point is mainly networking. So as you can see here, the bootstrap step has completed and the CSRs have been approved.
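By the way, the CSR approval the playbook automates is the same thing you'd otherwise do by hand with oc, roughly like this; the actual csr-xxxxx names are whatever shows up as Pending on your cluster:

```
# Watch for the pending bootstrap and node CSRs coming from the new Windows node
oc get csr

# Approve them by name
oc adm certificate approve csr-xxxxx
```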
Now it starts on the networking pieces. The first thing it does is check whether the hybrid overlay is already running. The reason it does this is to ensure that the playbook can be run over and over again against the same Windows instance without any issues. Once the hybrid overlay is up and running, you can start doing the CNI configuration. The CNI configuration actually uses some of the annotations that the hybrid overlay applies to the node object once it finishes running. So it takes some information from there, creates the CNI configuration, and then it reruns the WMCB with a different set of options, and those options are what apply the CNI command-line parameters to the kubelet. The kubelet then starts to run in its final state as a service, and once that's done, the next step is to configure the kube-proxy that I spoke about, so that external network communication can happen. Once kube-proxy is configured as a Windows service, that's pretty much all the work that the WMCB has to do, and you will see that the Windows node has been added. If you describe the node, you will see that it says the OS image being used is Windows Server 2019, the kubelet has started, the kube-proxy has started, and at this point you can actually put Windows workloads onto this instance.

So the Windows workload that I was hoping to show would look something like the following. If you look at this YAML, we have a service that is basically a regular Kubernetes service; there's no difference here if you compare it to a Linux service. The key difference is going to be on the deployment side. As you can see here, the image being used is a Windows Server Core image, and this is the tag I was talking about before: you have to specify an ltsc2019-style tag, otherwise things won't work. This is the command that's going to be passed to this Windows Server Core instance. And the other thing I was talking to you about is the toleration that needs to be specified: we say that the key we're looking for is os and the value is Windows, so that's a toleration that maps to the taint we have applied to the node. Similarly, we also have a node selector here where we specify the OS type. At the end of this, if we were able to launch this deployment, you would be able to access this Windows service from an external source.

So that's pretty much all I have. I was hoping to show more if everything actually came up, but unfortunately it did not; not the cluster, but the node. So let's go back to my slides. I can try again; in fact, I can just run it in the background while I open up the floor for questions.

All right, so while this is running: this is not GA. This is not even available as tech preview at the moment, or dev preview. This is something the team is playing around with and working out. We might pass it on to certain solution architects to see if they can show this off to customers, but we're not planning on releasing this anytime soon. More work needs to be done to stabilize it, make it more feature complete, and make it on par with Linux workloads. So we're not ready to GA this. This is, I don't know what to call it, a very early alpha preview type of thing; we wanted to get a head start on this. So at this point I think the floor is open to questions, if anyone has any.

So the question was: what about the licensing for Windows?
So, if you remember, very early on we said this is going to be a bring-your-own-host model. So the licensing is on the customer: if they have a Windows license they can use it, or they pay AWS or Azure for those licenses. We don't want to be taking ownership of paying for those licenses either. So it's mostly on the customer side as far as things like licensing go.

Sure. So that's an area we have not even touched so far. At the moment, whatever defaults the container runtime is using, we're just going with. But that's another piece we need to work out with the OpenShift storage team. And I did forget to mention something about container runtimes: at the moment we're using Docker, because you can find instance AMIs on Amazon, provided by Microsoft, that have the Docker runtime in them. But the plan upstream is to go with containerd, CRI-containerd, as the runtime. And what Microsoft is saying is that in the whole Kubernetes space they're going to go with CRI-containerd and they will only support Hyper-V isolation. So we do have to think about what we're going to do at that point; we might have to use some of the i3 instances on AWS that provide nested virtualization. There are these Nitro instances where Amazon uses their own version of the hypervisor to allow nested virtualization to happen in a more performant manner. So we do have to think about those things.

Yeah, so this is a problem you're seeing: it's looking for a CSR and not finding one. Since I've already run the playbook once, the CSR has already been approved, so hopefully it'll just ignore it. I think I'm using a version of our playbook that will ignore the CSR. Any more questions?

Okay, so the question was: what are you going to do in an offline environment, right? Because we're downloading these binaries from release locations, what are we going to do in that scenario? The answer is, when we go to the operator model, we're going to have a source of truth within the cluster itself, because we don't want to keep using different binaries that haven't been certified and things like that. So most of the binaries that we require will come from within the cluster itself. That way you don't have to reach outside and pull these things; they'll all be packaged as part of the operator, so when you run the operator you basically have them.

All right, so the question was: is this OpenShift-specific, or are we planning to make this part of upstream? If the question is about the playbook and things like that, that's very OpenShift-specific. We are only planning to use it against OpenShift clusters, and it's tuned to make things easy in an OpenShift way. But there's nothing preventing us from modifying that playbook to make it upstream-friendly too; we're not doing anything funky or special there. This could easily be adapted for upstream use cases. The thing with upstream use cases is we'd need to figure out what the networking model is going to be there. It could be anything, right? Here we're being a little prescriptive, saying we will only support a certain networking model that we're familiar with. So as we start supporting different networking models, that will become an easier thing for us to support upstream. Any more questions?

The other thing, let me quickly go back to my slides. So actually, the piece that I was saying might be a problem is not a problem.
I was using the latest and greatest version of the playbook, so it actually figures out that, okay, this is a node that we have already approved, and it ignores that whole process and keeps going. But I think the place it's going to fail is the hybrid overlay, which means there's some networking funkiness happening on the cluster, which is what I was running into earlier in the morning. So I'm not so confident that it'll work, but we'll see. But I am trying to get back to my slides; I have a couple more things that I could show in this time. Was there somebody? Yeah, sure.

Oh, that's a good question. I actually haven't thought about OKD, but I would say that if you're able to do this against OCP, you should be able to do it against OKD. I can't think of anything that would prevent us from being able to use this with OKD. So it should be there.

All right, let me see if I can get back to my slides. I have actually lost my slides. No, I'm still running Fedora: it's one thing to say I'm working on Windows, it's another thing to say I'm going to run Windows on my laptop. I just wanted to show, yeah, these couple of links that I've put out there. One is for our repo, which is currently called the Windows Machine Config Operator. It'll soon be renamed to the Windows Machine Config Bootstrapper, and we're going to start afresh when we go down the operator model. SIG Windows is the special interest group in Kubernetes for upstream work, so if any of you folks are interested in contributing in this space, please join SIG Windows. They have a channel on the Kubernetes Slack, and there are also weekly meetings where people discuss the work that's happening in this area. Really useful; I highly suggest folks interested in Windows workloads go and try it out. And of course I have to put in a little bit of a plug for Red Hat, right? We are hiring, all over the world, and in particular the Windows container team is hiring in Boston. So if someone wants to work out of Boston, come look me up afterwards and I'm willing to take your resume. So that was my plug.

If I have a little bit more time, I had a couple of slides that I thought wouldn't make it in. And also, yeah, so it failed at the same spot, trying to configure the hybrid overlay. So maybe I did something wrong, or there's some networking glitch happening at the moment. Sorry folks that I couldn't show you a fully live demo.

So Rancher has come out with a release, and so has AWS. It's all, again, just like us, very early; they're not putting any support statements out there. So it's very early in the game, and that's why I think we're also getting involved as quickly as we can, because we want to get up to speed before the rest of the industry catches up. So the question was: is Microsoft doing this on Azure? No, not yet at least. They might have plans to release it, but not yet.

I was going to show some more slides, but I think I'm slowly running out of time. So yeah, there are five minutes; I could maybe show those slides. They're more of historical interest and I had actually hidden them. This is just to show you the kind of evolution that's been happening in the Windows Server space. Microsoft started with Windows Server 2016, where they started to introduce process isolation and Hyper-V isolation. They had a built-in Docker for Windows Server.
They slowly moved on, and in the next step, which I think was 1709, they came out with the container images for Nano Server and Server Core. They also had platform-level support for Linux containers, so you can theoretically run Linux containers on a Windows node, but that again actually happens inside a VM. That is something we're not really looking at, at least within OpenShift; if someone wants to run a Linux workload, you just run it on a Linux worker node. By the time they got to Server 2019, they had more of the networking pieces in place and more of the container storage pieces in place. They also added support for some of the networking plugins like Calico and Flannel. In fact, if you go look upstream, you'll see that all the networking stuff is done through Flannel, but we decided to go the hybrid OVN route. If you want technical details and more architecture around the networking stuff, you should ping the SDN team.

So while this evolution was happening on the Windows Server side, the same sort of evolution was happening in the Kubernetes area too. I know this infographic throws a lot of information out there; it's more to show you the kind of stuff they were doing over that timeframe. This stops, I think, at 1.13, and we're at 1.18 or 1.19 now. But they've been working on this almost since the 1.6 or 1.8 timeframe. So Microsoft has been pretty hard at work trying to enter this whole container and Kubernetes orchestration space for a while.

All right, so that's pretty much all I had. I again apologize for not being able to show a live demo; I think I asked for a little too much, trying to add a node live. But hey, maybe it found a bug in our code, so it's all good. All right, nothing else, thank you folks.