Good morning, good afternoon, good evening, and welcome to a special edition of Red Hat Live Streaming. I am Chris Short. I host most of the Red Hat live streams. I'm joined by two fellow Red Hatters here, Anand and Sebastian. Anand, would you like to kick things off and let people know what we're talking about today? Sure. Thanks, Chris. Good morning, good afternoon, good evening, everyone. My name is Anand Chandramohan, Product Manager with Red Hat. And in this session, I'm glad to talk to you about bringing your own Windows nodes into OpenShift. Sebastian? Awesome. Hey, I'm Sebastian. I'm an engineer with OpenShift; I've been working on this project for a while and I'm excited to show it off. Yeah. So, full disclosure, we had the Ask an OpenShift Admin show with Christian Hernandez and Andrew Sullivan this morning, talking about Windows nodes and how you can bring them on board now. It was a good show, but we didn't actually get through everything because it's an office hour, so we got some questions that made us go down some different rabbit holes. So, I'm excited to have you both on now to dive into it a little deeper. Absolutely. So, let me share my screen, Chris. Is it okay? First try. Yeah, please do. Okay, perfect. I just have a bunch of background information to cover and then I'll hand it to Sebastian for a demo. Sounds good. So, what I wanted to talk about in the next 45 minutes or so, before we leave some time for questions, is an introduction to BYOH; that's the abbreviation for bring your own hosts. Specifically, in this case, we're talking about bringing your own Windows hosts. So, it's not bring your own RHEL hosts or bring your own AIX hosts. It's specific to Windows Server hosts that may be running on premises that you might want to onboard to OpenShift, so you can unlock their full computing capacity. And Chris, as you mentioned, the BYOH demo that Christian Hernandez and Andrew Sullivan did this morning was on vSphere specifically, I believe.
And in this demo, we'll be focused a little bit more on the cloud side, although conceptually, the way you install the operator or the way you onboard nodes shouldn't really change. I just wanted to give users a pulse of how things work on the cloud side. So, in this presentation, we'll be doing a demo of BYOH on AWS and Azure. And then, we'll obviously point you to resources with which you can get started. Awesome. And so, this is just a brief background; this is just the OpenShift architecture. OpenShift runs on a wide variety of platforms: physical hosts, virtual hosts, private cloud platforms, public cloud platforms, edge platforms and whatnot. Managed services, obviously. And the dominant operating system of choice for OpenShift continues to be RHEL and RHEL CoreOS. So, if you want to run your Linux workloads, RHEL and RHEL CoreOS are the most popular operating systems powering OpenShift. But on the Windows side, we see a lot of customers saying, hey, they have these Windows nodes, they have these Windows applications that they need to modernize, that they need to bring to a cloud-native world. And they've been asking us to support Windows worker nodes in OpenShift. And so, with this architecture, there's just a small addition: we added support for Windows worker nodes in OpenShift. So, it's still the same control plane, it's still the same OpenShift. But now, you can co-locate and co-run Windows worker nodes along with RHEL worker nodes. They can communicate with each other, amongst each other, and with the external world and whatnot. And the rest of the stack doesn't really change. I mean, OpenShift doesn't change, it's still fully Kubernetes; all the cluster services, platform services, there's really no change to those building blocks.
And specifically, here's how BYOH, bring your own Windows hosts, works in OpenShift. You have, let's say, an OpenShift cluster; a default install has three control plane nodes and three worker nodes. And using something called machine sets, the Machine API, you can onboard Windows nodes. So, we GA'd the first version of the WMCO operator sometime last December, I believe, and that was specific to managing Windows instances using the Machine API, using machine sets. So, if you had a bunch of Windows instances on AWS and Azure, and you wanted to onboard them to OpenShift, you could have used machine sets to onboard them. But we talked to our customers and they said, hey, this is great, it runs great in the cloud, but really all my Windows applications are running on prem, right? We've talked to customers in the automotive industry, in the healthcare industry, in the financial services industry, and all these customers had one ask: hey, I have Windows applications running on Windows Server, predominantly on prem, on vSphere, bare metal and OpenStack; help me find a way to move these workloads to OpenShift. And that's when we said, oh, we need to help these customers who are stuck on prem and get those nodes managed by the same control plane as well. So, we introduced the offering called BYOH, which, like Sebastian said, we've been working on for a while now, and it's soon to be GA in the next week or so. And with that new offering, you'll be able to onboard your on-prem Windows Server instances, be they running on bare metal or OpenStack or Red Hat Virtualization, or vSphere, which is very popular, onto an OpenShift cluster that's also running RHEL worker nodes, or for that matter, even other Windows nodes managed by machine sets. And this is an important distinguishing factor that I'd like to drive home. Yeah, I love this aspect. Yeah, yeah, yeah.
So, the famous animal farm analogy: pets versus cattle, right? Pets are these small dedicated servers that you have in your data center on premises that you take care of very well, right? They maybe have static IP addresses, maybe dedicated host names, you have custom automations running on them, and you only have a few of them, but you really take care of them because they're important to running the enterprise. And the other side is cattle, right? Cattle are instances that you run your workloads on, but you don't really care about them, in the sense that you can blow one up and bring another online pretty easily. So these instances are ephemeral, usually cloud instances that you can just throw away and bring back when you need, or even just create a new one with a different IP address, and you don't care, right? And so in this context, the BYOH feature is about your pet Windows instances, right, that are lying in your private data center, on premises, wherever; that you have custom automations on, that you do custom patching on, that you have custom management software written against. And there are only a few of them, right? You don't manage 2,000 dogs or 2,000 cats in your house; you have a few of them, and that's why you'd like to onboard them and take care of them alongside the cattle. So I'd like to drive home the point that this is a way of onboarding pet Windows Server instances onto OpenShift. We already support the cattle approach of having a herd of Windows Server instances managed by machine sets.
It's already in the product, it's GA, it's been out for almost a year. And so that's one distinguishing factor I'd like to call out, and with that I'd like to hand it to Sebastian to talk us through the prereqs and then do a demo. Yeah, thanks. So just to cap that point off with pets versus cattle: the workflow for adding a BYOH Windows node is a bit different from adding the cattle version with machines. So to add this pet, to add a BYOH node, we have a few prerequisites to ensure it can be added smoothly to the cluster. First off, on the cluster side, we need to make sure we're running OpenShift 4.8; BYOH is a new feature in 4.8. And with 4.9, 4.10, and subsequent releases of OpenShift, it will also be supported; we'll have new releases of WMCO for each version of OpenShift. We need to make sure that the SDN being used is OVN-Kubernetes, specifically with hybrid overlay networking. This is needed in order for the Linux workers and the Windows workers to communicate, and it needs to be enabled at installation, so make sure that's happening. On the Windows side: Windows Server 2019, the LTSC release, and also the Windows Server SAC build 2004. And in the future, more versions. But right now, it's specifically Windows Server 2019 for AWS and Azure, as well as 2004. For vSphere, though, you cannot use Windows Server 2019 LTSC; you should be using 2004 at this point, because certain patches hadn't made it back to the LTSC. So you're going to need the newer version of Windows for that. We need to make sure that the BYOH instance you're creating and trying to add is on the same network as the Linux worker nodes in the cluster. WMCO needs to be able to SSH into the Windows node. So you need to make sure the SSH daemon is running.
OpenSSH, in most cases, is going to be set up and installed, with port 22 open. There needs to be an authorized key entry on the Windows node matching the private key you've given to WMCO, right? We're not using passwords here; WMCO is given a private key, and it needs to be able to connect to the Windows node with that key. And there's a full list of these prerequisites in GitHub and in the OpenShift docs, if not now, then before general availability. And yeah, that's it for the prereqs. Just make sure you follow the docs and there hopefully won't be too many issues. Next slide. Yeah, so with those supported Windows versions, vSphere is kind of the outlier here; you need to use the newer version, 2004, at this point. For AWS and Azure, you can use the LTSC. You can go ahead, and I think, Anand, you can hand it to me for the demo session. Awesome. Yep, I will present. Awesome. And folks, if you're watching out there, feel free to ask your questions; we can get them answered. No problem. Okay. Terminal. All right. Cool. So I've got an OpenShift cluster up and running right now. Just a fresh cluster: masters, worker nodes, Linux, everything. And let's see how we can add a Windows server to this. I'll be showing off AWS. Let me just get this. Okay. All right. So in the AWS console, right, we have our workers and our masters. I've added a few Windows nodes already, but I'll just run through the process of how you would do it. So let's find the Windows containers AMI; it needs to have Docker installed. And let me just go over the prerequisites again real quick: Docker must be installed at this point. The instance must be in the same network. An SSH server must be running. Network rules must be in place. The authorized SSH key must be set up for the administrator user. And the host name has to be lowercase. Okay. So select Windows, and we'll use an m4.large for this demo.
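One quick note on the hybrid networking prerequisite from earlier: it has to be enabled when the cluster is installed, via an extra manifest along these lines (the CIDR values here are examples; pick ranges that fit your cluster network):

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        # example CIDR for the Windows (hybrid) side of the overlay
        - cidr: 10.132.0.0/14
          hostPrefix: 23
```

This goes in with the installer manifests before the cluster is created; as noted above, it is not something you turn on after the fact.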
You might want to go bigger depending on what you're doing. You know, find my cluster's VPC, subnet automatic. And here, for the user data, to make things easy... that might be hard to read here. Do you go for these larger instance sizes, Sebastian, because the Windows images are usually bigger, right? Yeah. And for the user data, right, what I'm doing here is just having SSH preset up. So: install the OpenSSH server, enable SSH, open the container logs port, 10250, start the OpenSSH service, and then add an authorized key to the authorized keys file. Just basic stuff to get SSH working, as well as opening up the logs port that's required, right? So I can add storage and tags and then launch it, but I've already launched it, so I'm not going to do that. Sebastian, one quick question on the AWS side. I'm more of an Azure user, and on Azure you have these Windows Server virtual machine images with the Docker runtime pre-installed. Is that something not available in AWS? Just want to know if there's a benefit to, let's say, using Azure for running Windows versus AWS. Yeah, no, those images are available, right? So if you go to the AMIs and search for Windows, you can see the ones with containers. Got it. Okay. The naming is a little weird, but they're up there. Okay. The ones with containers, but don't use 2022. Sure. Right. Okay. So now that we've got our VM set up and SSH is enabled, make sure you go and change the host name to lowercase; I think uppercase is the default in AWS. We can look at ConfigMap objects... oh, sorry, I've already installed WMCO onto the cluster, so I don't need to walk through that. I mean, folks, if you wanted us to walk through it, let us know. If not, we'll just push forward. Go ahead.
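As an aside, user data along these lines does what Sebastian describes: install the OpenSSH server, open the container logs port, and seed the authorized keys file. The public key here is a placeholder (WMCO must hold the matching private key), and you should adapt the exact script from the official docs:

```powershell
<powershell>
# Install and start the OpenSSH server so WMCO can connect
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
Set-Service -Name sshd -StartupType Automatic
Start-Service sshd

# Open the container logs port (10250) used by the kubelet
New-NetFirewallRule -DisplayName "ContainerLogsPort" -Direction Inbound `
    -Action Allow -Protocol TCP -LocalPort 10250

# Authorize the key WMCO will use (placeholder public key)
$keyFile = "$env:ProgramData\ssh\administrators_authorized_keys"
Add-Content -Force -Path $keyFile -Value "ssh-rsa AAAA... wmco-key"
icacls.exe $keyFile /inheritance:r /grant "Administrators:F" /grant "SYSTEM:F"
</powershell>
```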
We'll just push forward for now. Yeah. So all you've got to do: go to OperatorHub in the cluster management page, click on WMCO, the Windows Machine Config Operator, and hit install. All right. Then you're going to want to create a cloud-private-key secret. So this is a secret that has, as its data, under the key private-key.pem, your private key. I'm not going to share that; I don't want to give away my private key. You're just giving WMCO access to a private key so it can use that to configure machines. And, sorry, not your personal private key; you should probably use a unique one for this. But yeah. So once WMCO has that, we can look at this ConfigMap object, and this ConfigMap is kind of the API for BYOH for Windows. So what we have here: it's called windows-instances, it's going to be in the WMCO namespace, and in its data, the key is going to be the IP or the DNS address of the VM you want to add. And then the value is going to be this key-value pair of username= and then whatever the username of your administrator user is. So in this case, it's Administrator. All right. So we just create that ConfigMap and WMCO gets going. We'll just have to wait five minutes or so to see the node object actually appear, but we can kind of see it going. So it's trying to connect, and then maybe after one to three minutes, it'll make its way in over SSH and then do everything it needs to do in order to configure the node. And we can just look through the logs of me doing this previously. You know, it configures the kubelet and everything. And okay, here it goes. It's copying all the files, configures the kubelet, and does everything it needs to do to make sure your VM joins as a node.
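For reference, the secret he's describing can be created with `oc create secret generic cloud-private-key --from-file=private-key.pem=<key path> -n openshift-windows-machine-config-operator`, and the windows-instances ConfigMap takes this shape (the address and username below are examples):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: windows-instances
  namespace: openshift-windows-machine-config-operator
data:
  # key: IP or DNS address of the instance; value: its admin username
  10.1.42.104: |-
    username=Administrator
```

WMCO watches this ConfigMap and configures each listed instance as a node.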
Now, that'll pop up as a node in a minute or so. But walking through how, okay, let's say you want to take your node down in order to, say, update the underlying operating system, which, by the way, is something that WMCO does not do; that's up to you as a user to make sure that happens. So if you want to do that, you just go and either delete the ConfigMap entry or comment it out, and then WMCO will reverse everything it's done. It's going to take away the kubelet, it's going to take away all the Windows services it created and delete all the files it copied over, and you will have something very close to what you started with. I won't do that now, because I want to actually watch it be configured. And then, let's see, oc get nodes. And so we have the BYOH node. It's coming up; it's still being configured, that's why scheduling is disabled, but we can take a look at it. We can see it's a Windows node, so it's Docker, running Windows Server 2019 Datacenter. And that's pretty much the workflow. With machines, with cattle, it's a lot easier, or not easier, but fewer steps. So I'm looking at the version number and I'm seeing a lot more after it than I would normally. Is that indicative of Windows, seeing that 1398? I'm assuming that's a release number of some type. The 1398 on the bottom? No, no, in the kubelet version. Yeah. That's just an artifact of how we're building the kubelet. You'll see some skew there, and this is the community version too, so it might also be a little off. We're building the kubelet specifically for these nodes, not during the process, but when we're shipping WMCO. So we're maintaining it separately from what the Linux nodes are using. Got it. Okay. Cool. Thanks for the clarification.
All right. I'll stop sharing. Awesome. Any questions in the chat, Chris? No, not yet. There is a delay though, so hopefully a couple will come in. But folks, if you do have any questions, please feel free; we have experts here. So, if you're trying to get Windows nodes working in your environment, remember that the Windows Machine Config Operator right now is currently the community edition. That does impact supportability in some cases. Well, in almost all cases, I feel like. You can check me on that, but yeah, just keep that in mind as you're going about using it right now. Is it ready for prod? Maybe. That's kind of a risk assessment you have to make, right? And go from there. But yeah, this is definitely coming to GA soon. Yeah. And I just want to correct myself when I said that there was version skew. I meant that, one, this is an older build from the community operator and not what's going to go out with general availability; but two, you'll see that difference with the -1398 and then the commit. It's just showing up differently because of the way that we're building it. So let me not use the words "version skew," because I feel like that could be interpreted a little differently. Yeah. Got it. Okay. Cool. So, the Windows Machine Config Operator is fascinating to me, first of all, and it's enabled by a lot of changes that have happened due to our work with Microsoft, and just Microsoft in general embracing open source, right? Like installing SSH on a Windows server: that was a completely foreign concept until a few years ago, right? There's been talk about it for a long time, but now it's finally here. What were some of the challenges, I guess, that you had to tackle, for lack of a better term, in making the machine config operator for Windows work? That communication layer, sure, you had to build that up. But what else?
Anything? Yeah. You know, coming from a Linux developer background and understanding how things are done in the Windows world, it's very different from how Linux works. Especially at Red Hat, there aren't many Windows subject matter experts, so we all kind of had to level up our Windows knowledge and find out the best ways to do things. And containerization on Windows is a hugely evolving field. So things were being developed as we were working on the operator, and we're adapting to new things as they come out; like, pretty soon Docker is being phased out of Kubernetes in favor of containerd. So that's going to be a thing. Host process containers, too. Just a lot of new stuff; we have to keep adapting as things come out, and things are coming out at a faster pace than on Linux. And, yeah, you mentioned Docker, and a question just came in: are there any plans to remove the dependency on Docker and deploy containerd directly? I'm assuming we're waiting on Microsoft for that? So, we're ready to start on that. The Docker shim is being removed from Kubernetes in, I believe, 1.24. Yeah. So, we do have a timeline, a time limit. Yeah. The work couldn't start until Microsoft and everyone else working on SIG Windows kind of gave the okay for testing and all that. But we're on our way there. And hopefully users won't see it; for people that have deployed their images using Docker and everything, I don't think the plan is for them to have much of an interruption, or any interruption, switching over. We should make the process easy for them. Right. And to level set everybody: Docker uses runc under the hood. Containerd uses runc under the hood. Runc is OCI compliant.
So, it's an OCI-compliant thing to an OCI-compliant thing. Shouldn't be too abrupt, obviously, unless you're using a version of Windows Server that doesn't support containerd and still needs Docker. I guess, will there ever be a scenario where you envision people having that mixed-mode environment of older Windows nodes? I mean, that's that pets versus cattle kind of scenario. But could you see people having older Windows nodes with Docker running and newer Windows nodes with containerd running? Is that a scenario you've thought of? So, well, yeah. They both use runhcs; runhcs is just the Windows runc, basically. But no, that's not a scenario we're trying to support. Once OpenShift, whatever OpenShift version, lines up with Kubernetes 1.24, that won't even be a possibility, right? So, good point. Yeah. And the way that we're doing things, unless there is an extraordinarily helpful use case for it: whether the user opts in to containerd, or we're saying that you need to and we're opting them in, that'll be across all nodes, right? Because we need to tell Kubernetes, when setting up the node, hey, this is the socket that you should use to connect to the container runtime. Awesome. So, plans are in progress to answer your question, Gargantua. I won't even try to say your full screen name. But Anand, anything to add here? Like, what fun things have you learned on this journey? Yeah, actually, I was going to show a demo on Azure if time permits. Oh, yeah, sure. Yeah, go for it. But before that, just to ride on top of Sebastian's really good responses: a couple of fun things that made it worthwhile involved a lot of working with Microsoft and VMware upstream.
One of the reasons why, when Sebastian showed the table of supported versions, you see the long-term LTSC version of Windows Server 2019 supported for AWS and Azure, but the SAC version for vSphere, right? The reason is that we ran into a VXLAN bug on vSphere, and we took it to Microsoft, and Microsoft said, sure, we can put the fix in the SAC; putting a fix into an LTSC is an 18-month process for them, right? And that's why you see that we support only the SAC version on vSphere. So a lot of fun things along the way, where you're dependent on the underlying platforms we run on, like vSphere, and we're dependent on upstream contributions from Microsoft to Kubernetes, or on them providing any OS-specific fixes to Windows. So a lot of collaboration there. And the other fun thing is, once we say Windows containers are supported in OpenShift, customers jump to it and say, hey, do you support serverless on Windows nodes? Oh, even, yeah, wow, good point. Yeah, do you support service meshes? Do you support all the cluster services, like Quay and whatnot, right? Storage, yeah, everything. Storage, exactly. So you've seen the CNCF landscape, right? Oh, it's impressive. But I would say maybe less than 10%, maybe 5%, maybe 2% of it applies to Windows, right? So customers like the fact that we're supporting it, but immediately they have very custom needs: hey, can I manage a service mesh deployment on top of Windows nodes? Can I store Windows images in Quay? Can I use Red Hat ACM, or the new ACS operator, to run validations and whatnot for Windows instances? And the short answer is no.
I mean, we're working very hard to tighten up the cluster, to make sure that all the essential services, like monitoring, logging, security, and storage, are in place before we bake in the top-of-the-stack services like service mesh and Quay. Awesome. You want to fire up a demo? Sure. Let me share my screen. And so the workflow is going to be pretty similar to Sebastian's AWS demo, except that this is on Azure. So here are the high-level steps I'll follow. First, I'll create a Windows instance on Azure. Think of it as your pet: that dedicated server lying in your data center, with a static IP address and maybe a nice hostname. So we'll fire up a VM on Azure. Next step, we'll install WMCO. Third step, we'll see how WMCO can use the ConfigMap that Sebastian was showing to onboard the BYOH instance. And finally we'll deploy some workloads and take it for a spin. So you go to Azure and you look for Windows Server images. Like Sebastian said, you could go with either Windows Server 2019, which is the LTSC one, or 2004 with containers. So let's just go with 2019 Datacenter with Containers, which is the LTSC version, and say create. And then Sebastian mentioned there was a prereq, right, which is that the BYOH instance should be on the same cluster network as the other Linux worker nodes, which means they should be able to talk to each other, ping each other and whatnot. So to keep things simple (this OpenShift cluster is also running on Azure, by the way),
I'll just place the Windows instance in the same resource group as the existing cluster, to make sure there are no network problems. So that's the resource group into which OpenShift has been installed, and I'm installing the Windows virtual machine into the same resource group. Any name; let's say, byoh-instance, right? Now, one of the other prereqs is that you cannot use capitalized names; you have to use lowercase names. Hopefully that's not an issue for most customers. There is a bug right now with capitalized names, so my recommendation right now is to just use lowercase names. Then select your config, your image type, your size, set up your password, and make sure that you can RDP into it, because one of the things we have to do once the Windows VM is set up is go in and install OpenSSH, so WMCO can discover it and SSH into it. If you remember Sebastian's demo, the first step WMCO did when it detected the ConfigMap was, it said, initializing an SSH connection, right? So Windows Server needs to have an SSH server running, so WMCO can SSH into it. So you need RDP access, so you can go install the SSH server on it. Yeah, and if I could just add: if you're not just trying this out, if you're going to be doing this long term and you want to add a lot of nodes like this, then instead of doing this manual process of RDPing in or copying that username every time, you can also have your own image that has already had all this done, with the correct SSH key. That's right. And I think, Chris, in today's demo with Christian Hernandez and Andrew Sullivan, they showed a golden image on vSphere, right? If I remember correctly, yes. Yeah. Cool. So next step, I set up my disks; whatever disks you want, I'm just going to go with the default.
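On that lowercase-names point: since a capitalized host name fails onboarding right now, a tiny pre-flight check along these lines can catch it early (the host name here is a made-up example):

```shell
# WMCO currently requires lowercase host names; flag one that needs renaming
host="BYOH-Instance-1"                      # example host name
lower=$(printf '%s' "$host" | tr '[:upper:]' '[:lower:]')
if [ "$host" != "$lower" ]; then
  echo "rename required: $lower"
else
  echo "hostname ok"
fi
```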
Networking: again, like I said, it's going to be the same VNet as the cluster, and I would place it in the worker subnet, not the master subnet, because it's a worker node, really. And then you can pretty much review and create the virtual machine. In my case, I'm not actually going to create it, because I already have it created; this takes about 10 to 15 minutes for a Windows VM. But this is pretty much the workflow. The key things: make sure you use lowercase names, and make sure you place it in the same resource group, VNet, and subnet as the other Linux worker nodes, so communication is straightforward. And once you create the virtual machine, you can go to the resource group, look for the virtual machine, and note the public IP address and also the private IP address. So you note the public IP address, which is the one starting with 23 here. And once you're inside, like I said, you'd have to install the OpenSSH utility, and with that, you would pass in the public key from your local machine, so you can access the machine if you want to. There is a script available for installing OpenSSH; it's called install-openssh.ps1, and it's available in our Git repo. That pretty much sets up SSH, and that's all you have to do on this node. The only other thing you might want to do on this node is, if you're going to run an image that's pretty large, let's say a five-gig or ten-gig image, you might want to pre-pull it, so you don't have WMCO wait or time out, right? Yeah. And I would be careful about using the scripts. Take a look at the scripts, see what they're doing, and then make your own scripts that do the same thing. If it's not in the official docs, it's not supported by us, right?
So don't just rely on that script being there in our repo. Just see what it's doing, and then you can do the same thing. That's right. So now the Windows instance is ready; the pet is ready to be onboarded. So you can now go to the OpenShift console, go to OperatorHub, and look for... yeah, there are two versions of the operator, right? There's the fully supported version, which is GA, that supports only machine sets. And then there's the community version, which is kind of use-at-your-own-risk; this is the operator we use to introduce new features, kind of like a testing ground for any of the new feature work we're doing. In this case, the BYOH feature has been baked into only the community operator. Full support is coming pretty shortly; like I said, GA in a week or so. So click on the community operator, and then you can click install. I'm actually not going to install it, because it's already installed; as you can see, it says that. That's what I was about to say. Yeah. So it's pretty straightforward; it takes a couple of minutes to install. After that, you can go to Installed Operators and see that the community version of the operator, version 3.1, is available. And then the next step is to set up the ConfigMap, right? So you can go to the console and say, create a new ConfigMap. And then, like Sebastian was showing, if you have, let's say, four Windows Server instances, right, like .153, .154, .155, and maybe .156 too, you can set up as many nodes as you want, right? And WMCO will basically take this ConfigMap, take the first IP in the ConfigMap, set that instance up, and then move on to the next one, and so on and so forth. Right.
And like I said, in my case, I already have it set up, so I'm just going to show you a preexisting config map. So this is the Windows instance; I have one server already bootstrapped into OpenShift using this config map. And all you have to do is enter the IP address and username, save it, and you're good to go. Then you can come to the logs of the Windows Machine Config Operator, and if you watch them, you'll pretty much see the same logs Sebastian was showing: the operator tries to SSH into the node, it sets up the kubelet, sets up kube-proxy, sets up OVN. That's how it starts. As you can see, it initializes the SSH connection and starts configuring the node, starts laying everything down. Yeah. Correct. And this is really the secret sauce of the operator: it automates all the steps needed for that Windows node to be onboarded onto the OpenShift cluster. Yeah. And we'll manage all the Kubernetes-specific binaries, right? We're copying over the kubelet, kube-proxy, all the monitoring pieces like the Windows exporter, everything. And then once containerd lands, we'll be shipping containerd too, so hopefully you just won't have to worry about that too much. Indeed. And once it's installed, you can actually go under Compute and into Nodes, and you will see that the BYOH node I added has been set up. It's ready. You can click on it, you can look at events if you want to, but it's good to go, and you can start deploying workloads on it. The other thing you see along with the nodes is that there is also a Windows worker, and this Windows worker is part of a machine set: the Windows worker is a machine, part of a machine set called winworker. And you can scale up this machine set.
It has one machine, but you can scale it up, if you wanted, to three machines or however many machines you want. But the point I wanted to make was: you see that there is a pet instance and a cattle instance, both being managed side by side in the same OCP cluster. Nice. Well, I mean, when would you see that kind of scenario happening, right? Like, are we supporting any specific kind of hardware and Windows right now? Yeah, it's just a hybrid cloud use case, Chris. So let's say a healthcare company is in the process of moving to the cloud, right? Makes sense. So they set up, let's say, a cluster on AWS or Azure, and they start deploying new workloads against those clusters. And then, in the same cluster, since they're on the cloud transformation path, they have existing hardware lying in the data center that they would also like to repurpose, so that their computing power isn't wasted, right? Right. So this is really a hybrid cloud use case, more of a path-to-the-cloud use case, where you're managing both side by side. Obviously, the North Star is that you want everybody off to the cloud; hopefully that happens one day. But until it happens, this is a way you can manage your cloud instances and your on-premises instances in the same control plane. So you see one cloud instance, but like I said, you can go up to the machine set and scale it up to, let's say, three, and obviously each of those takes 10 to 15 minutes to come up. But you'll see the additional two instances coming up, and if you go to Nodes once they are fully provisioned, you'll see them.
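Scaling the cattle side is just a matter of bumping the replica count on the MachineSet. The fragment below is an illustrative sketch; the winworker name follows the demo, and in practice you'd edit or scale the existing MachineSet rather than author one from scratch.

```yaml
# Hypothetical sketch: scaling a Windows MachineSet from one replica to three.
# Equivalent to running something like:
#   oc scale machineset winworker -n openshift-machine-api --replicas=3
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: winworker
  namespace: openshift-machine-api
spec:
  replicas: 3
```

The machine API then provisions the extra Windows instances, which is why the 10-to-15-minute boot time mentioned above applies per machine.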
But these three instances are cattle, because it's very easy for me to nuke them and then reprovision them as needed. I hold no extra love for my cattle. Exactly. Are there plans in the future for GPU support on Windows nodes, that kind of thing? That's something I could see folks being interested in right now. This has come up. I kind of want to say it's not in our short-term backlog, maybe some medium- to long-term backlog, but definitely for running data-intensive workloads, HPC workloads on Windows, this has come up more than once. So we're aware of it, but I would say it's not in a three-to-six-month time horizon. Got it. Okay. Makes sense. Anything else folks should know when using Windows nodes? Yeah, the only other thing I would say is you can start deploying any type of application against it. In my case, I think I'm going to deploy a web server pod. The WMCO taints the node with a key-value pair of os=Windows. So when you're deploying your workloads, make sure the workload has a corresponding toleration of os=Windows. The scheduler will basically look at the pod, look at this toleration, find a node that has the corresponding taint, and then place this Windows pod on the Windows node. If it doesn't have the toleration, it's obviously going to end up on a Linux node, right? So make sure you have this toleration set up; if you don't, and it tries to run on a Linux node, obviously it's going to crash. And you can target nodes in an even more fine-grained fashion; for instance, you can use node affinity or node selectors, and you can use those to say, hey, I want these workloads to land on BYOH instances.
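The toleration plus node-selector pattern described above can be sketched as a pod spec like this. It's an illustrative example, not the demo's exact manifest: the pod name is made up, and the taint key/value match what's described in the conversation.

```yaml
# Hypothetical example: a Windows workload carrying the os=Windows toleration
# for the taint WMCO puts on Windows nodes, plus a nodeSelector so the
# scheduler never even considers Linux nodes.
apiVersion: v1
kind: Pod
metadata:
  name: win-webserver
spec:
  nodeSelector:
    kubernetes.io/os: windows
  tolerations:
  - key: "os"
    value: "Windows"
    effect: "NoSchedule"
  containers:
  - name: web
    image: mcr.microsoft.com/windows/servercore:ltsc2019
```

Without the toleration, the pod can only land on untainted (Linux) nodes, where a Windows image will fail to run, which is exactly the crash scenario mentioned above.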
And I want these workloads to land on my regular machine set instances, right? And once you deploy the workloads, you can go and look; in my case, I have a bunch of Windows applications: I have a chess application, I have a web server application. And if you look at the chess application, you'll see which node it's running on. As you can see, it's running on the BYOH node. Same thing with the web server pod, it's also running on the BYOH instance. And both have been exposed as routes, so if you go to, let's say, the chess app and click on that route, the app loads up. And it's important to note that until Hyper-V isolation is something that both Kubernetes and OpenShift support fully, you need to make sure that the Windows images you're using are meant for your build version of Windows, right? So if you're using a Windows Server version 2004 build, then you need to make sure the container image you're using has a 2004 base image. To say it another way: if you look at that right there, you can see that the image Anand is using is for LTSC 2019, and that lines up with the LTSC 2019 Windows Server he deployed. So if you wanted to deploy on Windows Server 2004, you'd need to make sure you use the Server Core 2004 base container image instead. There are docs on that, and on how you can actually select Windows workloads for specific Windows versions using runtime classes; we have that in the OpenShift docs under Windows container support. So that should make it more clear. But an easy issue to run into is not understanding why your deployment is not working, and then it turns out you're mismatching containers. And that has to do with the containers actually containing an OS, a kernel, inside them; that's why the images are so big.
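The runtime-class approach mentioned above can be sketched like this. This follows the upstream Kubernetes RuntimeClass pattern for Windows; the name, handler, and build number are illustrative placeholders, so check the OpenShift Windows container docs for the values your cluster expects.

```yaml
# Hypothetical sketch: a RuntimeClass that pins workloads to a specific
# Windows build via the windows-build node label. 10.0.17763 corresponds
# to Windows Server 2019 / LTSC 2019; the handler name varies by runtime.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: windows-2019
handler: 'docker'
scheduling:
  nodeSelector:
    kubernetes.io/os: 'windows'
    node.kubernetes.io/windows-build: '10.0.17763'
```

A pod then opts in with `runtimeClassName: windows-2019` in its spec, so an LTSC 2019 image can only be scheduled onto a matching LTSC 2019 node, avoiding the version-mismatch failures described above.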
And that's why you have these matching issues. Interesting. Okay. That's a good point to note. Anything else? I mean, this is really cool, the fact that we're going down this road, right? I feel like, for everybody involved, Microsoft, Red Hat, the whole nine yards, it's pretty wild that we're doing this with Kubernetes so early in Kubernetes' life span, right? Like, oh, we're going to be putting workloads on Windows boxes in our OpenShift clusters. It's fascinating to me. So kudos to y'all for working together and making it work; I'm sure this wasn't easy by any stretch of the imagination. It's been a journey. Yes. All right. Five minutes till the top of the hour, folks. If you've got questions, get them in. But it's been pretty quiet in chat for the past few minutes; I think you've done an excellent job of explaining this. When you're in the console, does the terminal still look the same, do events still look the same, does everything work the same, or is there any kind of quirkiness in the console with Windows pods? Yeah, on the user experience side, the goal is to have the same consistent UX across Windows and Linux. So as you can see, if I get into a Windows pod and look at the logs, it's the same as a Linux pod. The goal is to have the same user experience. There are a lot of things that obviously need some work along the way. For instance, if you look at a Windows pod and look at the metrics, some of this is still work in progress, right?
Whereas if you go look at the metrics for a Linux pod, let's take, I don't know, the API server, you see that a lot of metrics are flowing in, right? So that's work in progress. We're trying to make sure the UX for monitoring and logging matches; logging is already in place. You just saw logs; in fact, even when we were using the WMCO, we saw its logs. So logging, I would say, is fully integrated. We're working on fully integrating the monitoring piece, which should hopefully happen soon, and then you'll see all the node graphs and pod graphs for Windows nodes and pods, just as you would for Linux. So cool. All right. Fantastic. Yeah. And other things are consistent, right? Like you saw on the networking side: once you expose the Windows pod as a service, you can go look up that service, look at the external endpoint, and obviously hit it. There you go. And yeah, everything else is pretty consistent, right? Nice. Well done. Yeah, there are gaps on the monitoring and metrics side that we're working on, but eventually we'll get there. Nice. Awesome. Well, thank you both so much for coming on. Thank you, everybody, for tuning in. Any last-second things you want to get off your chest, Sebastian? Yeah, I mean, like Anand said, don't expect everything that you're used to on Linux to be there. But we're working our way up, getting the most important functionality out for users first, and then some of these minor user experience things will come soon. Yes. Awesome. Sounds great. And you can always go to the OpenShift topic page, which is openshift.com/learn/topics/windows-containers; we'll put the URL in the chat. Yeah, I can drop a link in the chat right now.
Yeah, that's kind of our landing page for the latest and greatest updates. And obviously, Chris, whenever we have something new to offer, you can be sure this is the place we'll come show it off. Come back on and show it off to us. Yeah, this will be a fun journey, I feel. I can't wait until I'm intermixing Linux and Windows nodes, that kind of thing; one day maybe we'll get there, and it'll be just as easy to do both. Yeah, that's the goal, at least for me. Awesome. Well, thank you so much, I appreciate your time today. And folks, tune in tomorrow for DevNation at noon, DevSecOps Is the Way at 2 p.m. Eastern, and then we'll wrap up the day tomorrow at 4 p.m. with the StackRox office hours, where we're talking about Kubernetes network policies, which are always a hot topic. So please tune in tomorrow. Thank you, Anand. Thank you, Sebastian. Really appreciate your time, and we'll be signing off for today. So stay safe out there, folks. Until next time. See you. Thank you, Chris. Thank you.