Hello and welcome, everybody, to another OpenShift Commons briefing. My name is Karina Angel and I'm on the OpenShift product management team. Today we have something so many people have been waiting for: Windows containers on OpenShift. That's why we have Anand and Arvind here. Can you introduce yourselves? I'll let you take it away.

My name is Anand, product manager for the Windows containers on OpenShift offering. And with me is Arvind. Arvind, would you like to say a few words?

Hi, I'm Arvind. I'm the Windows containers lead engineer slash tech lead for Red Hat OpenShift.

So the agenda for today is obviously Windows containers, and as Karina and Chris mentioned, this has been a very highly requested feature. We're glad to let you know that this GA'd a couple of weeks ago in December, and you can now try Windows containers on OpenShift on AWS and Azure, with other platforms soon to follow. For the next 45 minutes or so, here is the agenda: a brief introduction to Windows containers; a technical overview of the Windows Machine Config Operator; how you can schedule different types of workloads on OpenShift, including .NET workloads, meaning both .NET Framework and .NET Core; and how we differentiate against other Windows offerings out there in the market. Then we will wrap the presentation with a peek into what's coming on the roadmap. We GA'd a couple of weeks ago, and we will continuously refresh the operator every couple of months or so, so we want to give you a peek into what's coming in the next three, six, nine months. And then, hopefully, plenty of time for Q&A at the end, and we'll point you to all the resources that are available. Feel free to pop a question into the chat; Karina, Chris, and Arvind will help me look at those questions, and feel free to interrupt me as well.
Having said that, why Windows containers? Windows Server still enjoys a significant presence in the server operating system market; about 50% of the market share for enterprise server operating systems is still Windows. And on Windows, .NET is widely used. If you look at RedMonk or a lot of popular programming language surveys, .NET and VB.NET still rank highly in terms of the choice for application development. Traditionally, Windows has remained largely independent of Linux, on its own island, and that did not enable a lot of Windows-native developers to embrace cloud-native technologies like microservices and containers. When I say cloud-native workloads on Windows, it almost seems like a paradox because of that. But if you wanted to get Windows to a cloud-native world, if you wanted our older Windows customers to make the big leap of faith to public cloud and do cloud-native development, you would need some kind of containerization strategy for those Windows workloads. Then those customers can lift legacy workloads, containerize them, strangle the monolith and decompose it into smaller microservices, and get pretty much the full benefits of containerization that our counterparts in the Linux ecosystem get. As for the benefits of Windows containers, obviously they address some of the pain points mentioned on the previous slide, but the first and biggest benefit is that this serves as a bridge for legacy Windows customers to adopt public cloud, or hybrid cloud for that matter. And we'll talk about the different options you have for going from, let's say, running older Windows Server 2012 or newer Windows Server 2019 to a more modern cloud-native strategy. The next benefits are pretty much the benefits of general containerization.
You get application portability, which means once containerized, you can run it on the platform of your choice and the infrastructure provider of your choice. You obviously get more agility because you can release faster and more often, and you get more control of the container infrastructure. And last but not least, moving from a VM-based to a container-based world, you're obviously going to be reducing infrastructure and management costs. The question is, why do you need Red Hat for running Windows containers? We'll talk about some of the differentiation for Red Hat Windows containers towards the later part of the slides. But one of the things I wanted to point out was that with Red Hat OpenShift for Windows containers, you can co-locate Windows and Linux containers in the same cluster, which means both Windows worker nodes and Linux worker nodes can be happy citizens of the same cluster, communicating with each other. And we're trying to build a first-class management experience for Windows containers as well. So for instance, just as you go to the OCP console to manage your Linux pods, applications, and services, we're trying to get a similar experience for Windows. And obviously, Red Hat OpenShift is supported on a wide variety of platforms: public clouds, private clouds, OpenStack, bare metal, you name it. Once you layer the Windows goodness on top of that, you have a truly hybrid cloud offering that really differentiates in the market. Here is the lay of the land and how the offering is placed. Like I mentioned, Red Hat OpenShift is a truly hybrid cloud offering. It can run on physical servers like bare metal, on virtual servers like vSphere clusters, on private clouds, on public clouds like AWS and Azure, and on public clouds as a managed offering.
For instance, on Azure we have Azure Red Hat OpenShift; on Amazon we have ROSA. The predominant operating system of choice for running OpenShift used to be, and still is, Red Hat Enterprise Linux and RHEL CoreOS. What we are now adding as part of this offering is letting cluster admins run Windows Server machines as worker nodes. So now you can have Windows worker nodes alongside RHEL worker nodes in the same cluster, scheduled, managed, and controlled by the same control plane, communicating with each other and with the outside world, just being happy citizens. The rest of the stack is pretty consistent; all the other cluster services and layered services don't change. One caveat is that a lot of the other cluster and platform services that work seamlessly with RHEL and RHEL CoreOS today have not yet been tested on Windows. That's a story in progress, something we are working to harden. A quick question could be, hey, does service mesh work with Windows nodes? And the answer is probably, but we haven't got there yet. We're still in the early days of building Windows support in OpenShift, and we'll talk about where we are in the story and where we need to get to. Here is a schematic of how you can co-locate Windows and Linux worker nodes under the same OpenShift control plane. Alongside them, you can also co-locate Windows virtual machines, enabled by another offering called Red Hat OpenShift Virtualization, which lets you wrap a Windows virtual machine as a pod and run it inside OpenShift. And all three of these entities, like I said, can be happy citizens, intermingling, co-mingling, happily talking to each other and to the outside world, managed by the same control plane.
And the next question becomes, what do you use for what? How do you position, let's say, OpenShift Virtualization versus Windows containers? The quick answer is: if you have legacy workloads that cannot be containerized, say for instance Microsoft SQL Server that needs to be a long-running instance, that might not be a good fit for a container. So you could take some of those older legacy applications and just forklift them into virtual machines and run those virtual machines on OpenShift. Some of the newer Windows applications, for instance an IIS web server or even a .NET Core application, which is more microservices-enabled, might be a good fit for containerization. If it is .NET Framework, that's obviously supported only on Windows, but if you've migrated from .NET Framework to .NET Core, you have the choice of running it on Windows or on Linux, because .NET Core is supported and certified on Linux, on RHEL. Here is a positioning table. This is really to tell you that Red Hat OpenShift gives you a spectrum of choices for modernizing your older Windows workloads. You could start with virtualization, which is just forklifting VMs to OpenShift; that's easy and low friction, but offers few of the benefits of containerization. As a next step, if you wanted some benefits of containerization, you can strangle the monolith, decompose it into microservices, containerize some of your .NET Framework applications, bring them to OpenShift, and get some benefits of containerization and some benefits of OpenShift. But at the same time, the Windows container ecosystem is still evolving. The Linux container ecosystem has been out there for six, seven, maybe more years.
Whereas the Windows ecosystem is still catching up; a lot of the features have not been hardened, and that's still work in progress. As a next step, if you wanted the full benefits of the Linux container ecosystem, you could re-architect your application, which means you could, let's say, take your older .NET Framework apps, migrate them to .NET Core, and then run those .NET Core applications on RHEL or RHEL CoreOS, thereby getting the full benefits of Linux containerization, the full benefits of OpenShift, and the full leverage of a highly evolved Linux community. The trade-off, obviously, is that there's migration effort involved in moving from .NET Framework to .NET Core; you might need a development team, so that's going to consume time, cost, and effort. And the last step is, if you're starting greenfield, if you want to go completely cloud native and you don't care about Windows or .NET, you could just start with RHEL or RHEL CoreOS, which is the most dominant operating system of choice on OpenShift, and start building open source applications: your Java applications, your Ruby, your Node.js, your Python, your Go applications, and just deploy them on RHEL CoreOS. You get the full benefits of containerization and access to the latest and greatest in cloud native: the latest monitoring tools, the latest observability tools, all the service meshes and the Knatives of the world. So you will be working with a highly modern stack with this approach, but again, the trade-off is that you need a development team. And if, let's say, you're an older customer who's running in maintenance mode, you may not have the luxury of staffing a development team to build new cloud-native applications.
So again, the takeaway from this slide is that OpenShift gives you a spectrum of choices, and we're here to handhold you from one step to the next. You don't need to go from step one to step four in one shot; you can make incremental progress. At the same time, you can start at any step and end at any step. You can start with virtualization and end with containers, or you can start with Windows containers and end with Linux containers, or you can just start with Linux containers. Moving on, how do you access this offering? This offering is called the Windows Machine Config Operator, or WMCO for short. It is the entry point for OpenShift customers who want to run and schedule Windows workloads on an OpenShift cluster. It is a day-two operation, which means on day one you'll set up your default OpenShift cluster, with, say, three masters and three RHEL CoreOS workers, and on day two you will set up Windows workers with the help of this operator. The only prerequisite is that your cluster should have been built with the OVN-Kubernetes networking provider with hybrid networking enabled. If you had a different networking provider, like OpenShift SDN, you'd have to build a new cluster. Here is the architecture of the Windows Machine Config Operator. It involves three steps to add a Windows worker node. The first step involves a cluster admin installing the operator via OperatorHub. So you launch your cluster, go to the console, navigate to OperatorHub, and look for the Windows operator. You install it; it's literally a click and takes a couple of minutes. The next step is that the cluster admin defines a machine set containing machines that have specific labels on them, for instance os-id equals Windows. The third step is that the operator is watching for those labels on those machines.
And if it finds machines that match those labels, it takes each of those machines one by one, and on each machine it sets up all the plumbing that's needed for that machine to be bootstrapped into the OCP cluster as a worker node. So it takes each of these machines and sets up all the infrastructure components like kube-proxy, the hybrid overlay, the kubelet, and CNI; it does all the plumbing work. Once the plumbing work is done, it joins that node to the cluster and you can start scheduling workloads on that node. Then it takes a second machine and does the same thing. That's really the secret sauce here: the work that's been invested to make the operator as functional and as highly automated as possible. The next slide is just a workflow of all the steps the operator does: it transfers all the binaries, configures the kubelet, runs the hybrid overlay networking, configures CNI, sets up kube-proxy, and so on. In terms of the platforms we support today, like I mentioned, we went GA sometime in December, I believe. We went GA with a cloud-first approach, which means we support IPI on AWS and Azure, with support for vSphere IPI coming very soon. It says the ETA is Jan '21, which is most likely true, because we'll most likely get support for vSphere IPI in the community version of the operator first. And then Arvind and team are working on a bring-your-own-host story. IPI is about cattle: if you have a bunch of compute instances on AWS or Azure, you obviously have access to nearly limitless compute, and you can easily provision and deprovision them. But if you have pets, say an old Windows server running on a Dell x86 rack in a data center, that would be considered a pet and not cattle.
So Arvind is working on a bring-your-own-host story, so you can onboard Windows pets onto OpenShift. And once that story is done, support for platforms like bare metal, Red Hat Virtualization, and OpenStack should automatically fall into place. The current expectation for that is, I would say, late Q1, maybe Q2, but it's definitely something we're going to work on this year as a top-priority item. Managed offering support is not likely to come until the end of this year, so I would say watch for it more towards the end of 2021: support for Azure Red Hat OpenShift, ROSA, and so on. In terms of the operating systems we support, we support Windows Server 2019. We will consider requests for other versions of Windows. For instance, if you're running older versions of Windows like 1803, or more modern versions like 20H1, we will consider customer requests for those and prioritize accordingly. But right now we are supported on Windows Server 2019, and if you're running an older version of Windows, you have two choices: either you migrate to this version of Windows and run containers, or you use OpenShift Virtualization. And this is just a workflow of how the operator works; I think we spoke about it. First, you install the operator in the cluster. You create a machine set, and the operator watches for labels and bootstraps those machines into the cluster. Next, the operator is also responsible for upgrading all the software components laid down by the operator. So for instance, if Microsoft releases a new version of kube-proxy or the kubelet, we will actually take that newer version of the kubelet and put it through our OpenShift build system, to make sure there are no CVE vulnerabilities, that it's secure, and that no bad things can happen with the kubelet you put on your nodes.
So we'll basically take it through our OpenShift build system, build a newer version of the operator, and make the new version of the operator available in the in-cluster OperatorHub. And once the operator gets upgraded, the operator will make sure that each of the machines it configured is upgraded to the latest version of the software. So say, for instance, you had four machines running the 1.0 version of the kubelet, and Microsoft releases the 1.2 version; we will rebuild the operator, and the newer version of the operator will make sure the kubelet on all four of your Windows worker nodes is upgraded from 1.0 to 1.2. The natural question that follows is: does the operator upgrade the underlying Windows operating system as well? And the answer is no. The end user is responsible for upgrading the Windows operating system. This, we feel, is something the cluster admin should be responsible for, and there is no way the Windows Machine Config Operator can upgrade the underlying OS, for a lot of reasons. This is pretty much the common stance taken by Google and Microsoft themselves. So the cluster admin will provide an updated image of Windows, specify that image in the machine set, and we will bootstrap that machine set into the cluster. As for the benefits, like I said, the Windows Machine Config Operator is really our secret sauce. It provides all the automation, all the upgrades, all the lifecycling, and all the integration with the console and oc necessary for you to have a fully functional cluster. Now, how do you go about thinking about placing .NET workloads on OpenShift? I think we spoke about this, but let's say you have older .NET Framework apps like 3.5 or 4.6. You can target them at the Windows operating system.
And if you have a more modern version of .NET, like .NET Core, which is more microservices-enabled, you can obviously target both Windows and Linux. There is guidance put out by Microsoft as to when to use .NET Core versus .NET Framework and when to use Linux containers versus Windows containers; Microsoft has published an ebook with all the guidance on the architectural choices you should be making as you go about moving these apps. Here's a simple decision tree. Say you start with .NET Framework on a compatible version; for instance, it's running 4.7.2, which is supported on Windows Server 2019, which is supported by OpenShift. You can run those workloads on Windows nodes. If it's an incompatible version, say it's running .NET 3.5 or 4.6, you can port it to .NET Core and then run it on Linux or Windows. And if you're starting with .NET Core, you can obviously run it on Windows or Linux. Microsoft has also put out a .NET API portability analyzer tool that plugs in as an extension to Visual Studio and helps you analyze your existing project for portability. So for instance, if you have, let's say, .NET 4.6 and you want to migrate to a newer version of .NET Core, you input your project, say analyze project portability, and it gives you an assessment report of what can be ported and how difficult the migration is going to be. A bit about differentiators. The two biggest, in my view: first, we support, or intend to support, a lot of platforms. For instance, if you have a cloud offering like AKS, you cannot run AKS on your on-prem bare metal clusters.
With Windows containers on Red Hat OpenShift, we intend to support all the popular clouds, like AWS and Azure, and the majority of the popular on-prem platforms, like vSphere, bare metal, Red Hat Virtualization, OpenStack, and so on. So you get coverage for a lot of platforms, making it a truly hybrid cloud offering. The second big differentiator, in my view, is the operator that Arvind and team have built. That's really the secret sauce that gives you a whole bunch of automation: within a couple of clicks, you're talking to Windows Server containers. And I think that, to me, is the real takeaway: the operator we have built. Then there are obviously other secondary benefits: you can manage your Windows workloads from the same OCP console with the same oc commands, and you get access to OVN hybrid networking, which means you can enable networking connectivity between the Windows and Linux nodes in the cluster. That's how we truly differentiate. Again, summarizing: we support a lot of platforms, and the operator really helps you simplify onboarding workloads. In terms of the roadmap, we GA'd in December with support for AWS and Azure and bring-your-own machine sets. In the midterm, in the next three to six months, we're looking at adding support for vSphere IPI. And then once we support bring your own host, we should be able to support vSphere UPI, bare metal, OpenStack, and other environments. We will also harden other stories like logging, monitoring, storage, and moving to containerd.
And then in the long term, which is really nine months and beyond, we want to look at the scope of customer requests and see if customers are asking us for running service meshes on top of Windows worker nodes, whether they want these Windows nodes to be managed by things like Open Policy Agent and the Gatekeepers of the world, or whether they want to run Knative applications on Windows, which we haven't really got the pulse of so far. Once we look at what customers really want, we will start thinking about other platform services and other layered services that we want to support on Windows. Windows containers obviously have limitations today. Like I said, this is work in progress. I think Arvind and team have done a phenomenal job of helping a cluster admin bring in Windows workloads; that's a great first step, but there's still a long journey to go. For instance, we don't support Serverless, we don't support Pipelines, we don't support Service Mesh, cost management, or CodeReady Containers. This is still an evolving story, so watch this space pretty closely. Because of these limitations and the reduced feature set, we offer support at a standard level, not at a premium level. If you're looking for the SKU, this is the SKU you need to be looking for. And you can visit our topic page here and learn more about Windows containers: the latest blogs, the latest videos, all the latest content we put out will be reflected on the topic page. Last but not least, let's see how this all comes together and works. The starting point for this is the OpenShift cluster, so you would launch your cluster and then navigate to OperatorHub.
In OperatorHub, you search for Windows and you'll see two operators pop up: the community operator and the Windows Machine Config Operator. I want to point out a difference here. The community operator is something we put out as a skunkworks project; we release it every couple of weeks, maybe every couple of sprints, and it's really use-at-your-own-risk. You can ask questions, but there is no promised level of support. Whereas the Windows Machine Config Operator is the hardened version of the community operator. It's gone through our internal secure build systems; we make sure it's fully baked, fully documented, fully QA'd, fully tested, and fully supported. You can raise a bug, raise a support ticket, or call us, and get access to full support with this offering. And we will refresh it as and when we land new features and hit big milestones. So say, for instance, you want to use the Red Hat certified Windows operator: you click on it, read about it, look at the prereqs, and see the docs for using the operator. Once you've read it, you click install. You can choose your update channel, choose the namespace, and choose whether the approval strategy should be manual or automatic. I obviously already have it installed, so it says it already exists, but you would click install, and it takes a few minutes to get the operator installed. Once it's installed, you can go to Installed Operators and see that the Windows Machine Config Operator was successfully installed on this date. You can click on it and, again, look at the same sort of documentation for it.
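For those who prefer the CLI over the console, the install flow just shown can also be expressed as an OLM Subscription object. This is only a hedged sketch: the namespace, channel, and catalog source names below are assumptions, and the authoritative values come from the OperatorHub entry and the official docs.

```yaml
# Illustrative only: namespace, channel, and source names are assumptions;
# take the real values from the OperatorHub entry for the operator.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: windows-machine-config-operator
  namespace: openshift-windows-machine-config-operator
spec:
  channel: stable                        # the update channel chosen at install time
  name: windows-machine-config-operator
  source: redhat-operators               # Red Hat certified catalog
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic         # or Manual, per the approval strategy
```

Applying a manifest like this with `oc apply -f` has the same effect as clicking install in the console.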
So now the operator is installed. The second step is to actually create a machine set, which will tell the operator what machines to watch for. You'd go to Compute, then MachineSets, and say create a machine set. Here you can specify a machine set; let me see if I have one handy. Yes, so you specify a machine set where you instruct the operator to look for specific machines, in this case machines that are Windows and that have specific labels. So if you want to add Windows worker nodes, you'd specify the label here that says os-id is Windows. Then you go ahead and specify the compute shape and size of this Windows instance. You'd say this is a Microsoft Windows Server image, this is the SKU to look for. Supposing you're deploying to the cloud: which region in Azure do you want this deployed to? Which network resource group? What's the disk size and disk type? What resource group and what subnet in Azure? What's the instance size, for instance D2s_v3? What VNet and what availability zone? It's pretty much like creating a Windows instance on Azure; you'd have to specify these things anyway, and in this case you're just automating that into a machine set. You can copy and paste this machine set; we provide machine sets for all the providers we support. Then you stick the machine set here and say create, and that goes off and creates the machine set. As the machine set is created, the operator is watching for machines with specific labels in those machine sets.
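The fields enumerated in the demo come together in a MachineSet manifest roughly like this. This is a trimmed, hedged sketch: the name, region, and sizes are placeholders, and a real manifest carries the full cloud-specific providerSpec (image, VNet, subnet, resource groups) mentioned above.

```yaml
# Trimmed, hedged sketch of a Windows MachineSet on Azure.
# All concrete values are placeholders for this walkthrough.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: win-worker
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: win-worker
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: win-worker
        machine.openshift.io/os-id: Windows   # the label the operator watches for
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: ""  # join the cluster as a worker node
      providerSpec:
        value:
          vmSize: Standard_D2s_v3             # compute shape and size
          location: centralus                 # Azure region
          # image, vnet, subnet, and resource group fields elided here;
          # they are cluster- and subscription-specific
```

Scaling later is just a matter of changing `replicas`, for example with `oc scale machineset win-worker -n openshift-machine-api --replicas=3`.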
And if it finds machines in the machine set with those labels, it takes those machines, bootstraps them, and sets them up as worker nodes. In this case, I've already set up a machine set; as you can see, there's a machine set called win-worker, and you can go see the YAML for it. The key thing is that once the machine set is deployed, the operator watches it, takes machines from the machine set, and onboards them to the cluster. So for instance, in this case, there are a bunch of machines being onboarded, and one has been provisioned, because if you look at this win-worker machine set, it has a desired count of one. If I bump it up to, say, three and come back to Machines, you can see that one has been provisioned and it's in the process of provisioning two more. It usually takes about 15 to 20 minutes to provision each Windows node, because it's Windows and it takes a little more time. But I do have one provisioned Windows node, and you can see that it's a worker node, with this compute instance size and shape, available in availability zone 3 in Central US, and it has been successfully created. You can go take a look at the corresponding node, which is the actual Windows instance. So now you're ready to start placing or scheduling workloads on this Windows node. What you can do is go deploy a pod; in my case, I have a Windows Nano Server image deployed. The only key thing to note here is that the operator taints all the Windows nodes with os equals Windows. So if you're deploying a pod, you need to make sure it has the matching toleration for os equals Windows.
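In a workload manifest, that toleration, plus a node selector to steer the pod to Windows nodes, might look like the sketch below. The deployment name, image tag, and port are illustrative assumptions; in particular, a Windows container image tag must match the OS build of the node (for example, 1809 pairs with Windows Server 2019).

```yaml
# Hedged sketch of a Windows workload carrying a toleration for the
# os=Windows taint the operator applies to Windows nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-webserver            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-webserver
  template:
    metadata:
      labels:
        app: win-webserver
    spec:
      nodeSelector:
        kubernetes.io/os: windows        # schedule only onto Windows nodes
      tolerations:
      - key: "os"                        # tolerate the Windows node taint
        value: "Windows"
        effect: "NoSchedule"
      containers:
      - name: web
        # tag must match the node's Windows build
        image: mcr.microsoft.com/windows/nanoserver:1809
        ports:
        - containerPort: 80
```

You'd create it with `oc create -f win-webserver.yaml` and then expose it, for example with something like `oc expose deployment win-webserver --type=LoadBalancer --port=80`, to get the external IP shown in the demo.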
And the kube-scheduler will take the pod, see that it has a toleration, find a node that has the corresponding taint, and place the pod on that node. To deploy it, you can go to your command prompt and, if you have, let's say, a Windows Nano Server manifest, say something like oc create -f windows-nano-server.yaml. Once you do that, you say oc get deployment, and you see that there is a Windows web server deployment. You can then expose this deployment as a service. Once you have exposed it as a service, you say oc get services, and you see that the Windows web server has also been deployed as a service, which means it can take traffic from the external world, from the public internet, because it provisions an external IP address. Then you take this IP address, put it in a browser, and boom, you see that it's getting traffic from the outside. Going back: this is the application we deployed, the Windows web server. We exposed it as a service, so you should see a Windows web server service that has this external IP address, and you can take that external IP address and hit the application. So now you can pretty much take any application, make sure it has the right toleration, deploy it, expose it as a service, and access it. That, I would say, brings us to the end of the demonstration. Maybe we take some questions now.

That was awesome, and we have so many questions in the chat. Since you are already in the console, one of the last questions is asking about storage. Can you show how you would add storage to a Windows container?

Yeah. So on Azure we support Azure Disk and Azure File, and on AWS we support EBS volumes.
And on vSphere, we will support what is supported today for storage, which is the in-tree volume plugin. Then once vSphere moves to CSI, we will support CSI proxy as well. In terms of storage, I don't have a demo right now I can show you, but it's pretty simple: you would create a PersistentVolumeClaim, mount the claim on the Windows pods, and be able to write to it. There is a blog we have put out on how to mount persistent volume claims in a Windows container, and I'll happily share that blog with you. Thanks. We'll search for that and then post it in the chat. Okay, let's see. I would presume that for Windows machine sets, cluster autoscaling is available or automatic? It is one of the things on our roadmap. We haven't tested it out, but you can at least scale manually. Like I mentioned, you can go to machine sets and increase or decrease the count by hand. The cluster autoscaler is something we've not yet tested, but if you look at our roadmap, cluster autoscaling is definitely one of the things we want to make sure is tested and supported. All right, let's see. Support for NFS and CIFS? That's part of our storage story; we're still hardening the storage story out. It's all work in progress, coming soon. Nice. We'll probably get a lot of those questions, but I know your teams are working hard on all these features. Arvind, what are some of the things you're excited about that you're working on? Well, the thing that I'm most excited about at the moment is bring your own host. We see a lot of questions, a lot of requests, both from the OKD side and from our customers, saying they have a set of Windows VMs that they want to add to OpenShift clusters as worker nodes.
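The PVC workflow just described can be sketched as follows. This is an assumption-laden example, not the blog's manifest: the claim name, pod name, image, and mount path are illustrative, and it assumes the platform's default storage class (EBS on AWS, Azure Disk on Azure) backs the claim.

```yaml
# Illustrative PersistentVolumeClaim plus a Windows pod that mounts it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: win-pvc            # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: win-storage-pod
spec:
  nodeSelector:
    kubernetes.io/os: windows
  tolerations:
  - key: "os"
    value: "Windows"
    effect: "NoSchedule"
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019  # illustrative image
    volumeMounts:
    - name: data
      mountPath: "C:\\data"   # Windows-style path inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: win-pvc
```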
So that's my number one priority at the moment, that's what I'm working on, and I'm very excited to get that out, hopefully very soon. Thanks. Anand, what's your favorite feature that everybody's working on? In terms of a feature that has been completed, I would say it's machine sets. The way machine sets are used to glue the Windows machines onto the cluster, at least on IPI clusters, I think is magical. I mean, you literally saw me onboarding a couple of Windows nodes onto the cluster within a couple of clicks, pretty much. That's really my favorite part of the story that has been completed so far. In terms of the favorite feature that's coming ahead, I think Arvind nailed it: it's bring your own host. A lot of our Windows customers are still on-prem. They're still running Windows workloads on non-cloud platforms like vSphere and bare metal and OpenStack and whatnot, and we really want to make sure this offering works for them and serves the purpose of bringing and modernizing those workloads onto OpenShift, into containers and the public cloud. So the bring your own host story is the one I'm really looking forward to. So you mentioned modernizing, and early on in the presentation you also mentioned re-hosting, refactoring, re-architecting, and rebuilding, right? The different ways to bring your Windows applications into OpenShift. What are you seeing most? What are you being asked for the most? I think, again, a lot of customers are at different points. We talk to customers in healthcare and manufacturing. They still have a lot of old, I guess, Windows baggage. They're really brownfield.
And they don't have a lot of development resources; especially in a COVID year like this, they're really trying to run lean and mean. So they're looking at a combination of our first two strategies, which are rehost and refactor. If I have Windows 2012 or Windows 2016, can I easily bring it into a machine and pop it into OpenShift? Or if I have a newer version of Windows like 2019, can I quickly take an IIS web server, drop it into a container, and bring it to OpenShift? Those are the two most common techniques I have seen so far, at least among the customers I talk to, mostly in manufacturing and healthcare. Oh, I was just going to say, somebody also asked about licensing, if you wanted to touch on how the Windows licensing would work. Yeah, that's a good question. Let me see if I can bring up a document; sorry, I have many tabs open. This is actually an internal sales FAQ, but I'll still bring it up here because I want to highlight something. The pricing of Windows on OpenShift involves three components. First, obviously, the control plane needs to be licensed. Second, your Windows worker nodes need to be licensed with Microsoft Windows licenses. And third, for running these Windows workloads on OpenShift, you need to license it again with Red Hat for Windows container support, and the charge for that is $100 per vCPU, or $400 for an annual standard level of support. So essentially three components to the price: a licensed control plane, a licensed version of Microsoft Windows, and then a licensed version of the Windows container support. Nice. Thank you. All right.
So going on to more of our questions, let's see. I know Arvind answered this one, so if you want to answer it live: would auto-scaling work for Windows pods? Yes, for Windows pods that should work. I've also linked to the general documentation the upstream community has provided on which features are supported with Windows containers, and they should all work with OpenShift. And anything in that document which doesn't work for Windows will not work with OpenShift either; we're not adding any extra sauce here. So if somebody has an issue with scaling, would they submit a BZ, or would it go back to the Kubernetes documentation you're pointing to? I'm just curious. I think if someone is using the Red Hat operator from OperatorHub, they would go through whatever support channel they have for getting support for an issue they have faced. If someone is using the community version of the operator, they would open a GitHub issue against our WMCO repo. Thank you. There is also a question about naming standards. Did you answer that one too? Or we could ask it. All right. We call .NET 5.0 ".NET-50" and .NET Core 3.1 ".NET-31". Chuck's worried that some developers might see .NET-50 as a newer version. Does that go back to, if they're a developer they should know that? I mean, that's definitely a valid concern. What do you think? Yeah, so the publishing of those containers is actually outside our wheelhouse; we don't really control or get involved in that. But it's something Anand and I can take back and ask about internally. I don't know off the top of my head why that decision was made, unless Anand can address it. I'm not sure I understand the question; is the question why certain .NET Core images are titled a certain way in the registry, or what is the question? Yes, the numbering.
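The point above about pod autoscaling "just working" can be sketched with a standard HorizontalPodAutoscaler; nothing Windows-specific is needed. This is a minimal sketch under assumptions: the target deployment name is illustrative, and it assumes a cluster of this era serving the `autoscaling/v2beta2` API.

```yaml
# Illustrative HPA targeting a Windows deployment; the HPA itself is
# platform-agnostic, since it only scales the deployment's replica count.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: win-webserver
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: win-webserver     # illustrative deployment name
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```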
So we can take that as an action item to look into, whether it's a question about Microsoft images or any of the .NET Core images. One point I want to emphasize is what Arvind said. I just want to up-level that and say that we have a very tight working relationship with Microsoft, both on the business side and on the engineering side. We built this offering working very closely with them. So do post the question; we'll make sure we get it answered by Microsoft, because this seems to be a Microsoft question and not a Red Hat question, but since we enjoy that tight relationship, we're glad to get it answered for you. On top of that, I'll also point out that this solution is jointly supported by Red Hat and Microsoft. So if it's an issue with the operator, you raise a bug against the operator and it comes to Red Hat support. Red Hat support will triage it, and if they find that it's, let's say, an issue with the Microsoft OS, we will actually open a support case with Microsoft. Microsoft will acknowledge it, and if it happens to be a general problem with Microsoft Windows, they will publish a fix upstream. Again, like I said, we will take the fix, put it back through our build system, and provide a refreshed version of the operator. That's the high-level workflow of how a customer can expect to get fixes from Red Hat and Microsoft. You will not have to call Microsoft; you will call Red Hat. We will engage Microsoft, get it fixed upstream so everybody has the fix, downstream it through our build system, which gives you that security and hardening, and then refresh it back to you. I'm glad you mentioned the relationship with Microsoft. They've been a really, really good partner, and I know you all have a really good working relationship. All right, another question. Let's see: UPI installations. I know Arvind did answer this, but can you address UPI users? Yeah.
So we're going to treat UPI in the same bucket as bring your own host. Our bring your own host solution is going to work across the board: it's going to work for UPI installations, it's also going to work for on-prem, and we're hoping it's going to be a magic bullet even for other platforms that don't support machine sets. And OKD was also mentioned. So are you doing a lot of this work in OKD as well? So the way it works is we target the community operator towards OKD. Any time we publish a community operator, before it gets published it has actually been tested in our CI environment against OKD, and only then is it released as the community operator. So we do target OKD as well. Interestingly, OKD will get these features first, before they hit OCP. So in some sense, any feature we announce will always hit OKD first through the community operator, and only then make its way into the Red Hat operators. So if somebody wants to test it early or help contribute, would they go to the OKD working group and engage there? What would you recommend? That would be one place to go. You can also engage us on the OpenShift Slack channels and on the Kubernetes Slack; I actively monitor those, so any time you whisper the word Windows, you'll most likely see me respond. And you can also engage with us on the GitHub page for the Windows community operator. Thanks. Okay. There is also a Windows OpenShift mailing list, and I can make that available through Karina. All new announcements, refreshes to the community operator, new releases of the Red Hat operator: we constantly keep our mailing list notified about the latest and greatest changes. So you can subscribe to that mailing list as well; that's another way of engaging with us. Nice, that's awesome. I'm wondering if you want to switch your screen back to the roadmap slide, or one that you think would be good. All right.
So everybody, we have 10 minutes left. Do you want to take some more questions? There's another one asking about, I know you did talk about it, on-prem installs. Yeah, on-prem installs, like Arvind said, are going to fall under the umbrella of bring your own host. And Arvind, since we have 10 minutes, can you spend a minute or so talking about the high-level design for how you're thinking about the bring your own host problem with config maps? Even if the design is not set in stone, I think it would be good to share our thought process with the audience. Sounds good. So what we're trying to do is make it as easy as possible for folks to onboard their existing Windows VMs or Windows instances in their data center. The way they'll express the intent of adding these instances is by creating a config map inside the Windows Machine Config Operator namespace. In the config map, they'll specify the username for accessing each Windows instance and its IP address. And what we would suggest at that point is that the Windows instance be configured with the same private key that we use for our machine set installations. This will allow the operator to SSH into that machine and set it up the same way it does for a machine created by a machine set. This way the customer doesn't have to give us too much information; there is no need to store passwords anywhere. We'll just reuse the same private key, and of course we'll take feedback. If customers come back and say they would like to use different private keys for different instances, we'll take that feedback, and in upcoming releases we'll have a way to specify specific private keys for specific instances. So at the moment, our intent is to make it as simple as possible.
Just drop a config map in the namespace; the operator will watch for this config map, access the instances specified in it using the same private key, and configure them as Windows nodes. Arvind, do we plan to support both DNS names and static IPs, or are we starting with static IPs first and doing DNS later? We're sort of ambivalent about it. As long as your cluster supports DNS resolution, you can use DNS names; if not, you can use static IPs. We don't have a reason not to support DNS, but we won't do anything special to enable it: the cluster setup should have DNS resolution working properly, as for any Linux or Windows worker, and then this should also work. Very cool. It's good that you mentioned networking, because we did have a question asking whether routes are available for Windows hosts. Yes, they should be available. Routes and ingress should be available, as well as load balancers. Now, for everybody else, this isn't the only Windows session that will be done, so obviously we're getting all kinds of ideas on other good areas to dive into, and it sounds like bring your own host will definitely be a great session. So if you have other wish-list items, and I see we're being asked for the links in the chat, Bruce will have those available after the session. Look in the OpenShift YouTube for the session; we'll add them into the recorded video of the live stream. All right, let me see. Slides, please. What's that? Slides. They'll be posted after. Okay, thank you very much. All right, let's see, we're also asked whether they can use Windows containers on vSphere at the end of January. Now, Anand, I remember you mentioned that it is coming to the community operator in January. Is that right? That is right. vSphere IPI should be supported through the community operator pretty soon; I want to say really soon.
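The bring your own host design described above could look something like the sketch below. Since the design was explicitly not set in stone at the time, the config map name, key layout, and field names here are assumptions for illustration only.

```yaml
# Hypothetical bring-your-own-host config map: one data entry per Windows
# instance, mapping its address to the SSH username the operator should use.
# The operator would watch this namespace and SSH in with the cluster's
# existing machine-set private key.
apiVersion: v1
kind: ConfigMap
metadata:
  name: windows-instances                              # illustrative name
  namespace: openshift-windows-machine-config-operator # operator's namespace
data:
  10.1.42.1: |-          # instance IP (a DNS name could work if it resolves)
    username=Administrator
```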
And then once the bring your own host story that Arvind is working on is done, vSphere UPI will also be supported. So watch for us; like I said, subscribe to our mailing list. As soon as vSphere IPI is available through the community operator, we'll send out a notification, and then we'll let it harden for a couple of weeks before supporting vSphere IPI in the Red Hat operator. And once bring your own host is available, we will let you know as well. I also want to call out that vSphere IPI is supported for 4.7 clusters; it has not been enabled for 4.6. If there is huge customer demand for enabling it on 4.6 clusters, let us know and we'll look into it, but at the moment it's targeted for 4.7. So if you're using an OKD release, you'll have to wait for the OKD 4.7 release for it to work. That's a great point, especially for everybody on the 4.6 EUS release who wants to stay on it for a bit: definitely let them know whether you need that support. All right. And Anand, do you have the link to the mailing list handy, if you want to close out the presentation? At least we can tell people right now to sign up. Let me just go to my email; give me one second. Okay. That way we can close out on: hey, everybody, just sign up for this mailing list and make sure you are getting all the updates. And then also go find Arvind on the Kubernetes and OpenShift Slack and he will answer all your other questions too, or at least go say hi to him there. Okay, now I know where to go find him too. And Chris, are there any other questions in the chat? Those were a lot of great questions; thanks for doing that. No, I think you covered them all. So if someone feels like they haven't had their question answered, please let me know. Yeah, we tried; sometimes there are just so many questions.
So is the rebuild option completely Linux? I did miss that one. Arvind, do you want to do one last quick one? Yeah, I don't quite follow that question; I don't understand what "rebuild" refers to. We definitely don't do OpenShift Builds, if that's the question: OpenShift Builds are not supported for Windows containers, and I think Anand called that out in one of his slides. If you're still on, do you want to clarify? Okay. And any ETA on OKD 4.7? When is that coming out? That's a question for the OKD team, or put it to the Windows mailing list. All right. So Anand just posted it: it is openshift-windows@redhat.com, and do they just send an email to that saying subscribe if they're not on it? That's right. If they just send us a subscription request, I'll make sure I add them to the list, and they should start receiving notifications. All right. Well, with one minute to go, thank you both so much. That was awesome. I know everybody's probably shocked that it's finally out the door: we have Windows container support. Now it's just about keeping it going from here, right? So thank you, and until next time, thanks, everyone.