Hello, everybody, and welcome to the KubeCon presentation on Simplifying Windows Runtime and Deployment in Kubernetes. We have the team from SIG Windows here to present to you today and go over some key topics that we want the community to know about. Let's start with introductions. Maz.

Thanks, Mike. I'm Maz. I'm a SIG Windows contributor and a senior PM at Microsoft.

Hey, folks. This is Deep Debroy. I'm an engineering manager over at Docker and a tech lead in SIG Windows.

Hi, I'm Mark Rossetti. I'm a software engineer in the Azure org at Microsoft and one of the SIG Windows co-chairs.

Hi, everybody. My name is Michael. I'm a SIG Windows co-chair along with Mark, and I'm a director of product management at VMware.

Hi, everybody. I'm James. I'm an engineer at Microsoft and also a contributor in SIG Windows.

Thank you all. So today, we're going to talk about all the investments that SIG Windows and the cloud native Windows community have made in the last few months. We're going to walk you through our future roadmap and some of the key areas you might be interested in. In addition to that, we're going to spend a little bit of time talking about modernizing legacy applications and what that means for you. We're going to give you a dynamite demo on Cluster API and then leave some time for Q&A. Let's dig in. Maz, go ahead and talk to us about Windows.

Thanks, Michael. In Windows operating system updates, we have a lot of exciting stuff. Starting with Nano Server: 1809 support has been officially extended to 2024, the same as the Windows Server 2019 LTSC support cycle. This was based on a request we got from the Kubernetes community. The other important one, which ties into containerd and which Mark is going to talk about, is that single file mapping is now fully supported; from an operating system perspective, the bug has been fixed.
The long-awaited DSR support is going to be in the October release. That's a huge one. And the last one: Windows Server, version 2004, the latest SAC release, is now fully supported and is being tested in the Kubernetes test grid. As far as the plans for v1.20 go, we have a couple of critical bugs filed by the Kubernetes community that are being resolved, for example, virtualized time zones for containers. And in the future, based on this relationship between the Kubernetes and Windows OS teams, we're looking at the changes for service mesh that Michael's going to talk about, session affinity for kube-proxy, and especially MSMQ and MSDTC supported scenarios. With that, I will hand it to Mark, who will talk about containerd.

Hi. Yep, so anybody who's been following SIG Windows has noticed that in the compute space, we've recently been focusing our efforts on containerd. containerd as the container runtime provides a lot of benefits over Docker, at least with the current Windows implementation. Let me go over a couple of those real quickly. The first one is that it utilizes the Host Compute Service v2 schema. For those unfamiliar with Windows, the Host Compute Service is a management service that helps start, stop, and run containers. Host Compute Service v1 was introduced with Windows Server 2016, and it was revamped with Windows Server 2019. A key benefit this brings to the Kubernetes community is that it lets us address some parity issues with respect to Linux containers. Some of those are highlighted here: with this new schema, we can properly support single file mappings, so you can get an updated /etc/hosts file into your container, and containers can just write to /dev/termination-log to surface their error messages. In addition to the Windows-specific functionality, containerd is based fully on the CRI, the Container Runtime Interface.
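The termination-log parity described here can be exercised with an ordinary pod spec. This is a minimal sketch, assuming a containerd-based Windows node; the pod name, image tag, and the in-container path used by the write command are illustrative assumptions, while the terminationMessagePath fields themselves are standard Kubernetes API:

```yaml
# Hypothetical pod spec showing the Linux-parity termination message
# behavior on a Windows node running containerd.
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:1809
    # /dev/termination-log is the Kubernetes default; with the HCS v2
    # single-file mapping, the kubelet can surface what the container
    # wrote there as the pod's termination message.
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    # The C:\dev\termination-log path is an assumption about how the
    # mapped file appears inside the Windows container.
    command: ["cmd", "/c", "echo app failed to start > C:\\dev\\termination-log && exit 1"]
```

After the pod exits, `kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].lastState.terminated.message}'` would show the message, the same workflow as on Linux.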
This means that when there are updates to the CRI interface in general, Windows will automatically get them if you're using containerd as your container runtime. Specifically, in 1.20 we're pushing Windows containerd support to stable. That includes a couple of bug fixes, a lot of documentation, and addressing a couple of known feature parity issues, the big one being group Managed Service Accounts, which is pretty important for security for everybody who wants to run a Windows cluster. Another important one is that we're working to enable GPU support through device assignment. And some future things we're investing in, but not necessarily in 1.20: we hope to enable privileged container support for Windows in the near future, and we're also working on enabling Hyper-V isolated containers. Next is Deep, who is going to give an overview of storage.

Thanks, Mark. So we have been mainly concentrating on CSI Proxy in order to enable CSI support for node plugins on Windows. We have introduced the new System API group in CSI Proxy. This involves adding support for things like querying the status of services running within Windows, as well as enabling support for iSCSI. We have introduced several new APIs to the existing Disk and Volume API groups, mainly to support operations like resize and onlining/offlining of disks in accordance with the SAN policy. We are starting to support CSI Proxy as a native Windows service, so that you can just configure it as a service when you set up the Windows host. And finally, on the CI/CD side of things, we have enabled GitHub Actions-based unit and integration tests that are fairly stable at this point. The plan for 1.20 is to continue to add APIs to CSI Proxy so that we can support the vSphere plugin as well as the generic iSCSI plugin on Windows.
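Running CSI Proxy as a native Windows service during host setup could look roughly like the following. This is a sketch, not the official install procedure: the binary path, log location, service name, and exact flags are assumptions, so check the csi-proxy documentation for your version before using them.

```powershell
# Sketch: register csi-proxy.exe as an auto-start Windows service on a
# Windows node. All paths and the service name are illustrative.
$binary = "C:\etc\kubernetes\node\bin\csi-proxy.exe"
$flags  = "-windows-service -log_file=C:\etc\kubernetes\logs\csi-proxy.log -logtostderr=false"

sc.exe create csiproxy binPath= "$binary $flags" start= auto
sc.exe start csiproxy

# Verify the service is running before installing a CSI node plugin
# that depends on its named pipes.
sc.exe query csiproxy
```

The point of the service approach is that CSI Proxy comes up with the host, before any CSI node plugin pods that need its privileged APIs are scheduled.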
We want to analyze and improve some of the latencies we are seeing when performing Windows operations such as partitioning and formatting volumes; we want to figure out what's causing them and improve on them. We want to introduce automated API documentation generated directly from the API protos, so that we can keep the documentation on the CSI developer documentation site up to date. And finally, we want to investigate a smooth and seamless migration path from the current CSI Proxy to the future, which is privileged container support on Windows. Beyond 1.20, we want to take CSI Proxy to a stable state, add more storage plugins that can be supported through CSI Proxy, for example the AWS EBS plugin, and finally look toward deprecating the in-tree storage plugins that target Windows today. In the next slide, we have an architecture diagram of how CSI Proxy enables a CSI node plugin on Windows to communicate with the various components, such as the kubelet and the CSI node-driver-registrar, as well as with the host operating system to drive privileged operations. If you have questions around this, feel free to ask us in the follow-up, or jump into the CSI Windows channel in Slack.

Thank you, Deep. So let's talk a little bit about networking. You know, we've been seeing a lot of advancements in storage as well as compute, and we're putting a tremendous amount of effort into networking as well. In the latest release, we've enabled DSR mode for load balancing as well as EndpointSlices. Both of these features make it easier for you to run more applications on the same node in Kubernetes, and they make it more efficient to have more endpoints for your Kubernetes containers. In addition to that, there are two major changes happening in the CNI world, the container networking plugins for Kubernetes.
Calico has open sourced their network plugin for Kubernetes on Windows, which is a major, major thing. Now you get the great advantages of Calico and you can try it out without requiring a subscription. In addition to that, the Antrea CNI now supports Windows, including support for network policies. So if you're looking to get started with Windows containers on Kubernetes and you're evaluating which network plugins can support your needs as an infrastructure operator, you have two great CNIs available to you. In addition to that, Envoy Proxy, which is the base for a lot of other pieces of the cloud native ecosystem (for example Istio, Contour, and other ingress controllers), had its alpha release of Windows support. This is huge. It's now possible to start trying out what it means to run Envoy Proxy, and capabilities like ingress and other tools built on it, on top of Windows. For 1.20, our plan is to promote Envoy Proxy's Windows support to beta, so you're going to see the capabilities advance and become more solid as a supported offering. We're going to have IPv4/IPv6 dual-stack networking; that's going to require some of the Windows release features in version 2004. And we'll also have externalTrafficPolicy: Local support for client IP preservation. In the future, like Maz mentioned earlier, we're going to have service mesh support, so look for support for OSM, SMI, and others, as well as dual-stack networking for overlay networks. So lots of advancements here. Let's move on to James on cluster lifecycle.

Yeah, so for the last few releases, we've enabled you to add Windows nodes, and we've been working toward making it even easier to add Windows nodes to your clusters. A couple of releases ago, we added kubeadm support, and we are now in beta with that. And as we look forward, we're going to be adding support to Cluster API.
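The client-IP preservation feature mentioned for 1.20 maps to a standard Service field. A minimal sketch, with hypothetical names; the fields themselves are ordinary Kubernetes API:

```yaml
# Illustrative Service fronting a Windows workload. With the 1.20
# work described above, this routing mode works for Windows pods too.
apiVersion: v1
kind: Service
metadata:
  name: win-webapp
spec:
  type: LoadBalancer
  selector:
    app: win-webapp
  ports:
  - port: 80
    targetPort: 80
  # Only route external traffic to endpoints on the node that received
  # it, which preserves the original client source IP for the pod.
  externalTrafficPolicy: Local
```

The trade-off with Local is that nodes without a matching endpoint drop the traffic, so the cloud load balancer's health checks are what steer clients to the right nodes.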
Cluster API is going to be using the kubeadm support that we added. It'll also be using some tooling that's out in the ecosystem around cloudbase-init that enables kubeadm to boot these nodes very quickly. This will be support for workload clusters only. It's important to recognize that in Cluster API there's a concept of management clusters; that's what creates all your target workload clusters. We'll be supporting Windows in the workload clusters. As we look to the future, we're going to be moving toward GA for kubeadm support. And I'm going to be doing a demo here in just a little bit on how we've started to add Windows support to Cluster API.

Thank you, James. We'll look forward to that demo. Maz, we've talked about all of these features, compute, network, and storage, making it easy for our users to run Windows containers on Kubernetes in production. But the big issue is: what do you do with all these legacy apps that are running on Windows out there in the wild?

Yeah, thanks, Mike. That's one of the pieces of feedback and questions we got at the last KubeCon as well. The first thing to start with in modernizing a legacy application is, of course, to lift and shift your application from monoliths to microservices. And the first step is a pretty simple one: all you need to do is try to containerize the application locally on your own computer using Docker containers and see if it works. The general rule of thumb here is: the web-based applications, the simple .NET applications that have been running as legacy workloads in your organization, the ones you want to keep while modernizing the infrastructure, start with those, the low-hanging fruit. Generally we've seen that the majority of applications fall into that category. The complex applications, leave them for later.
Now, in terms of containerizing applications locally, there are many tools out there to help you write the Dockerfile, get the first step going, and test it on your local machine. One of those is the Windows Admin Center, which a lot of Windows administrators use. We have a new extension called the Containers extension, and the link is right there; you can go and watch the video. It's a pretty cool tool to containerize your application locally, test it, and then push it to a registry to get started. The last thing I want to call out before I hand it back to you, Michael, for more gotchas, is the importance of Linux nodes, especially when you're starting. If you're starting with a basic Kubernetes cluster, once you have containerized your application you need to deploy it to your Kubernetes cluster. Make sure you pay attention to your control plane and the Linux nodes, which run DNS and other key components; people often forget about them, and they're pretty critical to running Kubernetes today. If you have any more questions, we're on the Kubernetes Slack, and there's a lot of documentation on the Kubernetes website as well as on Microsoft's website. Feel free to ping us. With that, Michael, do you want to talk about the gotchas?

Absolutely. Thank you, Maz. Like Maz mentioned, the best thing to do is start slow. You start with some applications that you lift and shift, and as you gain more knowledge about how to modernize those applications and understand some of the gotchas, you can slowly apply the same blueprint to more and more applications, because there are a lot of classes of applications out there at organizations that look very similar. So the knowledge you gain from modernizing some of them is going to apply equally to other ones.
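As a concrete example of that first containerize-it-locally step, a legacy ASP.NET site can often be packaged with a very small Dockerfile. This is a sketch that assumes a site already published to a local ./published folder; the base image tag is illustrative, so pick the one matching your host OS version:

```dockerfile
# Sketch: containerize a legacy ASP.NET Framework app locally.
# Start from Microsoft's ASP.NET base image, which already has IIS
# installed and configured as the container entrypoint.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019

# Copy the published site into the default IIS site directory.
COPY ./published/ /inetpub/wwwroot
```

Then something like `docker build -t legacy-web .` followed by `docker run -d -p 8080:80 legacy-web` on a Windows machine lets you browse to http://localhost:8080 and check whether the app actually works in a container before you touch Kubernetes at all.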
But as you go through that process, you're going to identify some things that don't work as well, and some things you have to take note of. The first one is image selection. Use derived images to share application building blocks and dependencies. Why is that important? You don't need to figure out how to put IIS into a base Windows Server image; start with the IIS derived image. Same thing for Python and other programming languages. Leverage the power of Microsoft, which produces these base images for you with those applications already in them, so you don't have to do that work. They're going to make sure the images are patched, they're going to release them frequently, and then you can just build on top of them. In some ways you're becoming operating-system independent by doing this. Be careful of Windows registry storage. A lot of legacy applications use the Windows registry as storage. You have to figure out a way to redirect that output into persistent volumes or other Kubernetes-friendly storage providers, so that you maintain access to that data as your application is shut down or moved from node to node in Kubernetes. The same requirement applies to local storage, so not just the registry but local storage as well, or usage of local databases. Kernel drivers or application drivers may not exist in Windows containers. If your application is making use of those, you have to identify them, figure out if they're necessary, and either split the application into different components, where some components could still run in virtual machines, or completely rearchitect those components. Active Directory support at the OS level does not exist in Windows containers. You have to think about that and use GMSA, which Mark mentioned earlier, as the way to reach out from the container to Active Directory assets. Be careful of .NET version compatibility; WCF, for example, may not exist in .NET Core.
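The GMSA hookup mentioned here is expressed in the pod spec. A hedged sketch, assuming a cluster admin has already created the corresponding GMSACredentialSpec resource; the webapp-gmsa name, workload names, and image tag are hypothetical, while the windowsOptions field is the standard Kubernetes API for this:

```yaml
# Illustrative Deployment that runs a legacy Windows app under a
# Group Managed Service Account so it can talk to Active Directory.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      securityContext:
        windowsOptions:
          # Must match a GMSACredentialSpec resource in the cluster;
          # the container then authenticates to AD as that account.
          gmsaCredentialSpecName: webapp-gmsa
      containers:
      - name: app
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
```

The nodes themselves still have to be domain-joined (or otherwise configured for GMSA); the pod spec only selects which credential spec the container uses.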
Database requirements, certificate management, and other application dependencies, for example Windows API dependencies like MSMQ and MSDTC. If you have old-style applications using COM that require the Distributed Transaction Coordinator, that's something to think about. Do you have source code for the applications? We've dealt with many users that don't even have source code. So maybe you have to use Process Monitor, Fiddler, and other tools to kind of lift the application out of IIS and put it in a container. And trust me, a lot of times that really, really works and you can run your app in a container. Use the .NET Portability Analyzer to evaluate your workloads and identify all the libraries, requirements, and dependencies they have. It's a great tool to give you a full view into your application workloads. Also be aware of OS patching and how it's applied, because it's different for containers than for virtual machines: with containers, you wipe them out and start fresh, while with virtual machines, you patch them and maintain the same instance over time. So with that, if you have more questions, come and find us; our entire team is available. We're all knowledgeable in this, and we've done it many, many times. Come find us on Slack as well. Now we're going to move into the demo. I'm going to stop sharing and let James share. James, let me make you a co-host here as well. Go ahead, James.

All right. Can you see the screen here? Yes, we can. I think I need to... Do I need to do side by side here? No, you're good to go. Go ahead, James. Excellent. Okay. So I've got a video here of using Cluster API and Image Builder to start building images. One of the challenges is just building that base image with all the best practices baked in. There's a repository out there called Image Builder, and you can run a simple command and it will kick off building an Azure VM, as well as several other types of VMs.
Then it will begin to install all of the components that are required for Cluster API. You can see here it's installing things: making sure cloudbase-init is installed, making sure automatic updates are turned off, making sure all the base components are installed, whether you want containerd or Docker EE, and then configuring all those things and getting them ready. At the end of this, you're going to have a fully functional image that you can deploy with your Cluster API providers. So you see it ran through; I've got this sped up so that it moves pretty quickly. In just a moment it's going to finish deploying the image, and we can turn that final image into a sysprepped image ready to be deployed across multiple resources. At the end here, you get a disk URL that you can then reuse. What I'm doing next is transitioning over to Cluster API. Cluster API has a concept of a management cluster, and what I'm showing you here is the management cluster. Once we have the management cluster running, we then deploy the workload cluster. There are many different templates out there with different types of CRDs. We're going to deploy those out, and it's going to begin to create the workload cluster. Here are the management cluster logs; they're just showing you that the workload cluster is being reconciled as it gets deployed. In a moment here, we're going to see the first control plane node come up. This is a Linux control plane node. On the right-hand side, now that that control plane node has been deployed, I can connect to it. And on the left-hand side, I'm going to query the CRDs on the management cluster and watch these components come online as Cluster API provisions each of these VMs. On the right-hand side here, I'm now talking to my cluster, the workload cluster that's been deployed. You can see that the nodes have come online.
So we've got the Windows nodes deployed and we've got a fully functioning cluster here, with all of the management cluster components deployed. Now we can deploy our extra components onto it. So we'll deploy out the... let's see here. We're going to deploy Flannel. We'll deploy it to the Linux nodes first, and once those are all up, ready, and running, we can then deploy Flannel for Windows. And so now we can see that the Windows nodes have come up. Finally, now that the Flannel host network is completely set up, we'll deploy kube-proxy to the cluster. So here we go and deploy kube-proxy. kube-proxy is going to take a few minutes to come up; those containers will be created, and now we've got a fully functional cluster that can run Windows and Linux nodes. The next step is to deploy two different workload components: an IIS application, and a helper pod so that we can see communication happening back and forth between those pods. We're just switching over to the default namespace to deploy the IIS component, and then we're going to deploy a multi-arch image that's going to be our helper test component. We can see that they've come up online here, and we've got IP addresses, and here we have the IIS service. We're going to go out and make sure that got provisioned correctly: copy that IP address, open up a browser, and here we've got IIS running on that workload cluster. The last step of this is exec-ing into that test component. We're going to query the IIS service, and we'll also make sure that we have an internet connection out to an external site. And finally, here we'll see that we've got all of the pods, and at the bottom you'll see that we have Windows Server 2019 with Docker EE running on it, and we were able to query and deploy all those pods to it.
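The IIS-plus-helper workload from the demo can be approximated with a manifest like this. The names, image tag, and Service type are illustrative assumptions, not the exact manifests used in the recording; the nodeSelector is what steers the pod onto the Windows nodes:

```yaml
# Rough reconstruction of the demo's IIS workload on a mixed
# Linux/Windows workload cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iis
  template:
    metadata:
      labels:
        app: iis
    spec:
      # Schedule onto the Windows nodes the cluster just gained.
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
        ports:
        - containerPort: 80
---
# Expose IIS with an external IP, as shown in the browser step.
apiVersion: v1
kind: Service
metadata:
  name: iis
spec:
  type: LoadBalancer
  selector:
    app: iis
  ports:
  - port: 80
```

Something like `kubectl exec -it <helper-pod> -- curl http://iis` then reproduces the in-cluster connectivity check from the demo, with the helper pod name filled in from `kubectl get pods`.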
And so this is an example of using Cluster API Azure to deploy Windows nodes. I'll hand it back to you.

What a dynamite demo. Thank you, James, and the entire team, both from SIG Windows and from SIG Cluster Lifecycle, that's been working on this. You know, this is super, super exciting for me. It's going to make it easier for IT admins and operators to deploy Windows clusters by basically just defining a YAML for their clusters. They give us a specification, and then we do all the work and all the plumbing to get their cluster up and running based on that definition and the desired config. This is great, and it's going to make operations so much better. On top of that, now that we're making it easy for you to create your cluster, you can modernize your apps and run Windows containers on Kubernetes. As you've seen, we have a tremendous amount of innovation happening in SIG Windows. One of the things that Mark and I have talked about in the community in the past, and in one of the blogs we've written: come and join SIG Windows. It's one of the few unique opportunities you have in Kubernetes to work across the entire Kubernetes subsystem. You can work on storage, compute, networking, the API, Cluster API; we have everything. So if you really want to touch every SIG in Kubernetes, Windows is one of those areas. We have weekly meetings that are all recorded, so you can go and view the recordings of past meetings. If you want to come in and help us write documentation and user stories, fix some bugs starting with a good first issue, or review or open PRs, we're a welcoming community. If you want to come in and contribute, we'll mentor you and help you get started. I'm going to leave this slide up; this is how you can engage with our community. We have our IDs on the left, on GitHub as well as Slack.
We have our channels, our mailing list, our documentation, our YouTube playlist, as well as our Zoom link. I want to leave the last 10 minutes or so here for Q&A. Thank you all, and we appreciate the time you've spent with us today.