Hello everybody and welcome to KubeCon North America 2021. This is the maintainer series talk for SIG Windows, so let's get started. First, I'm going to introduce all of the speakers, starting with myself. My name is Mark Rossetti. I'm a software engineer at Microsoft working on Azure, and I'm currently a co-chair for SIG Windows. There are my GitHub and Slack handles. Next is Danny Canter.

Hello, hello. My name is Danny Canter. I'm also a software engineer at Microsoft, a member of the Container Platform team within Hyper-V. My GitHub handle is dcantah, a little pun on my name, and you can find me as Danny Canter on the Kubernetes Slack.

Hey, I'm Jay, jayunit100, and I hang out with Mark and James and Friedrich too. I'm a SIG Windows lead and the tech lead for Kubernetes on Windows at VMware. Someday I think the kube-proxy for Windows will be cleaned up and moved somewhere fancy, hopefully the KPNG project, but we're not sure yet. I'm jayunit100 on Twitter.

Hey, everyone. My name is Brandon. I'm a PM on the Microsoft Container Platform team. I pretty much deal with all things containers and Windows, ranging from Windows Sandbox to Windows Server containers and onwards.

Hi, my name is Friedrich. I'm a software engineer at SAP Hybris, currently working on the Kyma platform, and you can find me on GitHub or on Slack.

All right, thank you everybody. Here's a brief overview of the agenda. We're going to talk about what's new in SIG Windows, give some updates about Windows Server 2022, then talk about the new Windows developer environment that we've been working on to make it easier to spin up Windows clusters, and then talk about HostProcess containers.

So here's a brief overview of everything that's happened since the last KubeCon talk. The first is that CSI plugin support for Windows is now generally available. There's a link to the enhancement for anybody who's interested.
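With CSI support GA, Windows pods can consume CSI-provisioned storage the same way Linux pods do. As a rough sketch of what that looks like (the storage class name is hypothetical and depends on which CSI driver your cluster runs; pod and claim names are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: win-data                          # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: my-csi-storageclass   # hypothetical; use your CSI driver's class
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: win-app                           # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: windows             # land on a Windows node
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2022   # example base image
    volumeMounts:
    - name: data
      mountPath: "C:\\data"               # Windows-style mount path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: win-data
```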
This means that all CSI plugins are now compatible with Windows nodes. Next is HostProcess containers, or as most people prefer to think of them, the equivalent of privileged containers on Linux, which hit alpha in 1.22. This is a huge milestone for Windows, and it enables us to support many other scenarios.

Next, for 1.23 we're pursuing a number of different KEPs. The first one is a way to identify Windows pods at API admission time. This will make it so that policies can choose to enforce specific settings on pod specs for Windows versus Linux pods, and most importantly it adds a new OS field to the pod spec. A lot of people have been asking for this, and we're excited for it to finally happen. The last one I wanted to call out is an enhancement we're also pursuing for alpha that will make it possible to view node logs with kubectl logs. This is going to address some feedback we've had that it's kind of hard to debug issues with Windows system services on Kubernetes.

All right, I'm going to hand it over to Brandon to talk about what's new for Windows Server 2022.

So with Windows Server 2022, you get an enhanced container platform. You can achieve faster download and startup times with the streamlined Server Core container image, which is smaller by about a gigabyte, and you can run more applications than ever before in Windows containers with improved app compatibility and a new Server base container image. You can run globally scalable applications with virtualized time zones, and you can run apps that depend on Active Directory without domain-joining your container hosts, using group Managed Service Accounts. Alongside this, you can deploy a consistent network policy with Calico across hybrid Kubernetes clusters.

We have also included many features from previous SAC releases in Windows Server 2022, so all of the features that were available in SAC are now available in a full LTSC release.
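To make the pod OS field from the 1.23 KEP mentioned a moment ago concrete, here's a hedged sketch of a pod spec using it (names are illustrative, and since the field ships as alpha, the exact shape may evolve):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-pod                 # illustrative name
spec:
  os:
    name: windows               # the new OS field: "windows" or "linux"
  nodeSelector:
    kubernetes.io/os: windows   # still needed for scheduling
  containers:
  - name: app
    image: mcr.microsoft.com/windows/nanoserver:ltsc2022   # example image
```

The node selector handles scheduling as before; the `os` field is what lets admission-time policy distinguish Windows pods from Linux pods.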
And we've introduced a new Windows Server base OS image, which increases the app compatibility surface available to container developers, and all of the new container images are available on the container registry. Alongside this, we're improving compatibility between hosts and container images with process isolation. Basically, what we're allowing here is the ability to run a Server 2022 container image on all versions of Windows 11 and Windows Server 2022, through the next Long-Term Servicing Channel release. So if you build a Server 2022 container image now, it should be able to run on your Windows 11 hosts or your Windows Server 2022 deployment up through Windows Server 2025. After that point, we'll add a deprecation scheme, so the Windows Server 2025 image will be able to run on the LTSC after it. This allows us to change the APIs between user and kernel mode across LTSCs while still ensuring that you can run your LTSC 2022 image for a long period of time, longer than was previously possible.

Alongside this, we are deprecating SAC releases. This is supplemented by the host and container image compatibility I just mentioned: your Windows Server 2022 container image will be able to run anywhere going forward and won't depend on an updated host.

So I'm going to hand it off now. Oh yes, we have some more information available here on Windows Server 2022. Feel free to take a look at any of these blog posts for more information, or contact us if you have any specific questions. So let's hand it off to the Windows developer environment now.

Yeah, so on the community side we have a bunch of cool stuff that will kickstart people, especially in the Kubernetes community, people that are developing, people that know Golang and understand Kubernetes, that want to get involved and help out in making Windows Server a really Kubernetes-native
enabled extension to Kubernetes, right? So at the node level, we now have a repository, github.com/kubernetes-sigs/sig-windows-dev-tools. We don't have the URL here, but that's fine, I'll show you what it does. You can clone it down, it has a couple of YAML files you can modify, and then you run vagrant up, pick your CNI, and it will compile the kubelet from scratch and compile kube-proxy from scratch. So anybody who's working on Kubernetes, on new Kubernetes features, et cetera, and wants to try things out, for example HostProcess containers or whatever, you can grab a bleeding-edge API server, patch the kubelet, and get it working with any kind of CNI, including network policies on either Antrea or Calico, in minutes. In the bottom left-hand corner you can see all the different things you can play around with. Friedrich will go a little deeper on that next. Go ahead, Friedrich.

Next slide, please. So let's take a look at how we do it. Again, we have a YAML file that allows us to set certain things. For example, you can set the Kubernetes SHA, so you can copy-paste from GitHub the hash for the latest version of Kubernetes, and set the Windows override flag to true. Now, once you've cloned the repository onto your host system, you'll find a Makefile that allows you, if you want to, to compile all the binaries, the kubelet and kube-proxy, for the Windows machine. The Makefile then starts a Vagrantfile, which spins up a Windows virtual machine and a Linux virtual machine. It starts a Kubernetes cluster, takes the token, and on the fly writes a KubeJoin.ps1 script that starts the node on Windows and automatically joins it to the cluster. After that, we set up kubectl and your CNI plugin. In this example we use Calico, so you get your Calico pod and your Calico services on the Windows node.
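As a sketch of the kind of settings file being described here, something like the following is plausible; these field names are illustrative guesses, not necessarily the repo's actual schema, so check sig-windows-dev-tools for the real variables file:

```yaml
# variables file (illustrative; see sig-windows-dev-tools for the real schema)
kubernetes_version: "v1.23.0-alpha.4"  # or a commit SHA copy-pasted from GitHub
build_from_source: true                # compile kubelet and kube-proxy yourself
cni: calico                            # pick your CNI: calico or antrea
windows_node_count: 1                  # how many Windows workers to spin up
```

With a file like this in place, the workflow is just `make` (or `vagrant up`) and the tooling handles building, booting the VMs, and joining the Windows node.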
And yeah, so in the end you get a cluster built from source. You can be at the bleeding-edge patch level if you want to, and you get a production-grade CNI solution set up for you, at the moment either Antrea or Calico. Next slide, please. And here's the link that Jay was reciting. Oh yeah, we do have the URL after all. Okay, great, there it is. Go check it out and try out Kubernetes on Windows, please. Thanks.

So next is Brandon on HostProcess containers. All right, so in alpha in Kubernetes 1.22 we released a feature called HostProcess containers. This feature's been a long time coming, and it enables a lot of new scenarios for Windows container users. Basically, HostProcess containers aim to extend the Windows container model and enable a wider range of Kubernetes cluster management scenarios. What we have here is a container that's capable of running on the Windows host directly, with direct access to anything that's available on the host. With your standard Windows Server process-isolated container, the container runs in its own silo, meaning it has its own copies of Windows services, binaries, and other various binaries that are specific to the container. With a HostProcess container, the container image is literally running just as a process on the host. It has its own volume-mounted file space that's part of the host, but the HostProcess container, depending on the user permissions you run it as, has access to all of the resources on the host. So you can do things like install drivers and perform a whole bunch of other management operations that apply to the host directly. You can use this as a way to set up and configure the Windows hosts in your clusters, so that your other Windows workloads, in process-isolated or Hyper-V isolated containers, can run with a reduced set of privileges.
So this enables a whole bunch of scenarios that make it easier for you to run your regular Windows Server containers more securely. Let's go to the next slide.

HostProcess containers are, like I said before, a method for the packaging and distribution of management operations that require access to the host. They should not be used as a method for deploying server workloads, containerized applications, or anything that requires isolation for security, because they do have access to the Windows hosts that run your Windows nodes. We created this purely as a method for performing management operations and as a way to reduce the security surface area of regular Windows Server containers.

These are Windows job objects, and they run directly on the host. So anything you can do with a process, you can do with a Windows HostProcess container. Like I said before, you're able to access the host file system, install drivers and system services, and reduce the privilege of your other Windows workloads. They are not process or file-system isolated like a process-isolated container, so they do not have their own namespace. Sure, they have a volume-mounted file space which they can use for their own container-specific state, but they still have access to the full host.

They're also not directly synonymous with Linux privileged containers. Although they enable similar scenarios, there are different security policies and different nuances that result from the differences between the Windows and Linux architectures. These will be supported on containerd only, as we've made a lot of the changes specific to HostProcess containers in containerd. And like I said before, they are not a replacement for Windows Server containers; they're a method for management and configuration of the Windows nodes that run Windows Server containers.
So let's jump into a demo detailing the specifics of how to use HostProcess containers in a real scenario.

Awesome, awesome. All right. So as Brandon just stated, the demo is going to show off installing and getting Calico running on a Windows node with Cluster API for Azure, via HostProcess containers. For some background on this: currently, to install Calico on Windows nodes, you need to install it as a Windows service, generally via the script that they provide, which is kind of a pain. A lot of the Kubernetes components for Windows are run in this exact same manner, and it puts an extra burden on whoever is setting up the cluster, whether that's you doing it manually or the cloud provider. So one of the top things, at least interest-wise, that we'd seen for HostProcess containers was solving this exact problem: moving away from this model of deploying so many things on Windows as Windows services, and moving everything to the same containerized deployment model that everyone's already accustomed to. Instead of running these things as services, we can just package them in a container image and run them via a HostProcess container. This gives you the freedom of the container ecosystem. Updates are a whole lot easier: you basically just pull a new image and redeploy, as you would with regular Windows Server containers, without any kind of custom upgrade logic.

So I'm going to start the demo. Like I said, this demo is basically just showing that we've gotten Calico working via HostProcess containers, in their alpha state right now, without having to run it as Windows services. In the bottom tab, you can see that we have a set of Windows and Linux nodes. The Linux nodes are up and operational, and Calico's already up and running on them. You might notice that the Windows Server node is currently in a NotReady state, and we'll find out very shortly why that is.
So we're going to describe the node and try to figure out what's going on. If we scroll up a bit, we find our culprit: the kubelet is not ready because the CNI plugin is not initialized. Pretty much the rest of the demo is about solving this and getting it to run. On the top tab, we're going to SSH into the NotReady Windows node to figure out why the CNI plugin is not initialized. We're going to check out some state. We check the location where the container runtime is looking for the CNI configuration files: empty. Then we check where the container runtime is looking for the CNI plugin binaries: also empty. So that is not good. Let's try to solve that.

We're going to open up an editor window with the spec that we're going to apply to solve this with a HostProcess container. Going all the way to the top, we have a ConfigMap with a data section; cni_network_config is the CNI configuration that we're going to write out in place on this machine to get things working. What's in it isn't too important for this. Let's go all the way down, and I'm going to pause here again. This is the new hostProcess option on the security context, specifically on the Windows options. If you're familiar with privileged containers on Linux, it's kind of the same thing, except obviously on the Linux options it's a bool named privileged, but it's pretty much the same use case here. You set this to true and you're asking for a HostProcess pod. You can also see runAsUserName. This is not a new field, but this time we're asking the HostProcess container to run as NT AUTHORITY\SYSTEM, which is a built-in Windows service account. It's actually the same account that Windows services run as by default, so whether Calico is running as a service or we launch it manually like this as a HostProcess container, there's really no behavioral difference.
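A minimal sketch of the security context being walked through here (pod and image names are illustrative; as an alpha 1.22 feature, hostProcess pods also require host networking and the corresponding feature gate on the cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example                  # illustrative name
spec:
  securityContext:
    windowsOptions:
      hostProcess: true                      # ask for a HostProcess pod
      runAsUserName: "NT AUTHORITY\\SYSTEM"  # built-in Windows service account
  hostNetwork: true                          # HostProcess pods use the host's network
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: demo
    image: mcr.microsoft.com/windows/nanoserver:ltsc2022   # example base image
```

Setting `hostProcess: true` at the pod level means every container in the pod runs directly on the host, with whatever access the `runAsUserName` account has.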
You can see we have an init container defined, and the comment might give it away: this init container is going to install the CNI binaries and network configuration files into the locations the runtime looks for. This will run before everything else. Scroll down a bit and we have two containers. The name of the first might give it away: it's going to set up the node to run Calico, running the node service PowerShell script that they provide. Then scroll down a bit more and we have the second container, which is going to run the Calico Felix binary, also via the Felix service PowerShell script.

So now we're going to apply this and watch for a state transition to see that the thing's actually running. At the bottom we should see it initializing in a second, yep, and then we wait for it to run. Now we go back to the top tab, and if we ls the directory we were in for the CNI plugins, they're actually there now, which is awesome. Let's check if the configuration files are there also, and they are. And let's just cat the config to show that no funny business is going on: it's the same thing that was present in the spec, now written out on disk. Cool, cool.

One final thing: it wouldn't really be a demo without showing that network connectivity actually works. I'm going to skip ahead a bit because there's some setup. We're going to deploy an IIS workload, or rather an IIS pod, and then see if we can actually access it. So this is the IP that we noted, and we gracefully wait for IIS to boot up. As that boots up: Danny, really quickly, the real reason this is really exciting is that with Docker we had the bridge, but in the containerd days you don't have that bridge anymore, right? So you can't rely on the bridge for these host process networks, so it's very critical. It's really exciting. When James showed this to me, I was beyond excited; all our demos before were kind of just crictl. Okay.
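As a sketch, a minimal Windows IIS deployment like the one in the demo might look like this (the names are illustrative and the image tag is an example; pick the tag matching your host OS version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-demo                      # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iis-demo
  template:
    metadata:
      labels:
        app: iis-demo
    spec:
      nodeSelector:
        kubernetes.io/os: windows     # schedule onto the Windows node
      containers:
      - name: iis
        # example image; use the tag that matches your Windows Server version
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022
        ports:
        - containerPort: 80
```

Once the pod is running, hitting its IP on port 80 exercises the HNS endpoint that Calico set up, which is exactly what the demo verifies next.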
So I'll wait for this thing to boot up, and we're able to access the server, which is awesome. The final thing to show, just to prove that Calico is actually involved here: we can check the logs for the Calico pod, and all the way at the bottom we should see some entries for setting up the HNS endpoint for the IIS pod. So everything seems to have worked out, which is awesome. And that is pretty much the demo. I hope this was cool to see and gives you an idea of the types of things these can be used for, to simplify administrative tasks at least. Thanks for watching; we're really excited about all the possibilities this brings to the table.

Awesome, thank you for the demo, Danny. I think everybody is excited about all of the different possibilities this can bring. For anybody who's interested, here's some more information about how you can configure your cluster to run HostProcess containers, and more about what the future holds. Jay, I'm going to hand it over to you now for some thank-yous.

Okay, we've got a lot of people that have been ramping up, new people and some old friends as well. Perry, of course, worked hard with Danny and the rest of the container team on the HostProcess stuff. These are all, of course, people that aren't here on this talk. Amim is a new contributor; he got Calico and related pieces working in our internal developer environments and also put a patch up to calico-node to make it easier to run it on different interfaces. Arvind has been in the community for a long time; he's answering questions and has some KEPs he's working on. Ravi, same thing there, and he also works a lot on our SIG testing stuff and is really hardening the definitions of some things. Jordan Liggitt, everybody knows him, he helps us with the API reviews and so on and so forth.
And Antonio helped us patch the userspace kube-proxy. Jamie Phillips and Luther are friends over at Rancher; they're always there to help the community out. Sebastien over at Red Hat is also getting involved, trying to fix default interfaces for Windows. And we have some other new contributors. Hongsheng has been working on making it easy to distribute Windows binaries in a simple way for image building and such. Wenli is working on reboot tests, and that's exciting because it's one of our new tests that uses HostProcess containers, so that we can have really robust tests of Windows rebooting, which is really important for Windows upgrades and all sorts of other things that can happen. Stuart Preston over at VMware is working on getting 2022 working with Image Builder and testing that, and thanks to John Schnake for getting Windows integrated into Sonobuoy so we can easily test Windows with it. That is exciting. Thank you.

And that's not all; we've got a lot of work to do, so come join us. I think we're one of the most fun SIGs I've ever worked in. We move quickly on things and we work together. We pair on things; we have pairing sessions after every single one of our SIG Windows meetings, and we just get in there and build stuff together. Anybody can join. You don't have to understand Windows to do it; most of it's just Golang, and it's just like anything else. We have community meetings every Tuesday at 12:30 EST, and there are a lot of opportunities for leadership there as well on various sub-initiatives of the SIG. For example, with the stuff Danny presented, there's so much interesting stuff you can explore around privileged containers, getting those working, et cetera. And of course there's always documentation and the like: testing things, trying things out, reporting bugs. And we've got a project board.
So if you want to go look at the project board and just grab a ticket and work on your own, that's fine too; you don't have to hang out with us. There are all sorts of ways to contribute. And that's pretty much it. Mark is our chair, and me, James, and Claudiu are tech leads, and you can reach any of us at any time. We're always happy to get in there and work on stuff with you. Join us in #sig-windows on Slack; that's our rallying point for the community. And there are all the other well-known mailing lists and GitHub repos and everything else. So yeah, come get involved.

All right, thank you to everybody who spoke today; the community really appreciates all of these updates and all of the information. And thank you everybody for attending the SIG Windows maintainer series session. We understand that this is a hybrid event, and unfortunately nobody from SIG Windows will be able to present live, but we hope to be able to do so in the future. And now, Q&A.