All right, we're gonna go ahead and start up. I'd like to thank everybody joining us today; we've got a nice group of attendees out there. Welcome to today's CNCF webinar, "Get Your Windows Apps Ready for Kubernetes." I'm Randy Abernethy with RX-M. I'm a cloud-native ambassador and I'll be your host. We also welcome today's presenter, Steven Follis, Manager of Solutions Engineering at Mirantis.

A couple of housekeeping items before we get started. During the webinar, you're not able to talk as an attendee, so there's a Q&A box at the bottom of your screen where you can drop in questions. What we'll do is collect up the questions as the webinar progresses, and then at the end we'll use whatever time we've got to knock down as many of those questions as we can get to. So feel free to add questions at any point in time, and at the end we'll pick up as many as we can. This is an official webinar of the CNCF, and as such it's subject to the CNCF Code of Conduct, so please don't put anything in the chat or in the questions area that would violate that Code of Conduct. Just be respectful; that's the bottom line. And with that, I'll hand it over to Steven to kick off today's presentation. Steven.

Excellent, thank you so much, Randy, and welcome everyone to this webinar today. Thank you for trusting us with your time this morning, afternoon, or evening, wherever you might be. My name is Steven Follis. As Randy said, I run a team of solution engineers at Mirantis. Prior to that, I spent almost three years at Docker working with container customers of all shapes and sizes. And I'm excited today to talk about the intersection point between the Kubernetes container orchestrator and Windows workloads.

Today we're gonna be talking a little bit about, first, why Windows containers? It's one of the most common questions I get when we talk about this topic: why would you even want to do that? So we'll set some context around why running Windows workloads with Kubernetes makes sense. We'll then move into some use cases, the common rationale we're seeing customers choose this technology approach for. And then we'll spend the majority of our time talking about what I'm calling real-world considerations: things that we've seen in working with customers that can help make your own adoption of this technology as smooth as possible. I do have some time for a demo today, and as Randy said, we'll leave some time for Q&A here at the end. So with that, let's jump right on in.

As far as why Windows containers, this is a question we get all the time. You know, containers are a Linux thing. They came from the Linux world: cgroups, namespaces, baked into the Linux kernel. Why are we talking about Windows? Well, as many of you may be aware, beginning with Windows Server 2016, Microsoft introduced support for Windows containers, which are analogous and very similar to their Linux container cousins. This then allowed us to take legacy .NET-based workloads running on Windows Server, containerize them, and get the benefits of containers with those Windows-based workloads. And one of the things that we've seen over the last several years is that there are still a lot of Windows applications out there in the world today.
So IDC, just a few years ago, estimated that over 70% of applications running on premises today are running on some flavor of Windows Server, be that 2012, 2008, 2003, or even 2000: apps that have been around for a long time. So while we in the industry often enjoy talking about the new shiny microservice framework, or things like Node.js, Golang, Rust, and Python, in reality there's a significant number of applications out there in the world today that can use the benefits of containers to breathe new life into them. By targeting these on-prem workloads, folks operating on-prem environments can shrink that footprint and become more efficient in how they operate those workloads today.

Secondly, from a people perspective, we still see a lot of developers out there utilizing the C# programming language. Nearly a third of respondents to the Stack Overflow Developer Survey state that they're still writing a lot of C#, which predominantly and historically is associated with Windows Server-based workloads. Certainly with .NET Core that has started to change, but a lot of the C# out there is still in the .NET Framework family. Speaking of frameworks, we still see ASP.NET and .NET near the top of Stack Overflow's lists of the top framework choices being used in the industry today. So hopefully all these numbers are painting a picture: there are still a lot of applications out there that we can go after and focus on containerizing, getting the benefits around portability and scalability, and all the capabilities of Kubernetes alongside them as well.

So from a use case perspective, we see a variety of values that folks find with containers and Kubernetes. The first is around consistent operations. Historically, we've had different siloed stacks for how we built and ran applications. Over here, I may have my Linux team running Nginx-based applications, and they do things one way. And then over on another side of the organization, we have the Windows team building and running Windows apps differently. There are large discrepancies between the two. What Kubernetes allows us to do nowadays is take one standard cluster and run a Linux-based application that we built last quarter and a Windows app that we built last decade side by side on the exact same infrastructure. This greatly decreases operational complexity by breaking down those silos and having consistency in how we run applications. It allows us to set up one set of firewall rules, one set of infra, one set of RBAC. We do everything once rather than every single time for each different kind of application. So consistency is a big benefit that we get with a myriad of different applications running on Kubernetes.

Next is around legacy .NET workloads. We stated a second ago that there's a lot of .NET out there, and those applications are typically in need of some tender loving care. Right now, you may have developers building greenfield, net-new applications who can take advantage of very seamless deployment and build experiences, whereas the teams supporting legacy .NET apps are oftentimes still running manual build scripts. They're running lots of PowerShell scripts to build and compile those applications.
They're opening multiple RDP windows to different servers and copying and pasting Web Deploy packages between them. There's oftentimes a lot of manual work being done, because back when those applications were created there weren't a whole lot of better options. Containers allow us to bring characteristics of more modern frameworks to this more legacy code. So we get a smoother deployment experience, a smoother scalability experience, and just a better operational context than what we've had historically.

Third is cost savings. This is really one of the biggest. Oftentimes, especially on-prem, we have VM sprawl: lots and lots of VMs everywhere, hundreds, thousands, tens of thousands of VMs, many times operating at 40% utilization at best, sometimes 30, 20, or below. So, not very utilized VMs. And it makes sense; nobody wants to run out of resources, so we over-provision those VMs. That can be costly to manage, costly to maintain, and just costly in general. Containers allow us to take the exact same applications running on those VMs and run them on less infrastructure through the multi-tenant nature of Kubernetes. We can run them side by side and get much higher utilization in a Kubernetes cluster than we can on many standalone VMs. So it's cheaper from an infrastructure perspective, and cheaper from a management perspective. And for those users looking at public cloud as an option, what we find is that the VM running on-prem right now at 20% utilization isn't a big deal; we already expensed that whole blade server. But as we go to the public cloud and start being charged on an OpEx model, where we're billed for every single second, paying for the 80% of a VM that's unused starts to get pricey, right? So by utilizing fewer resources, either on-prem or in the cloud, we can save substantial money when it comes to operations.

Speaking of the cloud, cloud migration is often front and center in many organizations today. Containerizing creates a very elegant way to shift applications from an on-premises context up to a public cloud, or even multiple public clouds, or hybrid. Containers are intrinsically portable; it's one of their best benefits. So by containerizing a Windows app, we're able to more easily move it to the cloud. We can containerize today and then, six or twelve months down the road, choose to move to the public cloud, or we can containerize and move to the public cloud all at once. So we have some options on where we want to run the application, and it enhances our flexibility around when and how we move to that public cloud, or back on-prem if need be.

And then finally, DevOps. Applications built over the last four to five years have been able to benefit from DevOps pipelines: continuous integration, continuous deployment, automation of all the things. However, on the Windows side of the house, we haven't always had the ability to use those types of practices with our legacy applications. From a DevOps perspective, we can now have very consistent pipelines for how we build Linux containers and how we build Windows containers with those different kinds of applications, thus simplifying our processes and taking advantage of a lot of the automation that DevOps and Kubernetes afford us. So these are five of the main buckets where we see interest as far as why to containerize and run apps on Kubernetes.
But the number one reason that we've seen over the last three years is all around Windows Server 2008. You may have seen it on Twitter or blogs yesterday, but as of yesterday, Windows Server 2008 is now officially end of life. This means there are no more security patches or hotfixes coming from Microsoft. We are past the support dates, standard and extended; there's nothing else coming. That said, there are still a large number of Windows Server 2008 servers out in the world today, and as of yesterday they are starting to represent severe vulnerabilities from a security perspective, from an audit perspective, from a governance perspective. There's a bit of a gold rush now to try to move workloads off of Server '08 and onto more modern, supported operating systems such as Windows Server 2016, 2019, et cetera. And so this has been a big driving force over the past year as everyone looks at, well, what are we gonna do with all these workloads we have running right now on '08?

There are several options. The main one we see is, hey, we're gonna refactor and upgrade this application. We're gonna take this .NET Framework app, tear it apart, recode it in the brand new, shiny .NET Core, move it onto Windows Server 2019, and we're good to go. That's a great pathway, but it typically doesn't scale very well. If you have one or two applications, that may be an option, but when we're talking about a portfolio of 100 or 500 or 1,000 applications, there's simply not enough manpower and time to undertake such a massive shift per application. There's a lot of rework, and many line-of-business apps may not have a visible enough ROI to justify investing that much time and resources. So refactoring and redeveloping the application can be challenging.

Next is a custom support agreement from Microsoft. Those typically come with a lot of zeros on them. Now that we're past that date, it's simply not practical for many organizations, based on the price tag of having very custom, bespoke support from Microsoft for those workloads. Best case scenario, it's a temporary band-aid; you're still going to have to address those applications at some point down the road. It's really kicking the can down the road.

Similarly, many cloud providers are offering limited support, really just some security fixes, if you lift and shift those virtual machines from on-prem up to a public cloud provider. So that is an option, but again, you're still running on an outdated operating system in the cloud just like you were on-prem, except that it may cost more at this point because the pricing model of the cloud is different. So that lift and shift is an option, but it's also delaying the inevitable. You're going to have to do something with those applications.

So those three are options, but the one we've found to be the most compelling is containerization. We can take an application running .NET Framework on a VM today and containerize the application itself along with its configuration, thus decoupling that app from the underlying operating system. Once containerized, we can throw away that Server '08 environment and move the container forward to Windows Server 2019, oftentimes without code change. Usually it's configuration change: adjustments to web.config files based on environment, those types of things.
And so once containerized, we then deploy onto Kubernetes, and we get a lot of the benefits of that deployment model without having to undergo substantial and costly redevelopment efforts. Furthermore, if we fast-forward two or three years down the road and need to go through this exercise again, we simply change the FROM statement in our Dockerfile to a different version of Windows, rebuild the container image, and now we're able to cope with additional upgrades down the road. So it's a more future-proof path than investing all that manual effort today. This containerization path is a really compelling way to mitigate the Windows Server 2008 machines that are still within environments and provide a more future-looking way of running that application code.

Okay, so that's why. Now let's talk about how, and some of the things to think about as you begin this journey down the path toward Windows applications on Kubernetes.

The first place I like to start is the server operating system: what are you going to run these containers on? I mentioned that Windows Server 2016 introduced support for containers, but there are actually a couple of different main branches of how we can go out and get Windows Server. This may be review for many of you, but I like to touch on it because it's not as universally known as I would expect. Windows Server comes in two different flavors. The first is what they call the Long-Term Servicing Channel, or LTSC. When you see Windows Server 2019, 2016, 2012, those are all on the Long-Term Servicing Channel, and this is really what I think about when I think about Windows Server. We get a new version every two or three years. We have the five-plus-five of support: five years mainstream, then five of extended. And this is really focused on stability and predictability. Microsoft is typically not shipping bleeding-edge features into the LTSC because they don't want to rock the boat. This is where historically we've built mission-critical workloads, on top of the very stable LTSC.

Starting with Windows Server 2016, a second channel was also opened up, called the Semi-Annual Channel, or SAC. If you've seen Windows Server, version 1909, or version 1903, 1809, 1709, any of that numbering scheme, that's the Semi-Annual Channel. And this is a very different release channel; you can't bounce back and forth between the two. When you install a server, you select one path or the other. On the Semi-Annual Channel, Microsoft ships a new version every six months, typically in the spring and in the fall; 1909 was just released a few months ago. These come with all the latest features, so they're much more bleeding-edge. For technologies that are emerging, such as containers and Kubernetes, the Semi-Annual Channel is where Microsoft puts all of their newest and shiniest features first. Then, every time they cut a new LTSC release, those features typically get rolled back into that LTSC branch. The downside we often see here is that the SAC comes with 18 months of support, and that is oftentimes too short a window for some organizations, especially those on premises that have very stringent qualification processes for approving operating systems. So you may be saying, look, hey, we just got Server 2016 approved six months ago.
You know, there's no way we can move to something this quickly, and that's understandable. So typically, if you're in the cloud, Microsoft and Mirantis would both recommend looking at the Semi-Annual Channel. That's where bugs are being fixed first and where new features are being released; you're going to have a better Kubernetes experience on the Semi-Annual Channel. But you can still use Kubernetes with Windows Server 2019 on the LTSC as well.

So that host operating system matters. What also matters is the base image you're building off of. In the container world, you may be aware that everything is built up off a concept of layers. It's not one large blob or one big binary, but instead several distinct layers that make up a container image. In the Linux world, you may be used to building on Alpine-based images, or BusyBox, or scratch; those are a few of the thousands available on Docker Hub. In the Windows world, everything derives from three base images.

The first is Nano Server. This is the smallest image available. On the latest Semi-Annual Channel release, 1909, it clocks in around 100 megs; it'll be larger on different versions of Windows. The size has changed quite drastically, to be honest. Nano Server is great for greenfield and cloud-native apps. If you were building a .NET Core application, or Node.js, and you wanted to run it on Windows, you would typically choose Nano Server. Also, if you have PowerShell scripts or automation or ETL-type jobs, you may be able to put those into Nano Server to keep the container size small. It's typically for when you don't need the full .NET Framework.

If you do need the full .NET Framework, that gets us into the second base image, which is Server Core. Server Core is really our bread and butter when we talk about legacy .NET applications, because we have the full .NET Framework there. It's really targeting those legacy applications that have been around for five, ten-plus years. We can go in and install those types of applications into Server Core and have all the components they need to operate.

If there's an application you're containerizing and you hit a roadblock because it's missing some component of Windows, we now have a third option that was introduced with Windows Server 2019, and that's the Windows base image. This carries the most Windows components available and goes well above and beyond what we have in Server Core. So if your application makes heavy use of Win32 APIs or other pieces of Windows, this would be kind of a last stop to try when containerizing that app. It's a more expensive, larger image than Server Core. Where I've personally seen this Windows base image utilized is for customers looking to do things like headless browser testing. Say we have something like a Selenium grid: we check in some code, spin up some servers, do browser testing, and we want to test on Windows. The Selenium driver pieces historically have not worked in Server Core, but they do work in this Windows base image, so it provides us more capabilities there.

We typically recommend sticking to whichever base image is going to result in the smallest container image. So while something may work with Windows as the base image, if it'll also work in Server Core, we always recommend Server Core, and likewise down to Nano Server if it would run there.
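To make those choices concrete, here's a rough sketch of what the FROM lines for the three base image families look like. You'd pick exactly one; the tags shown are illustrative, and as we'll see next, they need to line up with your host version:

```dockerfile
# Option 1: Nano Server, the smallest image; greenfield .NET Core / Node.js style apps
FROM mcr.microsoft.com/windows/nanoserver:1909

# Option 2: Server Core, the bread and butter for full .NET Framework apps
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Option 3: the full Windows image, the "last stop" with the most components
FROM mcr.microsoft.com/windows:1909
```

Swapping that tag, say from ltsc2019 to some future LTSC release, is the one-line Dockerfile change mentioned earlier when it's time to move versions again.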
Keep things nice, small, and efficient from an image perspective. The reason we're talking so much about OS versions and base images is that in the Linux world, you're likely familiar with the fact that we don't really have to care that much about the host; as long as it's got a container runtime, we're typically good to go. On Windows, though, we have to care about version compatibility. I've got a small chart here; it's a simplified version of what's in the Microsoft docs at the link at the bottom. You can see that if we're running Windows Server 2019 as our host OS, then we need to make sure we match and align the base image running inside of that container, so we would need a Windows Server 2019-based image inside that container. If we had a 2019 host and tried to run a container image based on 1903, it's going to stop us, not allow us to run that workload. That's because the Windows API and kernel have been undergoing so much change over the last several years; we want to ensure the host and the container match up so that we don't have errors or weird issues from a mismatch between the two.

This is based on what we call process isolation, which is the default mode of running containers both in Linux and in Windows. In process isolation, if I have five containers on the same host, they're all sharing the host kernel and that host operating system. Alternatively, on Windows we have a second option that we call Hyper-V isolation. Hyper-V isolation is specific to Windows containers, and it allows us to wrap a running container in a very thin virtual machine, where that container gets a dedicated kernel just for that one container. Historically, for standalone Windows Server containers running without orchestration, or for other orchestrators, we've been able to utilize Hyper-V isolation, but we're still in the process of bringing it to Kubernetes. It's currently an alpha feature, so not quite ready for primetime today, but over this next year, in 2020, we expect it to mature greatly; it's under active development. What Hyper-V isolation allows us is a little more flexibility around version compatibility. Here, if I had a host operating system of 1903 running in my cluster and someone gave me an image based on 2019, I could run that container image on top of that host utilizing Hyper-V isolation rather than process isolation. So again, coming soon, but I wanted to mention it now so you can be thinking about it in the future; for now, make sure you're matching up these versions.

When it comes to the cluster makeup with Windows Server, this should look very familiar to you. In Kubernetes, we have the notion of master nodes that are the brains of the operation; I think of them as air traffic controllers doing all of the organization and operations of the cluster. And then we have the worker nodes that run the actual container workloads we give Kubernetes to run. Historically, we've had all Linux: the masters were Linux, the workers were Linux. As support for Windows Server has been introduced into the environment, it is available for worker nodes only. So one of the questions we get all the time is, hey, we're a Windows shop. We don't really like running Linux VMs if we don't have to.
Can we do a full, pure, 100% Windows Server cluster and still run Kubernetes? The short answer right now is no. We do require that master nodes run the Linux operating system, and then we can add one or more Windows Server worker nodes to that. So right now we do need a mix of Linux and Windows in the environment.

At this point, if I had a cluster with a mixture of worker nodes and I wanted to deploy a Windows container, say running IIS, and I put the pod spec in my YAML, did a kubectl apply, and the masters went to schedule it onto a node, that Windows container could potentially wind up on any of these four worker nodes, Linux or Windows. If a Windows pod were scheduled onto a Linux node, though, we'd have an error; it would not be able to run. Eventually Kubernetes may retry and attempt to reschedule, but what we'd like is a more elegant approach to ensuring that Linux pods run on Linux nodes and Windows pods run on Windows nodes. To do that, we utilize features inside of Kubernetes, specifically taints and tolerations. What I could do here is use kubectl to taint those Windows nodes with os=windows:NoSchedule. That would then block pods that don't tolerate the taint from being scheduled onto those nodes. When I go to deploy an actual Windows pod, I specify a toleration that allows the pod to get past that taint and be scheduled. And then I can use something like a node selector to ensure the pod lands on a Windows node. Over here on the right, I have three worker nodes: a Linux, a Windows, and a Linux. If I'm deploying with this node selector for Windows, that pod is going to land on a node that has that label, where we have a match. We've certainly done this in various ways with labels before; we're simply applying the same technique to Windows. We didn't reinvent the wheel here; we're utilizing currently available techniques inside of Kubernetes. There's a quick sketch of what this looks like in YAML a little further down.

Starting with Kubernetes 1.17, we also have the ability to use a node selector based on the windows-build label. You can see here at the bottom we have 10.0.17763; that is the build number of Windows Server 2019, the LTSC branch. So if I had a Kubernetes cluster that for some reason mixed LTSC nodes and SAC nodes, I could further specify in my node selector which nodes to land on based on that label. So we're getting even more granular: not just Windows, but that particular LTSC release or build number of Windows is available as well.

This is something to think about. The very first Windows pods that I ever scheduled, I was trying to figure out, hey, why are these not coming up and working? Well, it's because Kubernetes was trying to assign those pods onto a Linux node, and that is not going to work. You've got to make sure you've got node selectors and taints there to ensure the pod lands on the right node.

Some general things to keep in mind. We've talked a lot about Windows Server 2019; that really is the floor of support for Kubernetes. Significant work went into making Server 2019 work with Kubernetes.
And so if you have some dependencies there on 2016, that's where either standalone servers or alternative orchestrators may be an option, but we need Server 2019, or version 1809 or newer on the Semi-Annual Channel, to be able to support Kubernetes. That, like the node selector piece, is always something to keep in mind.

There's a concept in the Linux world called privileged containers, which is where a container has, I don't wanna say carte blanche to the host, but significant access to the host in terms of what it's able to do. They're special and they run in a much higher privileged mode. These are good for some daemons and agents and other use cases and are common in the Linux world, but we are unable to do that in the Windows world. The way Windows containers are put together, we don't have the ability to run a privileged container, and that limitation pops up at various times. I wanted to mention it here: if you're wanting a Windows container that somehow manipulates the Windows registry of the host, that's going to be blocked; there's no way to do that. You can edit the registry inside of the container, but we have more of a separation, more of a security boundary, there than what we have in Linux. We touched on the Linux masters piece already.

And then finally, when you're creating pods and setting resource constraints, or allocating resources rather, you really need to bump up the minimums from what you may be used to in the Linux world. An IIS image running on Windows Server Core, which is a large, five-gigabyte-sized image, is going to require much more in the way of resources than an Nginx container on Linux that may be 10 megs, for example. So you will need more resources when you deploy Windows containers. They're simply heavier, based on the other components of Windows running inside the container, so they need more resources. Something to keep in mind: it's very easy to accidentally starve pods by not allocating enough resources when you run them.

For a little bit of history: we mentioned that initial container support started back in 2016. In 2017, the Container Network Interface (CNI) work began to get networking going with Kubernetes, and work continued throughout 2017 and 2018, culminating in the 1.14 release back in March of 2019. That was the stable release that initially brought support for Windows Server into the Kubernetes project. So starting last March, we got the thumbs-up, the GA release that says we're stable and good to go. In each subsequent release after 1.14, the folks at SIG-Windows, at Microsoft, Docker, VMware, and a variety of companies involved in the community have worked to add feature after feature for Windows Server. In 1.15, we got alpha support for gMSAs; we'll talk about why those are important in a moment. In 1.16, initial support started for CSI and some work around storage. In 1.17, we started looking at how we can change the user we're running as inside the container, with the runAsUserName option that's now available. We'll see additional features this year with 1.18, 1.19, et cetera as they come down the path, but know that there's active work going on. The work around Windows Server and Kubernetes is just beginning, not completed, so there'll be new features in each release to look forward to that expand the capabilities of the platform for .NET-based applications.
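Circling back to the scheduling and sizing pieces, here's a minimal sketch of what a Windows pod spec can look like. The taint key/value, image tag, and resource numbers are all illustrative rather than prescriptive:

```yaml
# One-time, on each Windows node (illustrative taint key/value):
#   kubectl taint nodes <node-name> os=windows:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: iis-demo
spec:
  nodeSelector:
    kubernetes.io/os: windows
    # On 1.17+, you can also pin a specific build, e.g.:
    # node.kubernetes.io/windows-build: "10.0.17763"
  tolerations:
    - key: os
      operator: Equal
      value: windows
      effect: NoSchedule
  containers:
    - name: iis
      image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
      resources:
        requests:
          cpu: "1"       # roomier floors than a typical Linux pod
          memory: 1Gi
        limits:
          cpu: "2"
          memory: 4Gi
```

The toleration lets the pod past the taint, and the node selector makes sure it only ever lands on a Windows node.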
To get into the different considerations, the very first thing we often run across is identity. We've containerized an app, we go to run it, and boom, we've got a problem: we're having issues with identity. So we always suggest that folks take a good look at how an application works beforehand. Is it using basic auth, forms-based auth, or integrated Windows auth with something like Kerberos or NTLM? And also, look at what resources the app needs to talk to. Are there certain databases, file shares, MSMQ queues that the app needs to work with? We need to know that ahead of time, and if we're utilizing integrated Windows auth, we're going to need some additional considerations as well.

What we find is that the vast majority of apps over the past ten years have been written with integrated Windows auth. IWA has been very simple from a developer's perspective: I clicked a little box and voila, I have authentication in my app. Very convenient. The way that magic worked is that I was typically running on a web server in close coordination with an Active Directory domain controller; that server is what we call domain-joined. So when the application ran, the web server and the host itself could talk to Active Directory and handle user authentication for me, outside of my app code. Very convenient and very easy to operate.

In the container world, we do not domain-join every single container or every single pod to a domain controller. Instead, the pattern is that we domain-join each of the host nodes we're running on top of. So if I have ten Windows Server, version 1909 worker nodes, I would domain-join each of those to a domain controller. Then I utilize an Active Directory component called a group Managed Service Account, or gMSA. This is an AD thing, not a container thing; it's a component that's been in AD for several releases. It's essentially a passwordless service account that I can load into a container and then assign permissions to across my network.

The way this works is that when I run a kubectl apply to create a pod, I pass in a credential spec, which is simply a JSON representation of that service account. Then, on the server where the pod is being scheduled, the Host Compute Service of Windows picks that up and says, hey, this is a special pod; it's not a regular identity. It sends that credential spec to the Container Credential Guard, another component of Windows, so that it can talk to Active Directory to generate and maintain Kerberos tickets. Then it creates the container and sets the identity to that specific service account, so that any time that container calls out, to a database or a file share, it utilizes that service account credential, not the default identity.

I like to talk about this first because it's almost always the first thing we run into. And in many organizations, it means coordination with your Active Directory team: talking with that identity team around, hey, we're gonna need these service accounts, we need to grant these permissions to them. There's typically some legwork to be done to enable Kubernetes to work with integrated Windows auth-based applications.
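The Kubernetes side of that plumbing ends up looking roughly like the sketch below. Every domain value here is a placeholder for a hypothetical contoso.com domain and a hypothetical jobs-svc account; in practice you'd pull the real values from the credential spec JSON your AD tooling generates. Note the apiVersion as well: this resource started life at v1alpha1 and has been graduating since.

```yaml
apiVersion: windows.k8s.io/v1alpha1   # newer clusters may use a later version
kind: GMSACredentialSpec
metadata:
  name: jobs-svc-credspec
credspec:
  CmsPlugins: ["ActiveDirectory"]
  DomainJoinConfig:
    Sid: "S-1-5-21-0000000000-0000000000-0000000000"   # placeholder SID
    MachineAccountName: "jobs-svc"
    Guid: "00000000-0000-0000-0000-000000000000"        # placeholder GUID
    DnsTreeName: "contoso.com"
    DnsName: "contoso.com"
    NetBiosName: "CONTOSO"
  ActiveDirectoryConfig:
    GroupManagedServiceAccounts:
      - Name: "jobs-svc"
        Scope: "contoso.com"
---
apiVersion: v1
kind: Pod
metadata:
  name: jobsite
spec:
  securityContext:
    windowsOptions:
      gmsaCredentialSpecName: jobs-svc-credspec  # reference the spec above
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: web
      image: registry.example.com/jobsite:v1     # hypothetical app image
```

When the pod lands, the Host Compute Service hands that spec off so the container's outbound calls authenticate as jobs-svc rather than an anonymous identity.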
Inside of Kubernetes, we now have a custom resource definition, a CRD, for the gMSA credential spec, where we're able to pass in information about our domain: the SID for the account, the domain name, the NetBIOS name, standard AD information that we hand to Kubernetes. When we create a pod, this gets loaded in and that identity is utilized as part of Kubernetes. This feature has gone from alpha to beta over the past six months, and we're excited to see it reach stability here shortly. It's a very key, foundational piece of tech for working with Windows containers.

The next piece that comes up: oftentimes we hit a problem, or we want to keep tabs on what's happening inside the container. Linux apps typically log to standard out, so if I need to see what's happening in a container, I can run a docker logs command, or I can run an interactive session with the container. docker logs, kubectl logs, any of these commands are really built to take the standard out of the container and pass it back to me as the user so I can see what's going on. In the Windows world, on the other hand, Windows apps don't log to standard out. They typically go to Event Tracing for Windows, ETW; they go to event logs; they go to custom files; they go to different spots. So if you've ever run docker logs on a Windows container, it's pretty anticlimactic. There's not a whole lot there, because nothing is being sent to standard out; it's going to different areas inside of that container.

To address this, Microsoft spent some time over the past year doing some great work on a tool called Log Monitor. This is a small binary that helps us get the same experience we have on Linux, where the application writes to standard out and the container runtime picks it up with docker logs or kubectl logs; we can bring a similar experience to Windows. In this example on the right, we have a Windows container where our app and services are writing to those usual locations. We have the Log Monitor binary reading those and passing them to standard out, and then the runtime is able to serve docker logs and kubectl logs with that information. Very handy and very useful, and good to adopt earlier rather than later. When you're trying to debug an application that's not working in a container, having this Log Monitor tool can make a big difference, because otherwise you're typically opening a docker exec or kubectl exec session inside the container and trying to parse through XML or error codes in an exec session, and it's quite painful. This makes life a little easier, and it allows you to integrate with third-party logging tools that are built on the standard out of the container to get their information. So, very useful. It's available on GitHub, it's open source, you can grab it, and they've got several Kubernetes features on the roadmap for things like ConfigMaps and sidecar patterns: lots of goodies coming down the line there.

So with that, thank you for sitting through all those slides. I want to jump into some demos and make this a little more real for folks. What I've got here is a Windows Server 2008 virtual machine. I went ahead and prerecorded this so I didn't break anything. What's running here is Windows Server 2008 in a virtual machine.
This is a web server running IIS, IIS 7 to be specific. Also running on this server is something called the Job Site Starter Kit. This is about the oldest site I could find on the internet: a .NET 2.0 app that came off the CodePlex site, which is long, long gone, kind of a pre-GitHub thing. It's a two-tier application talking to a SQL Server 2008 database. I can log in with a username and password, and from there I can post resumes, I can look at jobs, I can set a company profile. Think of this as an incredibly low-fidelity LinkedIn; a very old app, ten-plus years old, but it's simulating a line-of-business app that may be running in your organization today.

To get more information about the app and how it's running, I like going to IIS Manager. How are the app pools configured? How many app pools are there? I can see I've got a jobs app pool running there with one app. It's running .NET 2.0, and we've got a service account as the app pool identity, a jobs-svc account. That's good information to know. When we go to Sites, we see our little job site here. We can go in and see the configuration; user authentication can be looked at here, as can any SSL bindings. And we can come in and see the source code that's here on the local disk, with our master pages and our ASPX pages.

Now we're gonna copy all of these files up and send them over to a machine that has a container runtime, that has Docker installed. So we're moving from a Server '08 VM over to a Server 2019 VM, and we have all that exact same code right here, brought over as a starting point. From here, we open up Visual Studio Code, or your editor of choice, to start building out a Dockerfile to create a container image from that code. We can see our application code is here; we'll open up that Dockerfile and talk through the pieces of how it fits together.

Up at the top, we set a new escape character. This makes things easier on Windows, because the Dockerfile's default escape character is the backslash, which is also the path separator in Windows, so things get kind of wonky. This little line up at the top is a really nice piece to have for Windows. We then use a base image on the LTSC 2019 line of Windows Server Core, and this is actually an ASP.NET image. Microsoft publishes not only the raw base images but some derivative images built on top. That image includes a default website, so we remove that. We then come in and configure any OS-level Windows features that we need. In the web world, this would be things like directory browsing, HTTP errors, ISAPI is a common one, static content: very common pieces that our application needs to run at the OS level. We run that with the standard PowerShell cmdlets.

Now, containers come up with all the ports shut down, so we have to expose the ports we're going to use for the web application. In this case, we've got three different ports we're going to open up. Then we start using the standard PowerShell web administration cmdlets to configure an application pool. We create the app pool and set the identity to Local System; it needs to be Local System or Network Service to be able to use the gMSA. We then copy our files into the physical path. If we wanted to handle SSL in the container, we could copy in certificates.
We can set ACLs so that IIS can access all those files. And then we go and initialize the website: we set up an IIS site, we set up bindings and any certs, and do any other IIS configuration we need. Typically we could stop there, but what I like to do in a lot of images, especially if you're debugging, is enable remote administration for IIS. These are a few extra lines that allow us to use the IIS Manager GUI against a container, which makes it a heck of a lot easier to figure out what's going wrong or what's functioning. Finally, we add support for that Log Monitor tool we just talked about: we download the binary, add it into the container, set a config file that says which areas of the container we want to monitor, and then set a SHELL and an ENTRYPOINT so that Log Monitor starts as part of that IIS process. So that's a standard rundown of what we do in a Dockerfile to be able to build and run this container.

To start off, I want to run a couple of containers before we get to our custom one, to show what that experience looks like on Windows. We set a couple of variables, really just for naming conventions, and we make sure we don't have anything running here. Then we run a standard IIS base image. This is oftentimes a base image we'd use if we had a small website we wanted to add in; we can start and build right on top of that. So we're going to run the IIS container, we'll open Chrome, and we'll tail out the container logs so we can see what comes out of that default experience. We select those, run them, and here locally on this machine we have IIS running, not installed locally, but in a container. I can refresh this a couple of times to generate some traffic, and on the command line we see nothing coming through log-wise. There's nothing there, nothing to be used. That's very typical for Windows containers, based on how that logging experience works, and it's what we'd expect to see any time we run docker container logs or kubectl logs against them. So that's one of the gaps the Log Monitor tool is going to help us with when we get to operating our own container. I'll remove this container before moving on.

Then it's on to building the Dockerfile we walked through a second ago. Same command as in Linux: docker image build, we give it a tag with a name, and we say where the Dockerfile is. It goes out and builds; boom, it's been cached here, but usually this takes a few minutes, typically longer than a Linux container. We've built that image and now we're ready to run it. We do a very similar process where we run the container and pop it open in Chrome. We can see it running locally, and immediately we see a problem: "Login failed for user NT AUTHORITY\ANONYMOUS LOGON." The container is trying to go out and talk to the database. We didn't change the connection string; it was set to utilize integrated Windows auth and that app pool identity. But the database said, no, I don't know who this anonymous user is, and it's not going to give them any results back.
So we have a user authentication issue popping up, which is very common in any application that calls out to Active Directory-secured resources. What we do is go into Active Directory and generate a new gMSA. We do that with a simple PowerShell cmdlet, where we give it a DNS host name, some service principal names, et cetera, and then run an install cmdlet on the servers. At that point the account is created in AD; I pre-created it here for time. We then create a credential spec file, using a cmdlet that essentially talks to Active Directory and formats a JSON file with some information for us. It's a read-only operation against Active Directory, and the file itself isn't considered super secret. In it we see information about the domain: a lot of the same information we would then take and plug into that gMSA credential spec CRD we saw a moment ago in the presentation.

So at this point, we have a gMSA in Active Directory and we have a credential spec file, and now we can run a container with that credential spec and bring the two together. We delete the existing, pre-running container and run the new one, this time passing in that gMSA. At that point, when the container calls out, it's not going to use that anonymous ID; instead it's going to use that Active Directory service account. And you'll see here in a moment the exact same website we saw earlier. It should look and feel the same; it's the exact same source code, simply running in a container this time rather than a VM. I can go in, I can even log in with my user credentials against that database, and the application itself is talking to the database, which is enforcing what I'm allowed to do. So I can post jobs, I can search resumes, all that same experience.

The last piece to mention and show here is how we can work from a debugging perspective to identify if and when we have issues on the containerization side. The first thing we do is look at the logs. You'll immediately see that it's not like that IIS base image earlier, where there was just crickets, nothing coming through. On this image, because we're using that Log Monitor tool, we see "the World Wide Web Publishing Service entered the running state," and we're seeing information coming back from IIS directly here in that docker logs command. kubectl logs is able to pick this up too and let us know what's happening inside the container; we don't have to go in and do an exec session to see that information. I just wanted to show an example of that working in the real world.

Next, I mentioned that we had enabled remote management. So I'm opening up IIS Manager on the host machine. I can go in and connect to a server; I give it the IP address of the container, then the username and password of the account I want to connect with, and now I'm able to use this full, rich GUI experience to introspect how IIS is configured inside the container. This can be incredibly valuable, especially if you have dozens of app pools, dozens of sites, lots of things going on inside of the container; it's a lot easier than digging through the alternative options.
We can look at bindings, different settings, authentication: everything you'd expect to be able to do inside of IIS, we can do here as well. So at this point, we've looked at the application. The last thing to do is push this image into a container registry, and then we're ready to go from a Kubernetes perspective: we add that image into our YAML, kubectl apply it to a cluster, and the image gets scheduled onto a node. Just to recap: we started with a virtual machine running Windows Server 2008, took that application code over to Windows Server 2019, built an image, added IIS remote management and log monitoring, and then pushed it into a registry. Hopefully that shows the workflow we often go through when we're containerizing these kinds of applications.

To close out on other considerations, persistent storage is the other big elephant in the room that we often see with legacy apps. Identify what kind of storage the application needs. Does it need to persist, or is it okay if it goes away, such as a cache? How large are we talking, 10 megs or 10 gigs? Are we talking databases, file shares, local disk? Knowing what the application interacts with is critical, and this can often be hard for a line-of-business app that was built eight years ago; the dev team's gone, it's used by only five people internally, but you can't get rid of it. It may take a little time to introspect and figure out how it's working.

Databases can often be easier, because we can use the same connection string information and typically leave databases where they are. We can containerize DBs, but I often prefer to leave them on the VMs they're on today, containerize the app, and just call out to the DB over connection strings. It's typically easier than moving the app and the database all at the same time; you can phase it that way. For sensitive values like passwords, connection strings, and certificates, use Kubernetes Secrets; there's a quick sketch of that below. That's another standard Kubernetes best practice, and it gives you nice RBAC-enabled separation so the right users can see the right sensitive information, and it doesn't get baked into your container. A very mature way of handling those secrets.

In the Kubernetes world, when we talk about storage for Windows, it's a little different than Linux. In the Linux world, CSI, the Container Storage Interface, is much further along; that's really becoming the best practice for how to do things in the Linux world. On Windows, FlexVolume, typically with SMB and iSCSI, is the way to go today; those are available as FlexVolume plugins. We're hoping over the next year that this story improves. There's a lot of great work happening in SIG-Windows around external provisioners and initial support for CSI. It's coming soon, but again, CSI has needed privileged containers, and so we're looking at alternatives since Windows doesn't have those available today. So if you're working on this today, SMB and iSCSI with FlexVolume are your best bets, or utilize one of the cloud-based provisioners if you're already running in a public cloud; those are also good options.

To summarize, containerizing legacy apps is a great way to gain agility and flexibility, and to run older applications just like you would a brand-new microservice you're building from scratch.
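Picking up that Secrets point, here's a minimal sketch of stashing a connection string in a Kubernetes Secret and surfacing it to a Windows pod as an environment variable. All the names and the connection string are hypothetical; your app would need to read its connection string from the environment rather than a hard-coded web.config value:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: jobsite-db
type: Opaque
stringData:
  connectionString: "Server=sql01;Database=Jobs;Integrated Security=SSPI;"
---
apiVersion: v1
kind: Pod
metadata:
  name: jobsite
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: web
      image: registry.example.com/jobsite:v1   # hypothetical app image
      env:
        - name: DB_CONNECTION_STRING           # app reads this at startup
          valueFrom:
            secretKeyRef:
              name: jobsite-db
              key: connectionString
```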
I definitely recommend that you start small and develop muscles around Kubernetes first. Kubernetes is incredibly powerful, but there is a learning curve, and everybody always wants to go find the biggest, most mission-critical, gnarliest application in the whole organization and start there. I'd really recommend more of a crawl-walk-run approach, where we choose some applications that are not Hello World, but are ones we can learn and grow on to solidify our understanding of Kubernetes and of Windows Server, and then go out from there. And finally, consider things early. Think about the identity needs you're going to have for the app. Think about the dependencies you have on other resources in your network, from a storage perspective, a security perspective, a monitoring perspective. Identify those early so that you're not right in the middle of containerizing and thrashing around trying to find the things you need. Taking these into account, we're just very excited that Kubernetes now enables so much more power and flexibility for those applications that may have been built five or ten years ago, running side by side with an app we built last week or the week before.

For additional resources, check out SIG-Windows. This is the special interest group in the Kubernetes community where a lot of this work happens. They even have a full Kanban board available where you can go in and see the features coming down the pipe, and there's documentation on Microsoft and Kubernetes as well. Finally, we have a full white paper available that was put together by the team at Docker when we were there, around delivering safer apps with Docker Enterprise and Windows Server. It really targets this scenario of legacy Windows-based applications. And with that, I just want to thank you all for joining us today. I know it's been a whirlwind tour through Windows Server and Kubernetes, but we do have a few more minutes, so, Randy, let's see if there are some questions we can speak to in the remaining time.

Hey, Steven, thanks a ton for a great presentation. Super awesome stuff. Yeah, so if you've got questions, just go ahead and drop them in the Q&A box down there at the bottom of the window. We've got about three or four minutes to run through some, and I think we've got a good 10 or 12 questions at the moment. So let me start off with a couple of softballs. Sure. What's the best way for people who are interested in Windows development on Kubernetes to keep up with what's going on in Kubernetes, what new features are showing up, that sort of thing?

Yeah, great question. That's something that's near and dear to my heart, because I'm always trying to keep up to date as best I can as well. I sent a link to that Kanban board here. Patrick Lang at Microsoft and the rest of the SIG-Windows team do a really great job of keeping an entire sprint planning board of what's being worked on and when. You can see the whole backlog of all the features they'd love to get into Windows and Kubernetes, and then they divide those out by release. So for release 1.18, I can look and see the exact cards, the exact features, being worked on for that release, and know what's coming versus what may be slated more for the 1.19 or 1.20 timeframe. That's a great way to see what's going on.
Additionally, back at KubeCon in San Diego in November, there were several sessions all based around Windows Server and Kubernetes, and those were some great resources for much deeper dives into what we were able to touch on today. The KubeCon events have started doing a really great job of providing robust coverage of Windows Server as a topic for a lot of those sessions. So those are two of the ways I try to keep sharp. SIG-Windows also has a weekly or bi-weekly meeting that they host for the community, and there are discussion notes from every one of those meetings, plus YouTube recordings of each community meeting. Feel free to jump on those calls and use those notes as a way to keep up on what folks are talking about: what's working well, where are some of the pain points we're trying to make better. It can be a great way to get a snapshot of what's happening in the community, and they do a great job of being very transparent and open with all of their work and plans.

Yeah, I can definitely echo the conference side of things. I actually hosted a talk by a gentleman from Docker and Mirantis at KubeCon San Diego, and it was just really fantastic. And I know there's a lot in the hopper for the European conference coming up. Another thing, too, and I don't know how you feel about this, but I always like to look at the release notes if I need a low-bandwidth kind of glance.

Yes, that's a great recommendation, Randy. Every time there's a major release of Kubernetes, in the last several there's been kind of an entire what's-new-in-Windows section, talking about gMSAs and storage. So that's a great quick-hits way to see the big major headlines for Windows during that release. Azure adds and all that. Cool.

So next one: Prometheus is hugely important to a lot of people for monitoring. How does the Windows side fit into the world of Prom?

Yes, of course, monitoring is very important. In the Prom stack, typically we're talking about utilizing the Node Exporter project on our Linux nodes. There is not a pure Windows version of Node Exporter, mainly because Windows is built differently; it's a very different architecture. But there is a project on GitHub called WMI Exporter that's being maintained, and WMI Exporter operates in much the same way. It goes and scrapes metrics out of Windows Management Instrumentation, WMI, and then provides those on an endpoint sitting there on the server, which allows Prometheus to go scrape that endpoint and bring those metrics in right alongside everything it's collecting off of Linux nodes. Furthermore, from a Grafana dashboarding perspective, there are several really nice pre-built dashboards on the Grafana Labs gallery, so you can take that WMI information and have a good starting point for how you want to format and look at it right there with the rest of your cluster information. So WMI Exporter is the main tool we'd use to integrate nicely with a Prom stack, and it deploys very similarly to Node Exporter; nothing too different from what we've been doing in Linux.

Super cool. So an open-metrics adapter, essentially. Exactly, yep. Cool.
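For reference, pointing Prometheus at those exporters is just another scrape job. A minimal hand-rolled sketch might look like the following, with illustrative node addresses and the exporter's default port of 9182:

```yaml
scrape_configs:
  - job_name: windows-nodes
    static_configs:
      - targets:
          - "10.0.0.11:9182"   # WMI Exporter's default listen port
          - "10.0.0.12:9182"
```

In a real cluster you'd more likely lean on Kubernetes service discovery than a static target list.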
And then from a compatibility standpoint, how do you manage all the version stuff and the version numbers? What's the easiest way to tackle that?

Yeah, to be blunt, it's a challenge, especially for long-lived clusters that you're updating; if you're on the Semi-Annual Channel and trying to update those clusters every six months, it can be a challenge. Automation is key. In the Dockerfile itself, in the FROM statement, we have an image, then a colon and a tag. We don't have to hard-code that tag; we can use a Dockerfile argument to make the Dockerfile dynamic. At docker build time, I have the option to pass in OS version A, or OS version B when it's available, or next month OS version C; I can make those dynamic. So if you can set up CI/CD pipelines around those container images, that goes a long way toward making this easier. And then from a host perspective, that gets into using whatever infrastructure automation tools you have: some of the cloud providers' tooling, something like Packer if you're rolling your own on-prem, or choosing a Kubernetes platform where a lot of that operational management is taken care of for you. Those are all options to make sure you're keeping up to date on the latest and greatest. One other easy option from a nodes perspective, in the notion of cattle: if you've got five existing nodes, you can just add five new ones, cordon off all the old ones, drain them, and then remove them, and have everything flow onto the new nodes as well. There are different options, but it's something to think about, because it feels different from what we've historically done with Linux, and that's why I like to call it out and put so much emphasis on it. So automation is king.

Yeah, indeed. Cool. Well, unfortunately, I think that's about all we've got time for, but I'd like to thank everybody for joining us here. The recording and slides will be online later; there's a link in the chat, but you should also get an email, if you signed up, giving you all the information you need to follow up. I hope everybody has a fantastic day. Thanks, everybody. Take care. Bye-bye.