Thanks, everybody, for showing up. This is a talk on Windows Server Containers for the application runtime, so hopefully you're in the right place. In case you haven't seen them yet, there are fire exits on either side of the room and at the back, so just be aware. I'm Sanjay. I'm on the Garden Windows team out in New York; I work for Pivotal. And I'm Matt Horan, a software developer at Pivotal.

So what does the Garden Windows team do? We were formerly the Greenhouse team, which was tasked with doing all things Windows in Cloud Foundry. What Garden Windows does now is just containers; all we do is make containers. We also contribute to the application-level runtime bits for Cloud Foundry on the Windows side, and for .NET as well. As a resource in the Cloud Foundry org, we also help out other teams when they need to write software for Windows. We lend our Windows expertise so that knowledge is shared within the Cloud Foundry organization.

So why? The charter of our team, as well as the BOSH Windows team, is to expand and improve the .NET experience on Cloud Foundry, and we do this by leveraging our knowledge about Windows. We want to improve the .NET developer experience and the Windows operator experience for Cloud Foundry. We want to reduce the burden that may exist in leveraging Windows within Cloud Foundry. And of course, we want to help CF development teams get better at using and developing for Windows.

So what have we been working on? At a high level, the things we've accomplished in the last year include buildpack improvements for Windows and maintaining the legacy Windows 2012 R2 stack. Legacy, you say? Well, the Windows 2016 stack became generally available just this year, and we're really excited about that. Stemcells are available on bosh.io for the public IaaSes GCP and Azure. Sadly, AWS is not supported at this time; please complain loudly about that so we can get support on AWS. And in cf-deployment, support for Windows Server 2016 is no longer experimental, so we're fully supported in open source cf-deployment. We've also been exploring high-level pragmatic parity features with Linux, and we'll talk a little more about that.

Initially, we did not support the buildpack app lifecycle on Windows cells: you could just push a Windows application and we'd kind of figure out how to run it. But now we have full buildpack support, and we've converged with some of the Linux features introduced in the last year. For example, .profile.bat (rather than .profile) support, and .profile.d support. So if you need to extend your application with, say, an APM agent or some other binary blob injected into your application, you can do that with a .profile.d script; I'll show a tiny sketch of one in a moment. It's now a lot easier to build these extensions, and third parties can build their own buildpacks and bring their application support to the platform. We've also added context path routing support to the HWC buildpack. This was an oft-requested feature from a lot of customers with production .NET workloads.

A core part of our work is also maintaining the Windows 2012 stack. We haven't been doing much active development on it, however; just small reliability improvements and bug fixes. Because we've been encouraging people to move to the 2016 stack, any new platform features or pragmatic parity features are being targeted at the Windows 2016 stack.
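Here's that .profile.d sketch. On Windows, an extension is just a batch script dropped into the .profile.d directory of your app root, and it runs in the container before your start command. This is a minimal illustration, not something we ship: the agent name and install path are hypothetical, and it assumes environment changes made by the script carry through to the app process, as they do with the Linux lifecycle.

    rem .profile.d\configure-apm.bat -- hypothetical APM agent hook
    rem The path is illustrative; a buildpack or your push would supply the agent bits.
    set APM_AGENT_HOME=C:\Users\vcap\app\apm-agent
    set PATH=%APM_AGENT_HOME%\bin;%PATH%
    set APM_SERVICE_NAME=fortune-teller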
We've been pushing that move because of the inherent limitations of containerization on Windows 2012. There is, you could say, worse isolation and resource limiting on Windows 2012. Containers share the host registry and the host file system. There's no network isolation, no ability to limit bandwidth, and there's a very significant vulnerability: if you have a process that starts up a console host, it's actually possible to escape the container. This is mitigated on 2016. It's also mitigated on 2012, but in a reactive manner; we have a piece of code that actually guards against it. But we don't feel confident giving that level of isolation to customers and putting our stamp of confidence on it, so we want to encourage people to move to 2016.

The 2016 stack allows us to adopt some well-established CF development practices. We've been working closely with standards like OCI, and moving toward closer collaboration with other core CF teams; I see some Garden folks out in the audience here. We initially implemented support for 2012 by running our own implementation of the Garden API. For 2016, we worked more closely with the Garden team to come up with a new abstraction that made it easier to share more code; Sanjay will talk about that a little later. Ultimately, the 2016 stack allowed us to improve the experience for .NET developers and operators when it comes to the applications they're putting on the platform.

When we started implementing this stack, we started with what we called 2012 R2 parity. At first we didn't even support the buildpack app lifecycle; we just supported pushing a bunch of code to the platform, and we figured out how to run it for you. Eventually we did bring in the buildpack app lifecycle, and today that is the only lifecycle we support. So you cannot push an OCI image or Docker image at this time; we'll talk more about that in a bit. We also brought in support for application security groups, so that your applications are isolated from any parts of the platform they shouldn't be able to talk to, and we brought in resource limits for memory and disk; I'll show a quick push example in a minute. We really targeted letting you push the same class of applications you ran on the Windows 2012 stack in the Windows 2016 world. We weren't thinking about new features we could bring to the platform; we just wanted to get everything working.

With that, we get some automatic improvements over Windows 2012. We actually have file system isolation: each container runs in a read-write volume that is unique to it and not shared with any other containers or the host. We run containers based on a container image, so we can decouple the common application dependencies from the stemcell; they can now live in a container image, following the same pattern we love on the Linux side. We also have real bind mounts; on 2012 R2, bind mounts are implemented as symlinks, which brings its own set of problems. And each container has its own registry, so if a container relies on writing things to or reading things from the registry, it can do that in a more isolated manner. We also have real process isolation. We can set CPU limits on containers. Users are unique to containers, and users are not able to see anything on the host.
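To make that concrete, here's what targeting the new stack with those limits looks like at push time. The app name is made up and the quota values are just examples, but the flags are the standard cf CLI ones:

    rem -s picks the Windows 2016 stack, -b the HWC buildpack;
    rem -m and -k set the memory and disk quotas enforced on the container.
    cf push fortune-teller -s windows2016 -b hwc_buildpack -m 2G -k 2G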
We can create a non-admin user inside of the container; we use the username vcap to follow the common Cloud Foundry pattern. And a container process cannot see processes on the host; everything is isolated inside the container. Perhaps the biggest one of all is network isolation. Each container has its own network "namespace," called a compartment in Windows, and we can apply bandwidth limits to it. This, I think, is by far the most important improvement we have.

Here's a diagram of the components on a cell in a typical Cloud Foundry deployment. You've got the BOSH agent controlling everything, and the components shared between Linux and Windows, same code bases but compiled for the different operating systems: the Rep, the Metron agent, Consul, and the route emitter. And there you can see the larger box for Guardian, which is the same Garden server on Linux and Windows, with a few differences. The main Garden components that matter here are the three plugins. There's an image plugin that sets up a container file system; ours is called groot-windows, and on Linux you may have heard of GrootFS. Then there's the runtime plugin, which actually runs a container. Linux uses runc, a common OCI runtime that's used all over the place, including in containerd. We have a similar one called winc, which does the same thing on Windows: it talks to the Host Compute Service to run a container; I'll show a quick sketch of its lifecycle shortly. And then in Cloud Foundry you have either the built-in networker, called kawasaki, or Silk, the default CNI network plugin in Cloud Foundry. For application security groups and container networking, we use a plugin called winc-network, which talks directly to the Host Networking Service in Windows to set up all of that networking. There would be a box for Groot, which was just discussed, but I guess I deleted it.

We've been targeting the Windows Server Semi-Annual Channel releases, which can be a little confusing for anybody who's been a Windows operator for some time. The concept started in Windows 10 and then came over to the Windows Server world. We decided to jump on the Semi-Annual Channel for a few reasons. We've been actively collaborating with Microsoft to get improvements to containerization integrated into the Windows kernel, and those changes show up in the Semi-Annual Channel, not in the release that came out in 2016. We also see improvements from other Microsoft partners being integrated into the Semi-Annual Channel. For instance, we now have sidecar container support. This lets you run processes that are not the primary purpose of your container in a separate container attached to the same network but not using the same resources, so you can apply different resource limits and isolate those processes. For example, maybe you have a health check that makes sure your application is running; it doesn't have to consume the resources your application itself uses. Microsoft has essentially moved to a six-month release cycle for these major Windows releases, and at the end of this year, 2018, they'll be releasing what's referred to as a long-term support release, Windows Server 2019. That will carry, as first-class supported features, some of the new capabilities we've seen come to the platform over the last year.
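A quick aside on winc before moving on: because it implements the OCI runtime spec, its lifecycle mirrors runc's. This is only a sketch; the bundle path and container ID are made up, and the exact commands and flags may differ from what we actually ship:

    rem An OCI bundle is a directory holding a config.json plus a root file system.
    winc.exe create -b C:\bundles\my-app my-app-instance
    winc.exe start my-app-instance
    winc.exe state my-app-instance
    winc.exe delete my-app-instance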
The release channel is one area where we're looking for feedback from the community: should we stay on the Semi-Annual Channel over time, or should we move back to one of the long-term support channels? We'd love your feedback on that.

Now I'm going to do something crazy: a live demo. These always go well, so let's see how it goes. All right, can you see this over here? Yeah, you can. Awesome. What I've got here is a virtual machine running Visual Studio 2017, with a sample application that I've published and pushed to Cloud Foundry. When I pushed the application, I also pushed a couple of other things that will be of particular interest for this demo. What I'd like to show you is remote debugging of an application running on Pivotal Web Services, Pivotal's public deployment of cf-deployment and all of the open source components we contribute to. On PWS there are Windows cells for internal Pivotal use only; unfortunately, we can't open that up to everybody just yet, as much as I would love to. It's not quite there. But I've pushed up this application, and along with it the Visual Studio remote debugger. It's an x64 app, so I've copied msvsmon.exe and all of its dependencies into the published directory of my application, and I've previously cf pushed it, so it's running. I'll show you it running over here. Here it is: just a Fortune Teller app, with a front end that talks to a back end and tells you your fortune.

Now I'm going to use cf ssh, which is new, well, not new to the platform, it came along with Diego, but it is new to Windows. It allows you to SSH into an instance of your application. I only have one instance of my application right now, so when I cf ssh, I'll go right into that one instance. On this command line, you'll see that I have -L 4022:127.0.0.1:4022. Anyone who's worked with the remote debugging tools might know that 4022 is the port the remote debugger listens on. So if I forward this port, and if everything works, I'll have a connection to my container, forwarding that port from my localhost to the remote side. And if I take a look in my container, hopefully msvsmon is already running; I started it up earlier. Oh, is that too small? I don't know how to make this bigger, sorry.

If I go back to Visual Studio, I can attach to a process, and it hopefully remembers that I did this just before coming into this room, because I did not trust that this would go well. But it went well; it connected. Here it is. If I hit enter here... I think I attached to the wrong process. So it is connecting to the port forward that I have going into my container. Here I have hwc.exe, which is the Hosted Web Core exe that's running this sample application I published. Now if I click on that, my remote debugger will connect to that instance of msvsmon running in the container. It's going over the conference Wi-Fi, over the internet to a server somewhere in Virginia, over SSH, so it's a little bit slow, but eventually it'll come up. If I go back to my application and reload it, hopefully I'll land in this debugger. This is just an example of something you can do with cf ssh and the other functionality enabled by the 2016 stack. But here you go: one of the things you can do is launch that remote debugger. And now my application's misbehaving.
Okay, well, I don't know what's going on, so I can jump in here and poke around at the actual code that's executing in this instance of my application running on Cloud Foundry. I don't know how this is gonna go, so I'm gonna cancel. Oh boy. I'm gonna click continue, and my application should finish loading over in this window, hopefully. Or maybe I broke it. Oh, I think it did finish. Well, I think that went about as well as a live demo can go, so I'm pretty happy with that. Yeah, I totally broke that application. That happens. All right, just gotta get this set back up. Now, where's my presentation? It's presenting somewhere. I think it's presenting there.

So, some stuff that still needs improvement. (They cannot see this. Sorry about that. All better.) We've been talking with Microsoft, and as Matt said, we want to get some of our feedback into the semi-annual releases. One of these things may never be fixed in Windows, but we hope it could be. A common deficiency between Windows 2012 and Windows 2016 is that you can actually exceed the memory limit allocated to your container by creating a memory-mapped file. This is something in the Windows kernel, an artifact of job objects; it may never get fixed, but it's a fact of life for now. One thing we do hope gets added, to hcsshim and to the Windows kernel, is the ability to limit the number of processes in a container. We could have a reactive implementation, similar to the guard we have on Windows 2012, but we would like it in the actual kernel.

Containers are also semi-privileged. What I mean by that is, if you were able to get administrator-level access as a user inside a container, you would be able to affect some things on the host. You can undo some of the file system quotas set on your container volume, and some firewall rules that are set up for non-administrator users would not apply to you either. This will hopefully be mitigated when we move over to Windows Server, version 1803, where we can use native HNS access control lists for our networking implementation.

Matt also mentioned that we do not support cf pushing Docker images at the moment. This is a pretty big sticking point, something people really want to use. However, there's a really tight coupling between the container image and the stemcell you run a container on. For example, if I try to run a container based on a 1709-kernel container image on a VM based on Windows Server 2016, the long-term support release, Windows will tell me that I cannot create a container from that image. You can imagine this would stop us from providing the zero-downtime upgrade guarantees that we want to give with Cloud Foundry; you'd have to have a very complicated deployment where you deploy a whole bunch of new cells, migrate your apps over, and then finish the deploy. That's just not the experience we want to introduce with this feature. We want people to be able to just cf push their Docker images and let them live on the platform. If we stick to a long-term support release that has all the features we want, we may be able to introduce this, but we cannot at this time.
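For reference, the experience we'd eventually like to enable is the one Linux cells already have; the image name here is hypothetical, and until the image's base kernel matches the cell's kernel, a push like this can't work against Windows cells:

    rem Works against Linux cells today; the goal state for Windows cells.
    cf push my-windows-app --docker-image myregistry/my-windows-app:1709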
So, in addition to parity with Windows 2012 R2, we want pragmatic parity with Linux. We want to enable existing features of the platform for .NET developers and improve their experience, but we also want to set ourselves up to integrate with new features in a timely manner, not be left chasing Linux. Ideally we just compile the same code on Linux and Windows and it should just work.

Among the existing platform features we've been working on is volume services, particularly an SMB volume driver and volume service. We're working with the Diego persistence team, lending our Windows knowledge, and they're pretty close to getting that done. We've also been exploring container-to-container networking; I'll show a quick sketch of the policy workflow in a minute. This is a big, rich, very technical track of work, and we've been working closely with the CF networking team to integrate with Silk and build a CNI plugin we can use; Silk is similar to Flannel, which is used in Kubernetes, but it's specific to Cloud Foundry. We've also been talking with the routing team about integrating with the Envoy proxy on Windows. Envoy is not currently supported on Windows, so we've been lending our Windows knowledge to build it for Windows and see whether we can integrate it into the platform, working with them and Diego to see where it takes us, whether it even works, and what the effort would be. In addition to these platform features, there's a feature that's more opaque to the user: the Garden team is starting to use containerd as the runtime back end for Garden, and we want to integrate with that as well, to get the same benefits they're getting. So we're trying to contribute to the open source containers community, contribute to containerd, and build a winc-backed runtime for containerd, so that we have the same integration and the same benefits on Windows and Linux.

Cool. At a high level, when we think about our roadmap, we're really focusing on three different types of applications. First, enabling greenfield application development and deployment right on the platform. That's easy, right? If you're building a cloud-native application from the ground up, leveraging Steeltoe or some other framework, building it to run on the platform is super easy. But that's not every application; as an enterprise, it's only a subset of your applications. So we're also enabling the lift and shift of certain applications to the application runtime, and that means extending the platform with support for those applications. For example, Integrated Windows Authentication: this is something we haven't supported to date, but it's an impediment to lifting and shifting a lot of applications already running on your intranet. Those are the kinds of things we want to enable immediately on the platform. Then there are things we're not so sure about: certain classes of legacy applications, or commercial off-the-shelf software, really exotic deployments that don't fit into the application runtime model. This is where we're thinking about what makes sense.
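Here's that container-to-container policy sketch. Once the work lands, the goal is for Windows apps to use the same workflow Linux apps use today; the app names are made up, and the command shown is the current Linux-side cf CLI:

    rem Allow direct traffic from frontend to backend on port 8080.
    cf add-network-policy frontend --destination-app backend --protocol tcp --port 8080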
Those harder cases are really where the container runtime comes into the picture. The Garden Windows team has lent a lot of its knowledge and experience with Windows Server containers to the container runtime team, and now we're working on getting CFCR running on Windows. That's a longer-term thing we're looking into in order to enable the class of applications that don't necessarily fit the existing application runtime model.

All right, that's all we have. We're always hiring at Pivotal in New York, so if you want to work on Windows and Windows containers, give us a call. We love pull requests and feedback. If anyone wants to start using winc, start using Guardian on Windows, or start deploying Windows 2016 cells with cf-deployment, definitely do that and give us feedback on where it's good and where it's bad. We'd love to hear from you. Any questions?

Sure, so for Pivotal Web Services, the timeline is unknown. There are two modes of operation for Windows Server containers: shared kernel, which is what we currently support, and Hyper-V isolation, which we do not currently support. Shared-kernel isolation is really not suitable for an environment where anybody can push their application code and run it; there are still known exploits and things we cannot protect against. Hyper-V does provide that level of isolation. We don't currently support Hyper-V, but if we do look into supporting it, that would be when we could be supported on PWS. If you have a relationship with Pivotal in particular, we are exploring ways to open up PWS access for certain Pivotal customers, so that is a possibility, but not for general developer use, sadly. Yeah, Hyper-V containers are very heavyweight; they're basically VMs. That's why we haven't invested in them yet, but if we see feedback that people need that sort of isolation, it's something for the future.

It's legal and licensing. We've been in touch with Amazon. Other customers have been in touch with Amazon. Other people complaining on Microsoft forums have been in touch with Amazon. Really, the thing that will push this over the line is more people complaining to Amazon and asking for support. So if you want AWS support for 1709 images, go complain to Amazon; add yourself to the list. What was that? Are they in Azure? They are. The Azure and GCP stemcells are on bosh.io right now. Anybody else? All right, thank you.