My name is Fadi, and today we're going to talk about how to improve your IT efficiency across your hardware, software, licensing, and operations by using containers. So if you manage an estate of servers that host your organization's services, this is for you. We'll start by highlighting some of the challenges we see our on-premises customers deal with today. We'll do a quick intro to what containers are. We'll also talk about what lift and shift is. Then we'll look at Windows Server containers specifically. And at the end, we'll go through a demo of containerizing a legacy .NET app to Windows containers through a lift and shift pattern. Now, if you're managing a private data center where you're hosting your business applications, some of your hardware is possibly reaching end of life. You're also seeing some of your hardware resources, like memory and compute, running out. So you're probably thinking: do you buy new hardware? That can add up and be expensive. Or is there a way to improve the efficiency of your applications? Maybe there's something you can do with your developers, or maybe there's something you can do on your own. You're also probably managing your OS licenses. If you've got apps running on older OS versions, like Windows Server 2019 and 2016, you're out of mainstream support, which means you're not able to benefit from the improvements Microsoft is investing in, like OS bug fixes and enhancements. If you're running on earlier versions, like 2012 and 2008, you're past the extended support date, which means you're not able to receive security fixes, so your applications are vulnerable to security risks. And at that point, you're probably also thinking about how to manage your OPEX spend as you manage your Windows Server licensing costs.
In particular in 2023 and 2024, we're seeing a lot of customers not only adopt more agile methodologies to improve their development efficiency, but also install tools like GitHub Copilot that help them ship fixes and production code faster. If you're still relying on a waterfall-like development process, this bottleneck between IT and devs will only grow. So how do you manage and scale your ability to support this growing number of apps and faster deployments? We see this problem faced by our larger customers and our smaller customers as well, where they have to manage applications that use different frameworks and different architectures. Some even have their own custom scripts for deployment and management. So how do you manage all of that in one pane of glass? Let me walk you through what containers are, if you're not already familiar with them, and we'll revisit these problems through the lens of containers. Let's say we've got an app running on an IIS server, and it's listening on port 8080. And let's also say there's another app on a Kestrel server, and it's running on port 8080 as well. This is a problem if the ports are hard-coded, which means you can't really change the port, and it's a very common scenario that we see customers deal with. This means the Kestrel app can't run on the same host as the IIS app, because both of them are listening on 8080. So you'd want to move one of the apps to a different virtual machine. The underlying challenge with managing apps on VMs isn't just the hard-coded ports; it's also managing the complexity of the dependencies that your apps running on VMs need, and keeping the additional resource overhead of your VMs to a minimum, because there's extra overhead you could really cut out. I'll touch on these in the next slide.
In the container model, you've got containerized applications, and here I mean you're containerizing both app one and app two. In each container, you've packaged the application with its code, runtime, system tools, libraries, and settings, so both can be deployed independently, in isolation. There's also a neat difference here in how you manage isolation. In the VM model, the hypervisor, like Hyper-V, virtualizes the underlying physical infrastructure, so all your apps are running in the same isolation boundary; in this example, it's the Server 2022 VM. In the container model, the container engine virtualizes the OS instead of the hardware, so each app has its own isolation boundary. They're isolated from each other, which means both app one, the IIS app, and app two, the Kestrel app, can be listening on port 8080. There's also a higher-level difference between virtual machines and containers that you've got to appreciate, which is the development process and pattern. With containers, the best practice is to deploy one app per container, and there are a lot of benefits that come from this more modular encapsulation that we'll touch on in the next few slides. Now I want to take a step back and talk about the lift and shift pattern that I mentioned earlier. The best place to start, I think, is walking you through the journey of how we used to deploy applications and where we want to go. We used to deploy applications on physical servers directly, and that meant you would run a single app per physical server. We usually found customers were able to utilize about 10% of their CPU, which is a significant underutilization of their hardware. We then virtualized those applications, putting two to three VMs on the same machine, and we were able to see higher CPU and memory utilization, for example, which obviously yielded higher efficiency.
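To sketch how that isolation resolves the port conflict in practice: each container keeps its hard-coded internal port, and the host maps it to whatever external port is free. The container and image names below are hypothetical, just for illustration.

```shell
# Both apps keep listening on their hard-coded port 8080 inside their own
# container; the host maps each one to a different host port.
docker run -d --name app1-iis     -p 8081:8080 mycompany/app1   # hypothetical image
docker run -d --name app2-kestrel -p 8082:8080 mycompany/app2   # hypothetical image
# Traffic to host port 8081 reaches the IIS app, 8082 reaches the Kestrel
# app -- no conflict, no code change, same machine.
```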
But then we got back to the same problem at the application layer: if you wanted to maintain the same level of high availability for your applications, you would want to run two to three, perhaps four, instances of each of those applications, each at, let's say, 25% utilization. Only during peak hours would you see all of your VM instances reaching 80% utilization. Most of the time, during low hours, you would have carved out resources that you're not able to give to other applications. Sure, there are workarounds where you can dynamically reallocate your memory, for example, but there are risks that come with memory leaks and memory fragmentation that you need to mitigate. The second point here is the additional management required for the applications you're deploying on VMs. Let's say there's an instance of an IIS server inside one VM, a SQL Server instance inside a second VM, and another middle-tier service in a third VM. If each one comes with its own configuration, deployment process, and bespoke scripts, that exponentially increases the operational complexity of managing your applications. When you multiply that over 100 apps, it becomes a manual operations question instead of an automated process where you're able to standardize application deployments. Now, in the containerized apps pattern, like I mentioned before, when you deploy to containers, the best practice is to package each application into its own container. For application one, for example, let's say during downtime you're running the application across two containers, and during peak time you're scaling that up to six containers. This is actually a great thing, because most of the time, the resources that would have been allocated to those four extra containers are freed up for other uses.
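As a sketch of that scale-out and scale-in, assuming the app has been deployed to Kubernetes as a Deployment named app1 (a hypothetical name, not part of the demo):

```shell
# Off-peak: run only two replicas of the container; everything else is
# freed up for other workloads on the same cluster.
kubectl scale deployment app1 --replicas=2

# Peak hours: scale the same container image out to six replicas.
kubectl scale deployment app1 --replicas=6

# Or let the cluster react automatically to load, targeting 80% CPU.
kubectl autoscale deployment app1 --min=2 --max=6 --cpu-percent=80
```

This is the contrast with VMs: the carve-out is per container and changes over the day, rather than being fixed per VM.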
That means you're only using the resources that you need, instead of carving out resources for VMs in a fixed manner over the entire day. The other benefit that comes with containers is improved app portability. You're able to move to newer OSes, and you're able to manage your OS licenses because you don't have to deploy on legacy operating systems, and that reduces the number of OSes at the application layer. It's also easier to deploy that same application between on-premises and cloud because, again, you're encapsulating the application in a Dockerfile, for example, and you're guaranteeing consistent behavior no matter where you deploy it. And the lift and shift model specifically is done with the least amount of code changes, if any. I'll expand on that a little later, but we do see it as the most popular way to start taking advantage of the benefits of containers while you have all these existing apps already on-premises. Speaking of app portability, I want to walk through a typical journey of migrating an IIS app and compare that to what you would expect from migrating an application that's already containerized. Let's say you've got a simple app and a new server. To migrate, you just install IIS on the target server and import the files with their IIS configuration, which can be done manually or using Web Deploy. Great. If you're migrating a simple app to a server with existing apps, you might run into situations like the port conflict we talked about earlier, so you want to make sure the configurations of the different applications aren't in conflict with each other. If you're migrating a complex app to a server with existing apps, though, which is typically the case, Web Deploy is not going to catch all the dependencies that you need to manage and port over.
You'll need to review your log files to find all the other dependencies mentioned there, and also make sure that your third-party apps can run on the target server as well. The thing to highlight here is that it's manual, and it's never really perfect, because you can't guarantee that the blueprint you're following covers all the edge cases that apply to you. And this situation doesn't only apply to web apps; it applies to any app, including third-party apps. Compare that to a containerized app: like I mentioned before, the environment is already predefined. You've encapsulated all of your dependencies, so you're able to run that same application on your host, on your clients, like your laptop, or on a server, be it on-premises or in the cloud. You also get benefits from an operations perspective when you're able to standardize your applications. I want to share that when I was a software engineer, I spent a lot of time recreating application environments on my local machine so that I could replicate edge cases but also develop new features. I would have to, for example, provision a virtual machine with a base OS, install the OS, configure the settings for that server, and install my app along with its dependencies. And any time I needed to update the application, I would have to log into the VM, make the changes, and make sure the changes weren't affecting the OS or other applications on that server. With containers, you can pick up basically any device, write a Dockerfile that specifies the application's dependencies, and build a container image that you can deploy, as we mentioned before. And because you're deploying faster, you'll have more time, especially your engineers, to focus on features and bugs that improve the customer experience. We work with a lot of customers who go through this migration journey as well.
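The build-and-update loop I'm describing looks roughly like this; the image name and tags are placeholders, not from the demo:

```shell
# Build a container image from the Dockerfile in the current directory.
docker build -t myapp:1.0 .

# Run it anywhere Docker is available -- laptop, on-prem server, or cloud VM.
# The environment inside the container is identical in all three cases.
docker run -d --name myapp -p 8080:80 myapp:1.0

# Updating the app is a rebuild and redeploy of a new image version,
# not logging into a VM and hand-editing it in place.
docker build -t myapp:1.1 .
```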
Relativity puts it really well in a case study that we worked on together. Relativity is a market leader in legal e-discovery and compliance software. They saw their deployment process to VMs slowing them down, especially from an innovation perspective. After adopting Windows containers and Kubernetes, they went from shipping features in six months to shipping within the same day. Now, we've talked about containers and we've talked about lift and shift; I want to bring Windows Server containers into the picture. If you're not familiar with it already, a Windows Server container packages a stripped-down version of Windows Server that's optimized for running applications and services. It's been a production feature of Windows Server for eight years, and a production, or GA, feature of Kubernetes for four years. There are three types of Windows Server images: Nano Server, Server Core, and Server. And there's one mainstream supported OS, which is Windows Server 2022. On that mainstream supported OS, we're constantly investing in improving the performance, reliability, and feature set of Windows Server 2022. This latest mainstream supported OS came with a significant image size reduction, which we have a blog post about, efficiency improvements in networking, IPv6 support, and many more. And we're excited about Windows Server 2025 coming soon. On the use cases side, we see both IT and engineering use cases for Windows Server containers. We touched on IT, but summarized together, there's the angle of controlling your costs by running fewer OS instances, running more apps on the same hardware, so increasing utilization, and taking the lift and shift path for existing apps to Windows containers.
One of our partners, UnifyCloud, has developed a tool to help with migration, in addition to our other partners like CAST Highlight. They reported that 80% of the apps they've assessed are eligible for migration, like lift and shift, without any dev involvement. On the developer use case, because you're encapsulating your application, app dependency management is easier, since everything is already written in the Dockerfile. As you explore further modernization of your application and infrastructure, you can leverage new patterns like microservices and DevOps, with continuous integration and continuous deployment. We also get a lot of questions about what apps are supported on Windows containers, and it's really hard to count and talk about all of them, but the main ones I wanted to surface are .NET, IIS, the different Windows Server workload types, and also commercial off-the-shelf apps. The one note about .NET is that the minimum framework we recommend is .NET Framework 3.5, because that is still in support for a little under five years. If you want more information on use cases, you can find it in our documentation via the link at the bottom of the slide. I've talked about our work with customers like Relativity; I also want to talk about Xbox, with their Forza Horizon 5 launch that happened recently. The launch of Forza Horizon 5 was the biggest first-week launch in Xbox history. What they were doing with Windows containers was migrating their core services, 17 services in particular, which were using 40 VMs that sat mostly idle. These VMs were supporting core functionality, like helping players sell cars, for example. At the launch of Forza, there were over 1 million concurrent users across the entire launch week.
Xbox reported that they never saw the latency across all 17 services exceed 100 milliseconds. The point of the Xbox story here is that they were managing massive applications that had been running for a while. They used the same lift and shift model that we discussed: they took code running in a VM to a Windows container, and they got the benefits of containers, Kubernetes, and the cloud. So we talk about lift and shift, but you might be familiar as well with other modernization terms like refactor, re-architect, and rebuild. Rehost, or lift and shift, essentially means you're migrating an app from running on a VM to a container. Some motivations behind that are the immediate benefits in app efficiency, like we talked about with containers, but also bringing your applications all together under one control plane. Let's say you've got some applications already running on containers and some still on VMs; if you bring them all together under one control plane, they're all able to benefit from the same DevOps pipeline, for example. Refactor is a bit more work, where you're making some code changes to benefit from, for example, Kubernetes. Re-architect is really breaking the app down into, for example, a microservices architecture, still using containers. And rebuild is a complete rewrite of your application, so it's cloud native. Now, throughout this session, we've been talking about lift and shift, and we're going to do a demo walking you through how to containerize a legacy app to Windows containers. But before I do that, I wanted to highlight the tooling you have available to do this on your own. We've got Azure Migrate, which gives you a step-by-step on how to containerize a legacy application to Windows containers on different Azure targets. We've got CAST and UnifyCloud, who do code scanning as well.
They're able to help you break down what the containerization blockers are and prioritize, among your portfolio of apps, which ones are the low-hanging fruit. I highly recommend checking them out. Now I'd love to show you a demo. I'm going to switch to my dev box and walk you through how to containerize a legacy application. Awesome. So we've now switched to my window here, where I've got the eShopModernizing repo open. This is a reference legacy application published by the .NET team to walk you through how to manage legacy apps. They've got three different versions of the same app: one running with MVC, an N-tier one, and a WebForms version. I'm going to choose the WebForms app because it's using ASP.NET 4, which means it will not run on .NET Core unless you rewrite the code. So it's a hero scenario for lift and shift migration. And I'll run it on an IIS server using mock data on port 8081. I've actually got it running right here, where we can check it out and see what it does. It's basically a catalog manager: you've got a bunch of SKUs with searchable information, and you can edit the information here. Maybe I'll give this one a new title, appending "test" for example, so that we can follow it through the demo. I'll save it, and if I refresh, it will still show that this SKU name ends with "test". Now, this application is running on an IIS server. I want to migrate it to a Windows container, and one way to do that is using the Docker tools that come out of the box in Visual Studio. I'm using Visual Studio 2022 here, and I can add Docker support by selecting that option over here, and it will create a Dockerfile for me. What this Dockerfile does is use a base image, pre-made by Microsoft, that includes ASP.NET 4.8 preinstalled.
It references the working directory inside the Windows container, and it copies the published application from our local directory over here. And if I want to run it in a container, I can just select this container button to run that Dockerfile. You do need Docker Desktop installed, with Windows containers enabled, on your local machine. Great. So this is the same application running in a Windows container. One thing to mention, though, is that this Dockerfile is using Server 2019. I can simply change it to Server 2022 to benefit from a smaller image and an improved network stack, because we've been investing a lot in improving the efficiency and performance of the network stack for Windows Server 2022 workloads. Great. So I just showed you how to run the same application on Server 2019 and Server 2022. Now, sometimes your application might run in a container but might not be ready for the next step, if you want to run it in a container orchestration framework. Azure Migrate has this amazing tool, the Azure Migrate application and code assessment tool, which you can download from the Visual Studio Marketplace. I've already installed it in my Visual Studio, so I'd love to show you how it works and how you could use it to figure out what code-level or configuration changes you need to be aware of before you publish to Azure App Service or AKS, for example. So I come back here, select Replatform to Azure, and I can create a new report. I can select from a lot of targets; let's say I want to understand the considerations I need to make for AKS Windows. I'm going to close this so I have a bit more space. Amazing. So I select Next, and I select the eShop app that I want to assess. I can choose between source code and/or binary dependencies.
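The Dockerfile that the Visual Studio Docker tools generate typically looks something like the sketch below; the exact tag and source path depend on your project, so treat these values as representative rather than exact.

```dockerfile
# Base image published by Microsoft with IIS and ASP.NET 4.8 preinstalled.
# Switching the ltsc2019 tag to ltsc2022 is the one-line change mentioned
# above, picking up the smaller Server 2022 image and its networking
# improvements (the host must support Server 2022 containers).
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2022
ARG source
# IIS serves content from this directory inside the container.
WORKDIR /inetpub/wwwroot
# Copy the locally published application output into the image.
COPY ${source:-obj/Docker/publish} .
```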
I'm going to choose source code, because that's something I typically have control over, and I'll select Analyze, and it will create a report for me. The report gives you a snapshot of the things you need to be aware of on this migration path. I've assessed one project, and there are about 55 story points broken down amongst eight issues across 19 instances, to help you prioritize and understand the amount of work we think is needed to make those changes. There's one mandatory issue that the tool has recognized. Let's actually dig into the issues and see what we should be aware of. I see there's a "logging to local file" issue detected, and that's in the log4net XML. That's probably a third-party tool, so we probably can't change it. I do see, though, that there's a session state issue detected, and when I look at it here, it's highlighting that session state is stored in-process, in memory. That basically means if I made that change of editing that SKU by adding the "test" name, it would be lost as soon as the container shut down. Containers are ephemeral by nature; they're not supposed to be long-lived. So we should be building and deploying these application containers expecting that they'll be shut down and spun up without impacting the customer experience, and so that is a very reasonable concern, and a reasonable edit to make to your application.
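As a sketch of the kind of fix the tool is pointing at, the usual approach is to move ASP.NET session state out of the worker process in web.config. The server name and connection details below are placeholders.

```xml
<!-- Before: session state lives in the worker process's memory, so it
     disappears whenever that container is shut down or replaced. -->
<sessionState mode="InProc" timeout="20" />

<!-- After: keep session state in an external store (SQL Server here),
     so any container instance can pick up the same session. The
     connection string is a placeholder for illustration. -->
<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=mysessiondb;Integrated Security=True"
              timeout="20" />
```

StateServer or a distributed cache would serve the same purpose; the point is simply that state has to outlive any single container.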
Now, let's say I've finished understanding the issues, I've made the changes to my application, and I'm ready to publish to Azure. I can just come here, select my target, in this case Azure, and publish directly to App Service. That particular path won't go through the Dockerfile or the Windows container story we're talking about; you can also deploy directly to Azure VMs. But if you want to leverage Azure Kubernetes Service, on the cloud or on-premises, App Service, or Azure Container Instances, for example, you would want to publish first to the Azure Container Registry, so that all the services that use your Docker image can pull it from the container registry. Amazing. So we just went through a demo where we looked at a legacy app using WebForms, so ASP.NET 4. We deployed it locally on an IIS server, and then we containerized it using the Docker tools in Visual Studio. We assessed it and looked at some of the containerization, or PaaS, blockers to deploying your application in a container in production environments. And then we talked about the process to deploy your containerized app on Azure, be it AKS or App Service. Now, for next steps: if you want to learn more about Windows containers, we've got our documentation at aka.ms/containers. We've also got learning modules where you can walk through all the flows of setting up your Windows containers, and even managing and monitoring them. We've also got our blogs, where you can keep track of our latest improvements and enhancements to the platform. And if you have any questions, please reach out to your account manager, but you can also raise a question on our GitHub repository at microsoft/Windows-Containers. Thank you so much for joining the session. We hope you enjoyed it. Please fill out the evaluation form, and we look forward to seeing you in the next session.