On this week's Visual Studio Toolbox, Aditi Dugar is going to finish our two-part look at how you can containerize your existing .NET applications. Hi, welcome to Visual Studio Toolbox. I'm your host, Robert Green, and joining me today is Aditi Dugar. Hey, Aditi. Hey. This is part two of our chat about containers. Exactly. In part one, we looked at adding container support via Docker to an existing Web Forms app. The whole idea is that we've talked quite a bit about containers on the show lately, in the context of microservices and the SmartHotel360 app. But this is our opportunity to step back and do, again, two things. One is to do a more gentle introduction to containers, to the extent that's possible. We stepped back and talked quite a bit about what they are and why, and then focused on existing apps. You've got an existing Web Forms app or MVC app, you've got an existing WCF service, all the code that people built earlier and are still maintaining, and how do containers fit into that story? Exactly. So that's what we're doing. In episode one of this, we created a container and we got it running locally, which is great. But now the question is, what do you do with it? How do you get that container out so that it can be used in production? One way, of course, is to run it on Windows Server. So you don't need the cloud to run containers; you can run them on Windows Server, or of course you can get these things up and running in Azure, and that's what we're going to talk about today. Right? Sounds great. Excellent. Awesome. So I think the first thing we're going to talk about is orchestrators. The first thing you want to think about when you're deploying to the cloud is which orchestrator you're going to choose. An orchestrator is basically a platform that's going to help you manage all aspects of deploying and setting up your containerized application. Okay. So there are a lot of different orchestrators out there.
There's Kubernetes, there's Service Fabric, there's Docker Swarm, there are all sorts of options, and all of them have their different pros and cons. But today I'm going to highlight a couple, and those are managed Kubernetes in Azure Container Service and Azure Service Fabric. Okay. And the main difference right now between the two we're going to highlight is just that Azure Container Service is a little bit more mature for Linux containers, whereas Service Fabric, as part of the Microsoft ecosystem, is a little bit more mature on Windows. But really it doesn't matter which one you choose; both of them will work just fine. Right. So I think we talked about this last time. If you're kind of new to containers, if you're doing .NET Core, so it's a brand new app, .NET Core can run on Windows and can run on Linux. You might choose Linux containers. Exactly. If you're using Kubernetes, they're smaller. I don't know performance-wise if they're faster, but they're certainly smaller and quicker to create, so your dev work is a little bit faster. And then if your .NET Core app is running on both, then you've got the flexibility there. Exactly. Obviously, if you've got an existing Web Forms app, you don't want to have to rewrite it as a .NET Core app just so you can run it in a Linux container and use Kubernetes. Yeah, exactly. And we're building on the scenarios that we talked about in the last show, which were really your existing .NET applications, so Windows containers. We do have Windows support for Kubernetes, which is there and improving, and will continue to improve quickly. Exactly. And so today, for demo purposes, I'm going to talk about Kubernetes in Azure Container Service and focus on that in the demo, but we have both walkthroughs highlighted online, so you can follow them step by step if you want to. So Kubernetes is an orchestrator, and Azure Service Fabric is an orchestrator. What's Azure Container Service?
Azure Container Service is basically the Azure environment in which Kubernetes is going to work. So you basically set up your resource group in Azure, then you set up your Kubernetes cluster and connect that into Azure Container Service. But Kubernetes is the actual orchestrator that you're using. Okay. It's confusing, I know. Yeah. So I want to talk quickly about the benefits of orchestration and why you want to use it. Yes. It's especially important when you have things like microservices in your application, but even for smaller applications it can be really convenient. You can have automated deployments and replication of your containers. You can easily scale in or scale out online, which makes things really easy in terms of deployment. Load balancing, so you can load balance between all of the containers and pods that you have in Kubernetes pretty easily. You can have rolling upgrades. It's a lot more resilient, because failed containers can be automatically rescheduled. So you don't have to worry about, oh, my container failed and there's going to be a break; you can automatically make sure that there are no breaks in your service. And then lastly, you can have controlled exposure of your network ports, so you can control what people outside of your cluster can reach. So it really turns out that containers are really about DevOps. Yeah, yeah. I mean, that's a great benefit. Everything you just said, I know, like me as the developer, I'm like, yeah, it's great, that's all happening, but I'm the dev, right? I built the app. I handed it over to the ops guys to do that. And the whole DevOps concept is to shrink that barrier and merge those two worlds. Exactly. But in the demo we're doing here, and the one we did previously, we didn't touch the app. Nope. So we got the app up and running in a container, added the Docker support, which made the container out of that app, and then you go and manage it. Exactly. Yeah, that's a great point.
I mean, if you remember, in the first episode the first picture that we showed was really calling this scenario "Cloud DevOps-ready," and it's really focused on highlighting the fact that this is just really, really helpful for DevOps. And you look at a lot of surveys asking people why they're interested in containers, and you find that they're really interested in the DevOps side. Exactly. And you use containers for DevOps as a way of more easily packaging up an application. Yep, exactly. Cool. So we're gonna get into the demo now, but just a reminder of where we were when we ended last time. We had an application with a WinForms front end, a WCF service in the middle tier, and a SQL Server on the back end. And we went through and containerized the WCF service and the SQL Server and got those running locally on our machine. Okay. So, basically, if you want to deploy to Kubernetes, there are two really high-level steps. The first is setting up in Azure and making sure your Kubernetes cluster is set up and deployed to Azure. And then the second part is deploying your application and all of the resources needed into the Kubernetes cluster. So today I'm gonna focus on that second half of deploying your application actually into Kubernetes. But that assumes that you've gone through and, in your Azure portal, created that Kubernetes cluster and have that all set up already. So that's what you can do in Azure. I know you can also do that in Kubernetes; Kubernetes has a whole web-based UI for setting up clusters and whatnot. You can do all of this through Azure. You can do all of this through Azure and set up the ACS that you need as well. And if you want step-by-step instructions, again, we have all of that laid out in our eShopOnContainers repo. So I'm not gonna go through it today 'cause it would take too much time, but you can definitely follow it through step by step. Okay, cool.
And there are a few terms I wanna bring up in Kubernetes before we dive deep into it, just so we're all on the same page. The first is pods. Pods are kind of the basic unit in Kubernetes, and the pod hosts your container. You could have multiple containers in a pod or you could have one container in a pod, but that's really the environment for your container. And then there's a service, and the service is really helping you network between the different pods. And then last- If you have two services and two different containers, they need to be able to talk to each other. Exactly. Just like we have our SQL container and our WCF container, and we wanna make sure all of those services are set up correctly. And then lastly, there's a deployment. The deployment is really gonna help you automate the setting up, the replication, the deletion, the creation of your pods. So those are the three key terms that you kind of have to know going forward. So the first step in that second part of the process would be to create our Kubernetes deployment files. And this is kind of similar to what we did with our Dockerfiles. So there are a couple of files that we have to create. Again, text-based, so super easy to create. But those are really just describing the step-by-step instructions for Kubernetes to make sure the deployment goes smoothly. So I've created this folder here in the root of my project. This is where my project is, and I've created this new folder, and the folder hosts a couple of different files: one for the SQL container and one for the WCF container. And they are YAML files. They are YAML files. YAML, yet another markup language, that's literally what it stands for. Yes, yeah, exactly. Which is hilarious. I think that's great. So let's go into both of these and I can walk through what's going on here. So first, here we're specifying that it's a deployment. We're giving it a name.
And then everything under spec here is just describing this deployment. So we have it labeled as SQL data, since we're working on the SQL one first. And then here you're gonna see this looks really similar to what we did in our Dockerfile. You're just specifying what image you wanna use again, so that same SQL Server Windows image. And then you're defining some environment variables. And then lastly, this part is important. This is saying that you want your pod to be attached to a Kubernetes node, and you want that Kubernetes node to have Windows running as the operating system. So that's where I'm specifying that. And this is one of the many tips and tricks: when you create containers, you need to be clear if it's a Windows container; you need to tell the environment hosting it that it is. Exactly. If you get this running up in, like, an Azure Container Instance, it's very easy to tell ACI that it's a Linux container when it's really a Windows container. And then, oh, what do you know? It doesn't work. It takes a long time to copy the file up, so you only learn long afterward that it didn't work, because you told the thing hosting the container that it was a different OS. Right, yes, exactly. So it's key to put Windows here, and that will help you a lot. That hasn't happened to me, but I've heard of people who, after they did that a couple of times, learned never to do it again. Yes, exactly. That's what I'm told. And so then we've specified that this is the end of the deployment here, and we can define another one right in the same file. And the second one is actually a load balancer service. And the load balancer is gonna do a couple of things. One, it's allowing us to specify this external IP here so that we can understand where people need to go to access this. And then secondly, it's exposing this port here, 1433, and that's the default port that SQL Server listens on.
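To make the walkthrough concrete, here is a sketch of what a deployment file like the SQL one could look like. The names (sqldata), the image tag, the password, and the apiVersion are illustrative assumptions, not the exact values from the demo; check your cluster's supported API versions and use a Kubernetes secret for real credentials.

```yaml
# Hypothetical sketch of the SQL deployment file described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sqldata
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sqldata
  template:
    metadata:
      labels:
        app: sqldata
    spec:
      containers:
      - name: sqldata
        # Same SQL Server Windows image used in the Dockerfile
        image: microsoft/mssql-server-windows-developer
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: sa_password
          value: "Pass@word"   # demo-only; use a secret in practice
      nodeSelector:
        # Pin the pod to a Windows node, per the tip in the episode
        "beta.kubernetes.io/os": windows
---
# Second definition in the same file: the load balancer service
apiVersion: v1
kind: Service
metadata:
  name: sqldata
spec:
  type: LoadBalancer
  ports:
  - port: 1433        # default SQL Server port
  selector:
    app: sqldata
```

The `---` separator is what marks the end of one definition and the start of the next within a single YAML file.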
So I'm just making sure that when traffic goes there, it's listening on that same port your SQL container is listening on. So theoretically, you could leave the SQL Server on-prem and have the container in Azure talk to it through a hybrid connection, right? You could, yeah, theoretically, if you wanted to do that, for sure. So that's actually the extent of this SQL file. Not too much in here. I'm gonna take a brief look at the WCF file and just highlight the one difference in here. It's basically the same as the SQL one, but we've specified a strategy here. This strategy is a rolling update strategy, and all that's saying is really that we want to make sure new pods are created before the old ones go offline, so that there's no break in your service. So that's really the main difference here. And you can go to the Kubernetes docs and find documentation on how to build these files and what this all means. Exactly, yeah. We specified a lot of that again in our wiki too as we take you through it, but Kubernetes has more extensive documentation on it as well. We'll see someday in Visual Studio right-click and publish using Kubernetes, where these things are just a dialog. Maybe, just like we have for Docker, hopefully. That would be great. Okay. Cool, so those are actually the only two things. Maybe we'll have an Add Docker Support that writes the Dockerfile for me and the Docker Compose file. Yep. That could be a logical next step. That would be nice. Okay. These are the only two files, actually, that I need and that are now deploying into Azure. So I'm gonna do this through VSTS. You can do this through the command line if that's your preferred method. Whichever way you want to choose is fine, but let's go into VSTS. So here in VSTS I've created a new release definition, and first I'm adding an artifact, which is just specifying where I'm getting my files and my code from.
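The one difference called out in the WCF file, the rolling update strategy, could be sketched like this. The surge and unavailability values here are illustrative defaults, not necessarily the ones used in the demo:

```yaml
# Hypothetical strategy section of the WCF deployment spec.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # create a new pod first...
      maxUnavailable: 0    # ...before taking any old pod offline
```

With `maxUnavailable: 0`, Kubernetes never drops below the desired replica count during an upgrade, which is exactly the "no break in your service" behavior described above.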
So all of my code is up in GitHub, so I'm just accessing the repo where the code lives and specifying that here. And then here in this environment I've added three different tasks: deploy the SQL container, deploy the WCF container, and refresh the WCF pod. Okay. So here's actually where I'm referencing those YAML files that we just created. So that looks a lot easier than having to learn the command-line prompts to do this. Yeah, it depends. If you're familiar with Visual Studio Team Services, I think it's easier in that sense, but there can be a learning curve; there are a lot of different things in VSTS, so whatever is easier for you. It's not that many commands in the command line either, so I think a lot of people start with that, so that's easier. So here in the configuration file I've specified that we're pointing at that SQL container deployment file that we just created, and that was in the root directory. That's why I created it in the same folder we were working in for our project. And then down here, you also have to make sure that you had your images pushed to a container registry. So we had them up in a Docker registry. You could also use ACR. Yeah, exactly. So there are options for Azure Container Registry or Container Registry, whichever one you want to choose. WCF container, same thing, except you're referring to that WCF YAML file we created. And then refreshing the WCF pod is pretty easy as well. And what does that do? That's really just making sure that the pod is refreshed. It's a step-by-step process, right? You're going to deploy your SQL container and then you're going to deploy your WCF container, but things don't necessarily go in sync. So you want to make sure at the end that you refresh, because your WCF pod is really relying on the data from your SQL container; you just want to refresh it at the end to make sure it's all in sync together.
So now I can go to the releases, and when this pulls up, we'll see the one I was looking at just now. And I can create a new release. And this can obviously be part of your usual flow: you make changes, you build, you release. Exactly. Or you can just use the release here because it's easier. Yep. Okay. I'm specifying that environment I just defined with those three tasks. And then I can choose if I want the latest build from GitHub, or whichever version I want to choose here, and say create. So release 10 has been created. If I go to release 10, it's not automatically deployed, so I'll just go here and press deploy. And it's just telling me that release 10 is the same as release 9, because that's what I created this morning. Okay. And if I go to the logs, here's where you can see the step-by-step process. So this will actually take you through the entire process of running through all those tasks I just showed you. But we're not going to go through this right now. It takes time. It takes time. You gotta spin up Kubernetes. Exactly. You gotta copy the container, which, as we talked about last time, is pretty large. Yeah. So it'll take a little bit of time. So I've actually already done a deployment earlier today, and we can go and take a look at that in my Kubernetes dashboard. So now you're over in Kubernetes. Exactly. You're out of Azure, you're into Kubernetes. Yeah, and that's where I can really discover all my pods and deployments and services. And to pull that up, all you do is type kubectl proxy into your command line. And what do you have to do to get kubectl, or however you wanna pronounce it, on your computer in the first place? You do have to install it before that. Okay. If you search for kubectl, you can find instructions on how to install it. It's like any other CLI that you have to install on your computer. Yep. But that'll allow you to access this IP here so that you can... Type kubectl proxy, and that connects.
That gives you the ability then to go into the browser and see the dashboard, right? So at some point, you have to log in, presumably? No. When you did that whole first step, when you actually set up ACS and Kubernetes, you connected kubectl there to make sure those two are connected. So you've already done the login and that connection. So here, because I've already logged in, it knows what dashboard I wanna access. Okay. So if I look here at my deployments, you can see that I have a SQL data for WCF and I have an eShop modernized WCF, and those are the WCF service and the SQL Server container like we talked about. And those are containers. Those are deployments. So that was in deployments. Remember, I talked about pods, which host the containers. We have one pod for each of them right now because we've only just done one of each. And then we have services. And here's actually where you can see the external endpoints, so this page is really useful. For my WCF service, I can go visit this endpoint and say, hey, this is that catalog service that I was trying to reference before. You can see it's up and running. So that is now the WCF service running in Azure, orchestrated by Kubernetes. Correct. All right, cool. Exactly. So I can take this IP, just like we did before, and head back to Visual Studio. And remember, our app.config file is where that endpoint address was for specifying where the service is. So you can plug in this new address and restart this WinForms application. And when this starts, this is actually gonna be the WinForms application talking to the WCF service and the SQL Server that are running in the cloud. So we've gone from running local containers on your machine to actually running it in the cloud. Cool. Now, I think we might have talked about this last time, but it's always worth revisiting. I've done similar demos of modernizing.
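For reference, the app.config change described above might look roughly like this. The endpoint name, contract, binding, and the placeholder IP are all hypothetical; the real address is whatever external endpoint the Kubernetes service page shows.

```xml
<!-- Hypothetical fragment of the WinForms client's app.config:
     swap the old local address for the load balancer's external IP. -->
<system.serviceModel>
  <client>
    <endpoint name="CatalogService"
              address="http://52.0.0.0/CatalogService.svc"
              binding="basicHttpBinding"
              contract="CatalogService.ICatalogService" />
  </client>
</system.serviceModel>
```

Because only configuration changes, the client binary itself is untouched, which is the same "we didn't touch the app" point made earlier.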
You know, take a WinForms or WPF app talking to WCF, talking to SQL Server, and I've taken the SQL Server database and migrated it to SQL Azure. Yes. Because that's SQL Server running in Azure, which is awesome. And then I took the WCF service and published it to a web app, or website, running in Azure. And then the client, still sitting on the desktop, is talking to the service, talking to the server, to the data, up in the cloud. Which is exactly the same thing we're doing here, although we've done it through containers and the orchestrators. So when do I do which? Because the first route, the easier route, is easier. The easier route is easier. I think, I mean, the key is dependencies. A lot of the time you have dependencies that you need to bake into running your application, and the container is gonna host all of that for you, which makes it really easy. So you don't have to think about, like, oh, is it updated? Do I have all the right things installed here? Things worked in development, but is it gonna work in production? All those worries about whether it's gonna work, or whether you have all the right things installed, are taken care of with containers, because a container is a full environment in itself. The other cool thing is scaling, right? It's super easy to scale now, and I can scale out these containers with just a couple of clicks. So if I go back to the Kubernetes dashboard here and go back to my deployments, and say I wanna scale out this WCF service: right now I only have that one pod running, but all I have to do is click, say scale, and say, okay, now I want five pods. And you say okay, and now you can see one out of five pods has been created. So if I go to my pods here, it's working on creating those other four pods. And that was super simple, right? All I had to do was go to my Kubernetes dashboard and say scale it out. Easy. So I think that's a big benefit of using an orchestrator as well. Right, okay.
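The scale-out the dashboard does with a couple of clicks is just a change to the deployment's replica count, so the same thing could be expressed declaratively in the YAML file (the deployment name here is an assumption for illustration):

```yaml
# Hypothetical sketch: bumping replicas has the same effect as the
# dashboard's Scale button on the eshop-modernized-wcf deployment.
spec:
  replicas: 5   # Kubernetes works on creating the four additional pods
```

On the command line, `kubectl scale deployment <name> --replicas=5` achieves the same result without editing the file.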
So I think that's a good answer to the question. If you're just gonna stick it in a container and just run it, you know, you can do it either way. But if you're just gonna stick it in a container and just run it, you're really not taking advantage of all the stuff that containers offer, right? Whereas with the orchestrators, you have the flexibility to manage them and have failover, and if you've got five pods running and one goes down, there are four others running, so you don't miss a beat. Or you can easily scale up. So if you're doing, like, ticket sales, right? Tickets go on sale at 10 a.m. So you know that between 9:55 and 10:30 there's a gigantic spike in traffic, and that's when you sell the bulk of your tickets these days, and you don't want it to go down. It has to stay up. You want to make sure, yeah. Exactly. That's a great situation. And then at 10:30, when the concert's sold out, right, you can just scale it back to, you know, maybe three, because there are stragglers, and by 11 o'clock scale it back to one. Yeah, really easily. Okay. Yeah. And it was just a couple of clicks, and it does it automatically for you, which is really nice. Cool. And then it's also really important if you have microservices, like we were talking about before, because then you have so many different things to manage, and Kubernetes will take care of all the load balancing for you so that you don't have to. So it's running in Azure, so obviously we're charging for Azure. Is Kubernetes charging for any of this? No, it's free. Open source. Okay. Kubernetes is actually pretty awesome like that. It originally came from Google. Okay. And it's an open source project. Okay. Yes. Cool. All right. So Azure is the only thing that's charging you here. Right. Running there. Excellent. And that's actually about it. All right.
We've taken you from your existing .NET application running on-prem locally, to adding containers and running it locally, and now we actually have it running in the cloud. Cool. And then the repo you showed, eShopOnContainers, has a ton of walkthroughs on how to do a lot of this stuff; we will point people to those. Please do. Yeah, exactly. So the .NET architecture repo on GitHub has this eShopOnContainers, and it really has so many samples and walkthroughs on how to do this step by step. And if you want more detail, or if you prefer eBooks, you can actually visit our architecture page on the .NET site, and it will give you eBooks that you can download. It'll refer you to those same samples, and you can explore microservices or modernizing .NET apps, which is really what we talked about in this demo. Cool. Or other things. All right. Excellent. Awesome. So like I said earlier, we've done a fair amount of container stuff on the show. We'll probably put it aside for a while and go do some other stuff. But hopefully these last couple of episodes, if you weren't that familiar with containers before, have given you some good ideas on how to get started. We've really, again, focused on existing applications. SmartHotel360 is cool, but it's kind of a new, modern thing; you've got .NET Core services and Java services. And again, you've got existing Web Forms and WinForms and WPF apps and WCF services. Containers are also for you. Exactly. Thanks so much. Thank you. All right, we will see you next time on Visual Studio Toolbox.