Welcome to Visual Studio Toolbox. I'm your host, Robert Green. Just kidding, he's on vacation. I'm your host today, Brady Gaster. With me I have my friend Mikkel from the Service Fabric team. I used to work with Mikkel when we were both in Visual Studio. He's one of those awesome people who really liked building the tools for the service, but he liked the service so much he actually went over to the service. So if you want to learn anything about Service Fabric, Mikkel's probably a great guy to reach out to, because he knows the tools and the service, and we'll put your Twitter handle in the show notes down below so folks can get in touch with you. Sure. Real quick intro to what we're going to talk about today. We've done quite a few shows on this. Back in November, we had this awesome conference in New York City called Connect. At Connect, Corey Sanders did a great general session on application modernization with Azure. Mikkel was one of the guys who helped him put the presentation and the demo together, and we collaborated a little bit on it. What we wanted to do was add this particular set of code to the demo set that we've got for the SmartHotel360 site. While we're talking about SmartHotel360, I want to go ahead and pull up the campaign page. It's out on azure.com. We've finally gotten all the videos that you've seen us record out here, all the way down here at the bottom, and we've got links to the different repositories, one of which is the repository that Mikkel and I created, and here's a diagram of everything that we've got. We're actually going to be taking this show and putting it down below too. Maybe by the time this show comes out, we'll already have it out there. But if not, you'll see it show up there eventually. What's the URL to get to this place? The URL to get to this, we've put it right here. 
I don't have it shortened yet, but you can see it's azure.microsoft.com, forward slash campaigns, forward slash SmartHotel360. So that's how you can get to this. While we're talking about this, I have to do a shameless plug. If you're going to be at the DEVintersection conference down in Orlando at the end of March, one thing I want to say is that we're actually going to be doing a workshop where we talk about how you can bring your existing .NET investments to the cloud. We're going to talk about things like AKS, containers, Service Fabric, everything. So if you're going to... App Service, all good stuff. All good stuff. So if you're interested, we've done this once or twice. I can't remember how many times we've actually done this workshop. Maybe once together, but I've done it at least twice. He's done it quite a bit. He's done the session that we're going to talk about today a couple of times since then as well. So if you're going to be at DEVintersection, we'd love to see you at the workshop. If you're not going to be at the workshop, just stop by and introduce yourselves. Real quickly, we're going to do a hardware change here. So I'm going to go ahead and disconnect and reconnect with Mikkel real quick... and we're back. So real quick, we're going to talk about the things we've already talked about. That's your typical conclusion at the end of your presentation. In the existing SmartHotel stuff, we've gone through App Service, and we've talked about various ways that you can use Azure Container Registry and Azure Kubernetes Service. We've talked about all those different things. But maybe you want to take it to the next step and you're doing something where you need high availability or some other capabilities, or you just want to bring an existing .NET Framework application and you don't want to convert it to .NET Core yet. 
If you want to bring that up, that's when you might want to take a look at Service Fabric. Mikkel knows a lot more about this than I do; I'm just trying to sound smart at this point. But really what this does is it sort of completes the SmartHotel360 puzzle, because there were some apps that the people in the hotel wanted to bring into the cloud and not tweak. So that's where you picked up and ran with things. Exactly. So this is a really real scenario we see today with a lot of customers. They have existing .NET Framework applications, typically applications that center around IIS. So anything from Web Forms to WCF services, those kinds of applications. It's a huge investment, and those are good line-of-business applications running today. They've done a lot of good for these customers. Kind of, if it ain't broke, don't fix it. Exactly. But they might want to take these applications with them on a cloud journey. So this is where this talk about modernizing the applications comes in. But it's not necessarily about just modernizing the application code. There are other things you can start modernizing without going into the code itself. So it's a way that we can actually modernize the application without actually changing the code that we've already invested in writing. That would be an optimal situation. The closer you can get to that, the less intrusive that migration is to the actual code you have, the better, of course. Makes sense, because then you just take all your existing investments and move them over. Exactly. So there's a progression, and that's also how we laid it out in the talk that Corey did. There's a progression where, based on the investment you put in, you get more benefits. I mean, if you want to get the full benefits of scalability and high availability and reliability in the cloud, being able to do microservices on top of your existing applications and stuff like that, you need to do some investment. 
But there are small investments you can start out with. Right. Like you can take your existing applications today, those .NET Framework applications. So say you have a Web Forms app or a WCF app. Exactly. Or you have those two working together in sort of a three-tier application, which is actually the sample we have here. Right. There's an immediate step of just moving those into VMs in the cloud. Okay. Right. So you start moving from an on-prem data center into a cloud-based data center. Right. So that's for the folks that want to touch IIS, they want to low-level configure everything. Maybe you have some COM components that you're still calling into, or some WCF services, and you want to have that tight control. It's not a physical box that you can go touch, but you can actually RDP into your VM in the cloud and do all the things that you're already familiar with. Exactly. You get all the infrastructure knobs for you to sort of stitch things together the way that you want to. Right. Either based on requirements, or on compatibility constraints you have to meet. But also it's a small investment, a first step to modernizing the way you run your operations around your servers. Right. So there's lots of good stuff in Azure that helps you around that scenario. Then of course you can move on from there. Going into VMs in Azure doesn't require a lot of code changes; you just basically redeploy. But then you can start doing containers instead. Where containers get interesting is that you start to be able to take advantage of orchestration, container orchestration. Okay. So for those of us who have basically taken two Docker images or two Docker containers and put them into Kubernetes together, and they talk to each other just because it works. Yeah. Could you explain what you mean when you say orchestration? Like, what's that for a layman? 
So the basics of orchestration is that the way your applications run, and the way that you get reliability and scalability with your applications, is that there's an underlying set of VMs. Well, we actually end up running on physical hardware. Right. But we sort of always skip that point; now we just end up on VMs. But those are machines in rooms as big as this one, somewhere safe. Yeah. Exactly. And those are up to us to handle, Microsoft being the cloud provider, but there is a set of VMs in there somewhere. So with all these orchestration technologies, Kubernetes and Service Fabric and others out there, you basically bring a pool of machines as the compute power behind whatever you want to run. Okay. And because you have these workloads well-defined in a standard container format (the OCI container definition, I think it's called, where Docker is one of the implementations of those), you are able to easily move these workloads around across that compute platform you bring together with this set of VMs. Okay. So really what the orchestrator does is give you availability and resiliency. In the case that any given node goes down, or has to be taken down because of maintenance and stuff like that, it's very easy for the orchestrator to move that workload, the service or the container, to a different node. Okay. And it just does that for you. Makes sense. Okay. So compare that to the IIS scenario where you had maybe two web servers that you have to run. Right. And you probably run both hot; you just load-balance your load across them, so if one goes down, the other has to serve everything. Yeah. In this other scenario, you can actually just run one, and when something happens, the orchestrator will make sure that another instance of that application comes up somewhere else where it's actually able to run. Okay. 
And the benefit is that those orchestration engines that exist, Kubernetes or Service Fabric or what have you, exist so that you don't have to do that yourself. You don't have to worry about how all that stuff works. You're just left to make your app, put your app in the orchestrator, and let it do its thing. Exactly. So that's a good thing. Yeah. Cool. Awesome. Well, we've talked a little bit about this stuff. Let's see how you could scale some stuff out using what we've already built up. Sure. Yeah. So let me start out by showing how the Visual Studio tools make this really, really easy for you. Yes. Of course. So what I've done is, we have an application sample that is actually part of a GitHub repo. So let's just start with the GitHub repo. Cool. So you can see here on GitHub, in the Microsoft organization, we have the SmartHotel360 dash internal dash booking apps repository. You can reach it from the main repo for the whole SmartHotel360 solution. So basically, the story that we're trying to tell with this repository, the context we're putting this technology into, is: you have this, let's call it a front desk booking system in your hotel. It's basically where you go and check people in, check them out, and stuff like that. It's an application that's probably running on a few servers behind someone's desk, or maybe even in a real data center on-prem. Yeah. But you want to move that into the cloud, and you want to start getting some of the benefits that we just discussed. Got it. So in the repo we have this readme doc that describes the application and how it works, and it's a very typical scenario that I think a lot of you would recognize. There's a Web Forms frontend to basically render the HTML for the application, the booking application. 
There's a WCF service, which will feed the frontend with all the data that it needs and do other stuff, and then it's backed by a SQL database. So a very typical three-tier scenario. Right. A lot of people will recognize it. So the n tier, for those of us that believed in more than three? Sure. You can add more if you want to. That's true. Yeah. And that's when we get into microservices and n-tier. Yeah. Tiers and microservices, but that's a different discussion. That's a different episode. Yeah. Anyways. Sorry. People probably don't consider those tiers. I don't know. Anyways. At that point, you've got one tier that is your microservice layer. Yeah. I would sort of think of it like that. Even though the services themselves can have tiers, but that's a different conversation. Let's not go there now. It's more theoretical. Okay. So the diagram I just showed you, how does that actually look in Visual Studio? So this is my solution. This is a proper solution that was made a few years back, and it runs well today. I can go and debug it in IIS Express locally, as I would today; I have LocalDB and all that good stuff. Now, Visual Studio has great tools built in to help you start getting these applications into containers, with the goal of running in orchestrators, either in the cloud or somewhere else. The very simple thing we can do for a Service Fabric scenario is: I have my web frontend, and I can simply right-click on the project in my solution and choose to add Docker support for Service Fabric. What the tooling will do for me is add the required files: the definition files I need to describe how this is going to run in the orchestrator, and the files that are needed to describe how this is going to be packed as a container. So these simple tooling tricks just get you started very, very quickly with those things. Now, is that a tool that's built in? Because I was actually just in the VS installer this morning. 
Yeah. Is that in the VS installer? Is it the Azure workload, or is this a separate extension that I have to pull down? It's part of the Azure workload. Cool. Okay. So if you just click on Azure, you should be okay. You should be okay. The exact version I'm showing you here is a preview of a component in there. Okay. So you need to find the Service Fabric tooling preview. It's out on the blog how to get this. Okay. Which gives you this specific feature that I'm showing you right now. Cool. Awesome. So what actually happened to my project here is I now have this Dockerfile for my specific application. So this is my Web Forms application here. This Dockerfile basically describes how to wrap this Web Forms app inside a container. So there's a base image, a Microsoft ASP.NET base image that I'm going to use. That's a Docker image that Microsoft provides. Cool. And these Docker images typically layer on top of each other. Right. Right. And you can go back and find it on Docker Hub; there's good documentation. You can actually go and see the repository, and you can see the Dockerfile that builds that image. You can sort of trace back all the stuff that ends up in your image that way. That makes sense. And really what that is, I mean, I talked to Shayne Boyer about this at one point and I said, well, how do I know what's going to be on that container? Well, it doesn't matter. That's the baseline for you to get started. I was like, well, what's in there? He goes, think about it like this: when you do dotnet new for a Razor app, that's a template. Somebody else wrote that; it's going to get you started. This is really just a Docker image that gets you started. Yes. So you don't have to bootstrap everything from the ground up. Yeah. Makes sense? Yeah. Okay. It makes sense. And you can see there's actually only one thing we're doing here. 
So the ARG and the working directory we set are basically just parameters used when we build the container, but the only thing we're actually doing is copying our published output into the container. Got it. That's all we have to do to actually publish this web app. So you just take all the DLLs and the files that VS outputs and push those over to the container, and now you've got everything in a box. Yep. Cool. I like it. That's it. That's cool. So Visual Studio gives you that, and then it gives you what we call the service and application manifests as well. Okay. Those are the files that you hand over to Service Fabric, so Service Fabric knows how to run this. Got it. Like it describes. So this container, once you go ahead and build it, you have to do the container dance of putting it into a registry or repository. Right. No, it's a container registry. Sorry. Container registry. Container registry. And in the registry, there are repositories. Yes. This gets interesting. So you would use something like Azure Container Registry, which gives you a great feature set for having private registries, securing your containers, even integrating with security compliance tooling. So whenever you build containers and have repositories in there, you can be warned about vulnerabilities in your containers and stuff like that. There are a lot of great features around there to help you manage the security compliance around your containers when you start building them. And then you could use Team Services to pull the images out of the registry and put them into some sort of orchestrator. Yeah. So if you have a typical setup with some continuous integration or build pipeline and deployment pipeline, typically you take your code, you build it, and you dump your artifacts somewhere, which you then go and publish. 
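The Dockerfile Mikkel walks through here follows the pattern the Visual Studio tooling generates for a Web Forms app: a Microsoft-provided ASP.NET base image, an ARG and working directory, and a single copy of the published output. A rough sketch (the exact image tag and source path are illustrative, not necessarily what the tooling emits for your project):

```dockerfile
# Base image: Microsoft's ASP.NET image, which layers IIS and the
# .NET Framework on top of Windows Server Core.
FROM microsoft/aspnet:4.7.1-windowsservercore-1709

# Build-time parameter: where the published output lives on the host.
ARG source

# IIS serves content out of this directory inside the container.
WORKDIR /inetpub/wwwroot

# The one real step: copy the published DLLs and content into the web root.
COPY ${source:-obj/Docker/publish} .
```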
It's basically the same here; we dump the container as an artifact into the container registry. Got it. And that's where we go and pull it from when we have to deploy it. Makes sense. So you have to have somewhere to hold the images before you deploy the images. Yeah. I think there's an old ITIL term called a DSL, a definitive software library. That's basically where you keep your stuff. That's cool. That's the same thing. That makes sense. That's good. Different words. Whatever. So Visual Studio just gives you all this, and you're pretty well on the way here. Once I've done it, I need to go and do this with my WCF application as well. But once I've actually done this, I'm able to go ahead and debug things locally, see them running in containers, and all those kinds of things. So really what you're doing is containerizing your web app, and then containerizing your WCF app. Your WCF app probably has a different base image. Then you take both of those containers and put them into, I'm going to use abbreviations, ACR, the container registry. And then you would use some sort of CI/CD to take them out of there and deploy them into a fabric. Yes. Cool. Correct. Makes sense. So let me just move along to a different solution where we've actually set this up. Now I have containerized both of these applications. And the thing you typically do once you move things into containers is you really want to make your containers immutable. The idea is that once I have a container, it doesn't change. Right. It's the build output that I created. But typically when we go and run a web application, there might be configuration and stuff like that we need to parameterize. There's typically a connection string to a SQL database, blah, blah, blah. Tons of stuff. Yeah, there are different ways that we can do this. You know, there are the web.config transforms that we're used to. 
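The build-and-push step described above, getting the container into Azure Container Registry as a deployable artifact, can be sketched with the Docker and Azure CLIs. The registry and image names here are made up for illustration:

```shell
# Log in to the (hypothetical) registry so docker push is authorized.
az acr login --name myhotelregistry

# Build the image from the Dockerfile in the current project directory
# and tag it with the registry's login server and a version.
docker build -t myhotelregistry.azurecr.io/smarthotel/registration-web:1.0 .

# Push it into the registry; CI/CD later pulls this exact immutable tag.
docker push myhotelregistry.azurecr.io/smarthotel/registration-web:1.0
```

In a VSTS pipeline the same build and push happens via the built-in Docker tasks rather than hand-typed commands.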
What's a little bit problematic with web.config transforms in this scenario is that they actually generate the configuration at build time, right? And you end up sort of burning that into your container by following this specific process here. But there are ways you can build your container such that you actually apply that configuration, run your transform, when you run the container instead of doing it at build time. In a sense, you're parameterizing the stuff that you're getting ready to put into the container. Yes. So when the container starts up, it doesn't just start up your website. It actually does the web.config transform based on the parameters it gets as input, and then it starts up the application. So that helps you avoid hardcoding connection strings and putting them in the container. Yeah, and that's a way you can keep using config transformation, like web.config transforms. If you're very heavily dependent on that, that's definitely a route you can explore; it just gets a little bit more complicated in terms of how you build your container. But the way that we usually do configuration in containers, the way the industry has sort of gathered around doing configuration, is through environment variables. So whenever you go and run a container, it picks up a set of environment variables, which is basically the configuration it runs with. It's either that, or through files that we can inject into the container with something we call volume mounting. Volume mounting, yeah. Yeah, so when a container starts up, it expects a certain drive or path to be there, and that drive we can actually map to something external that's not inside the container. So those are basically the means of getting configuration in. And my understanding of the idea of volume mounting with Docker containers is that each container has everything it needs to run, but let's say it needs to store stuff that's bigger than you want that container, that image, to be. 
You would say, well, I'm going to mount that out to a directory on the C drive or the D drive or some other drive. And that way the data is not being stored in the individual Docker containers; it's being stored on disk, so that if you were to destroy that container, you don't destroy the data that goes along with it. Yeah, that's the role it plays, okay. Cool, yeah. But then when you're in this orchestration world, you need to figure out how you get that data across the different nodes that you have. Okay, okay. So then you get into other, I mean, there's always stuff. Yeah, yeah, yeah, yeah. So, the way that we can do this in Service Fabric, and I'm going to show you some of the configuration stuff: this is basically the definition file you hand over to the orchestrator. In a Kubernetes world, there's stuff like the Kubernetes YAML files; there are also Helm charts you can use if you use Helm and Tiller. Docker has a concept called Docker Compose, which was created for Swarm, Docker's own orchestration technology. In the Service Fabric world, there's an application manifest; it's an XML file. So you can see I can start parameterizing a lot of stuff in here, like the ports. My WCF service has an endpoint, and I can parameterize the port if I want to. I can do different parameterizations; there's a WCF service URI that my frontend needs to have. Now I can put that in through configuration and everything here. So none of these things are burned into the container image now; I hand those in as configuration. Got it. So I see here that in these resource overrides you've got endpoints, and then you've got endpoint names, the SmartHotel registration WCF type endpoint, and then you've got the port. And this bracket, I presume, means it's going to look at this parameter in the file. 
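The parameterization pattern being pointed at on screen, an application parameter declared with a default and then referenced with square brackets in a resource override, looks roughly like this in a Service Fabric ApplicationManifest.xml. Names, versions, and ports here are illustrative, not the sample's exact values:

```xml
<ApplicationManifest xmlns="http://schemas.microsoft.com/2011/01/fabric"
                     ApplicationTypeName="SmartHotelRegistrationType"
                     ApplicationTypeVersion="1.0.0">
  <Parameters>
    <!-- Defaults; a parameter file per environment can override these -->
    <Parameter Name="WcfEndpointPort" DefaultValue="8080" />
    <Parameter Name="WcfServiceUri"
               DefaultValue="http://localhost:8080/Registration.svc" />
  </Parameters>
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="RegistrationWcfPkg"
                        ServiceManifestVersion="1.0.0" />
    <ResourceOverrides>
      <Endpoints>
        <!-- [WcfEndpointPort] is resolved from the parameters above,
             so the port is never burned into the container image -->
        <Endpoint Name="SmartHotelRegistrationWcfTypeEndpoint"
                  Port="[WcfEndpointPort]" />
      </Endpoints>
    </ResourceOverrides>
  </ServiceManifestImport>
</ApplicationManifest>
```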
Correct. Got it. And you can see I have a default setting there. And what I'm also able to do then is have a set of parameter files. So depending on the environment I go into, I can start overriding these things. You can actually see my connection string and all of those things. I have specific parameter files across, say, staging and production, like that. I just feed another parameter file into this definition, and the container ends up getting that as the configuration it actually runs with. Does this help, I mean, this is obviously for various environments, but let's say you and I are working on a project together. Do different devs have different files? I mean, is that kind of the idea? They could. They could. It's not that often we see this. So Cloud is the one that sort of... That's the one. We just call it Cloud because that's what the tooling gives you as a default. Yeah, when you run against something remote. Then we have the Local files. That's because you can run Service Fabric locally on your machine. Got it. And we can emulate multiple nodes and stuff like that. So there might be some specific configuration you have to do when you run things locally on multiple nodes. Like, you can't have five instances of a service using the same port on your local machine; that's not going to work. So, you know, you need to handle some of that stuff. Got it. That makes sense. One other thing I want to point out in the manifest is that you start seeing some of the stuff that you can do with these orchestrators. We have this thing we call a resource governance policy, for instance. Now, when you start running these multiple containers inside an orchestrator, you don't want to have them run freely in terms of the memory and CPU that's actually available. You could do that, but it could end up a bit messy at some point. 
So what I'm doing here is actually putting some governance policies around my containers. So I say these containers get a gigabyte of memory and one CPU core. Cool. That's what they get when they run inside. And I can, of course, adjust that, right? In this case, I think I'm doing the same for both of my containers, but I could say the frontend needs more and the backend needs less, or whatever the WCF service needs. So, let's say I wanted to run SQL Server inside a container, because that's one of our new awesome features. I think when I've seen this in the past, it says that I have to have four gigabytes of memory or four cores, I can't remember which. Effectively, would I go in and change this based on the requirements of that particular container and what it needs? Yeah. I mean, if you don't specify anything, the container gets to run freely, use whatever it wants to use, right? Okay. What I've seen is that these applications that were typically built to be hosted on an IIS server sort of assume they have what they have, and there was a lot of, let's just throw more iron at it if we had an issue, right? Yeah. But maybe we want to control it a little bit more. You can do those kinds of things. So let's see how this thing actually ends up running. So here, this is just the application. I just want to show you that it actually is running; this is where I go and check people in, and if things are still up and running, that's good. You can see. So very basic, there really isn't anything interesting about the application itself. But what I'll show you is the management view once this is in the orchestrator. So this is Service Fabric Explorer we have now. And you can see this cluster that I'm connected to. It has this concept of nodes, and applications, and services that I have deployed. So basically, I have a cluster consisting of five nodes. Meaning five VMs, right? 
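The one-core, one-gigabyte cap described above is expressed in the application manifest as a resource governance policy attached to the service's code package. A hedged sketch (package and ref names are made up for illustration):

```xml
<!-- Inside ApplicationManifest.xml: cap this service's container at
     one CPU core and 1024 MB of memory, as in the demo. -->
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="RegistrationWebPkg"
                      ServiceManifestVersion="1.0.0" />
  <Policies>
    <ResourceGovernancePolicy CodePackageRef="Code"
                              CpuCores="1"
                              MemoryInMB="1024" />
  </Policies>
</ServiceManifestImport>
```

With no policy specified, the container runs ungoverned and can use whatever the node has; adding one both caps the container and gives the orchestrator the reservation data it uses to balance placement across nodes.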
I have this one application deployed, and that application consists of my Web Forms and my WCF applications. So you can actually see stuff like the allocation I've done in terms of CPU and memory across my cluster. You can go in and see: I have these two containers running now, right? And you can see on one of the nodes there's one core reserved, and on another node there's another core reserved. So if I were to deploy more containers in here, you can see how they sort of lay out, and the capacity I have left. So not only is the orchestrator using that to reserve or assign resources, but it's also using it to make sure that the load is balanced across your cluster. That's cool. What's this, did you click on cluster map? Yeah, I'm kind of curious. And I can do the same with memory here. Yeah, cluster map. Then we get into the whole orchestration thing. This Service Fabric cluster has five nodes, because what these orchestrators are really about is resiliency. You really want to make sure that we run your work no matter what happens. So the concept Service Fabric works with in that sense is what we call fault and upgrade domains. So basically, consider the five VMs we have. We distribute each of them across the fault domains and the upgrade domains. Now, fault domains are really tied into the underlying infrastructure in Azure. So imagine that Azure is this data center with tons of racks, and there are power supplies coming in and network connections coming in and stuff like that. It's actually physically laid out in a way that if a failure happens over here, like a physical failure of a broken thing, it doesn't affect other areas of the data center. So if somebody trips over the power cord on the east side of the data center, you've got your stuff over on the west side, you've still got stuff online. Exactly. That's awesome. 
So because we know how Azure works, when you run these clusters in Azure, we make sure your VMs don't all fall into the same fault domain. We actually make sure that we distribute your VMs across those failures that could happen. Oh, how very polite of you. Yes, you're welcome. And we have the same concept with upgrade domains. We use upgrade domains because whenever we make a change to the cluster, if you throw a new version in, or you have to change the configuration of the cluster and stuff like that, we do what we call rolling upgrades, meaning that we only upgrade one domain at a time. So in this scenario, let's say this VM number zero we have up here: if I were to change the cluster configuration somehow, if I push out a change to upgrade the cluster itself, what will happen is we go ahead and say, okay, let's do upgrade domain number zero. There's this one server in there. We find all the services, the containers, running on that server, and we tell them, hey, you need to find another place to live, because I'm going to shut down this VM. So the containers start moving over to other servers. Once they're gone, we upgrade that VM. And once we're done, there's a wait period, something like a cool-down: for, let's say, the next 15 minutes, is everything still okay? Are the services running? If we bring containers back, will they come up after we've done the upgrade? And when everything's good, we move on to the next upgrade domain. Wow, that's great. So really, from its very core, it's built to just make sure things keep running. We should emulate that at the workshop. Oh, we can totally show you that. That'd be fun. We can even show you some of that now, right? Yeah, let me actually show you that here and now. Because my two containers ended up in what we call an application over here, so you can see the registration service and the registration WCF service. 
Now, what I told my cluster is that I only want one of these containers running at a time, because I don't need more; I don't have the load for more, anything like that. But I can easily have it run multiple. Now, I'm going to do this as a manual operation, but I could have done it in an automated way, where we monitor requests or resource consumption and stuff like that. So what you can actually see now is it's running on node number zero, but now I've got two more containers spinning up. Awesome. One just lit up, and the other one's going to light up here, right? Yeah. So this one is just coming up right now. So you can see there are now three instances, and they have names, and they run across these different nodes. So I actually just scaled out that Web Forms application service that I was running. Think about if you were to do that in your own data center today: I have this IIS server, I need two more. Well, let's go and phone the hardware vendor, let's, you know, blah, blah, blah. Yeah. So these are the things you get; you get this for free. This reminds me of a stat, it's an old stat. I'd love to see what it looks like today; I'd bet the numbers are going down because, you know, cloud adoption. But I think it was 2012 or 2013, there was a survey done of companies in the UK and the US to figure out how much money was spent across what happens in a data center and where the money goes. 72% of the money was spent on maintaining servers. And if you think about it, you just did this, and you didn't call anybody. That's great, that's fantastic. So that was scaling things out. Now we can try another trick: let's bring down one of the nodes. So just for the sake of the demo, let's try to deactivate this one and remove the data. 
In the language of Service Fabric, that basically means: don't expect this node to come back anytime soon, or don't ever expect it to come back. That's what we're going to do. So I'm going to disable this guy, and you can see now that my service that was running here on node number zero is gone, because that node is gone. So it actually signaled, hey, you asked me to run three, so I'm going to run one on node number three instead. That's pretty awesome. So you literally just simulated something going down, and it took care of it and gave you the third one back again. That's pretty cool. And that took nothing, it was effortless. Yeah, that's really awesome. So those are really the benefits of getting into this orchestration world. You see how the whole operations story, upgrades and all these kinds of things, just requires a lot less involvement from a lot of people. I mean, you still need to do your testing and all of that. You need to do things right, but the main point is it requires less involvement from a lot of people, less effort to get to that point. That's really, really great. Cool. I've got more questions now, but we have to end the show at some point. The only other question I can think of off the top of my head is, I know you've got Service Fabric tools. Yeah. What does the deployment experience look like? Now I've got my ASP.NET app, I've got my WCF app, I'm all ready to go. How would I push it up to Service Fabric? So VSTS has a lot of great built-in deployment templates. Because friends don't let friends right-click deploy. Exactly. You know, we don't let you jump the gap and then later have to go and close it. We just take you down a nice path to get up there.
So in Visual Studio Team Services, we actually spent some effort there: there are specific tasks that map into the Service Fabric world and specific tasks that map into the Docker world. So the things you have to do to get from just committing whatever code I showed you over here into the repository: start up the build, create the Docker containers, get them into the container registry. And then from there, take those manifest files, go and find the Service Fabric cluster, even build the Service Fabric cluster, because it's just ARM. Then hand these things over to Service Fabric and tell it to go and deploy those containers. There are specific tasks for all of that in VSTS, and there are even templates that you can just work with. That's cool. Some of the stuff that you showed here in the portal, just a quick question: scaling that guy up to three, is that the kind of thing you can actually do from an Azure CLI script? Yeah, we have PowerShell, we have the CLI to do all of these things, REST APIs, so all of that is enabled. REST APIs meaning I could write my own thing if I was feeling brave. Yeah. So once you move beyond this, because what we really just did is we took old code, or legacy, or heritage code we can even call it, you know. But we moved it in. Existing code. Existing code. We moved it into this wonderful modern world where we get all these benefits. But what you probably want to do now, it enables you to easily add new functionality, not by expanding the existing code base, but by getting into this microservice pattern and having things a little bit more decoupled. So I actually have a scenario where we've done this with this application as well. Oh cool, let's see that. So let's just start back in the GitHub repo to show you the idea. The idea is, we ended up building this fairly complex setup here, right?
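A quick footnote on the automation point above before the repo walkthrough: the scale-out that was done by hand in the portal can be driven by PowerShell, the Azure CLI, or the cluster's REST endpoint. As a rough sketch of the REST flavor, the code below only builds the request without sending it; the endpoint path, api-version, service-ID separator, and Flags value are assumptions from memory, so check the Service Fabric REST API reference before relying on them.

```python
import json

# Sketch: what "scale to three" could look like against the Service
# Fabric HTTP management endpoint (port 19080). The path, api-version,
# "~" separator in the service ID, and the Flags bit are assumptions,
# not verified against the docs. Nothing is sent; we only build the
# request.

def build_scale_request(cluster, service_id, instance_count):
    url = (f"http://{cluster}:19080/Services/{service_id}"
           f"/$/Update?api-version=6.0")
    body = {
        "ServiceKind": "Stateless",
        "Flags": "1",              # assumed: flag bit meaning "InstanceCount changed"
        "InstanceCount": instance_count,
    }
    return url, json.dumps(body)

# Hypothetical cluster and service names, for illustration only.
url, body = build_scale_request(
    "mycluster.westus.cloudapp.azure.com",
    "SmartHotelApp~SmartHotel.Registration",
    3,
)
print(url)
print(body)
```

Any HTTP client, or a VSTS task, could issue the same call, which is what "I could write my own thing if I was feeling brave" amounts to.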
So we had the existing application: Web Forms, WCF, and a SQL database. Now imagine that some external marketing bureau was called in: give us a nice web page, do some Twitter sentiment analysis. And these guys are all-in on cloud. They pull together Functions and Cosmos DB and Cognitive Services, maybe even using Logic Apps. And they made this great web page that shows this Twitter sentiment stuff. But we figured out that we want to use that in our application as well, right? Because in this back-end system, we actually want to monitor for negative sentiment, get a nice overview of those kinds of things, and see if there's stuff we can do with that. And by the way, the sentiment analysis is another demo that we already have out there as well that you could party on. And is that actually what you guys hooked into? Like, well, they're building that over there, let's just hook into it, kind of idea. It was so easy we just made it ourselves. Yeah. You can almost do it just by using Logic Apps. You can totally do it. That's how we did it. Yeah, correct. So it's not rocket science at this point in time. So the way we hook these things together: we need a way to get that into our existing application up here. What we did is we created a new microservice, just a small integration service, we call it. We did a .NET Core web API, which goes and reads stuff out of the Cosmos DB, which is where these analyzed tweets are. And then we hooked that into the web front end. Because I'm in a Service Fabric world, and Service Fabric really makes it easy with this concept of building microservices, this is actually a progression of the same solution that I had before. But what I did now is add what we call a reliable service, a native Service Fabric microservice.
And this is just a .NET Core web API. There's a little bit of bootstrapping code that you have to do, because this is another way of running services in Service Fabric. When we run a container, from Service Fabric's point of view, from the orchestrator's point of view, we're blind to what's going on inside. We know there's a container, and we use the Docker host to help us run it, but we really only control the lifecycle and so on. But there's another concept in Service Fabric called reliable services, where you don't run in a container, you actually run in a process, and the runtime is able to hook into your actual code and work with it. And from the other side, you can work with the runtime from within your code. So whereas with the first set of code, you're putting it in a container and putting the container in Service Fabric, you're not necessarily reaping all the benefits of Service Fabric itself. If you go this way, you can actually make use of some of that stuff. In the logic of the application you built, you can hook into the fact that there's an orchestrator running and use that to control lifecycle and things like that. That's cool. A very specific scenario there. So first of all, it requires you to derive from our base class, in this case something we call a stateless service. What we do here is take the web host builder, which is what you need in an ASP.NET Core setup, and basically wrap it in this method that we have that you override, which is the method that gives you an endpoint. That's how we made that integration. Main point is, all of this code is generated for you by our templates. And you can do stuff around whether you want to use Kestrel, HTTPS, and all of that.
But again, it's still just ASP.NET Core. Usually you just want to head out to your controllers and start doing whatever you want to do, and these controllers are just ASP.NET Core controllers. There's nothing fancy about this. But here's what you could do: we had that scenario before where a server needs to be taken down and asks services to move off. Yep. In this world, you can just override a method to get those events, so you can actually react to them from within your code. You told me to shut off, so I'm going to go do something now. Yeah. You told me to shut off, so let me drain my clients, let me send them somewhere else, and once I'm done, I'll let you know and then you can move on. That's pretty cool. So you get those hooks into the orchestration, and your applications can start being built around those kinds of things. Wow, that's cool. So all this is basically just a simple API we built, and we go and hook it up with DocumentDB. This web API we deploy as a service, and then we did a simple integration from the web front end: we just put a web form up which basically goes and calls the API. Got it. Right. So in the code we added here, we use a proxy that's in the cluster, because as things move around, we don't have to deal with finding them. Right. We ask the proxy to go and find it for us. Got it. So basically we tell the proxy, go and find my web API and send this HTTP request to it. And that reminds me of a conversation I had with Steve Lasker at one point. We had an app service reaching out and talking to a bunch of containerized microservices. Yeah. And he said, well, you're using the full DNS name, why aren't you just putting the web app into the cluster? And then it can just say, I want web server or SQL server or, you know, whatever. Yeah, exactly.
And I said, because that's going to stay an app service. He said, oh, that makes sense. So you need the fully qualified DNS name. But once you're all in a cluster, you just need to know, I want to go to web 2, or I want to go to service A, or whatever. Exactly. There's this URI that specifically takes me to my service: I just create an HTTP request from this service to the other one, but I let the proxy figure out where it actually is, because it can have any port and it can live on any node. Got it. I don't know and I don't care. Right. Because it's in the cluster. Exactly. It's awesome. So if we go back, this is another cluster I have; I actually deployed this one, and you can basically see the same representation of what was in the solution, and I have my three services. Right. I could potentially just have upgraded the other application; I haven't done that, this is a different deployment with that extra service added in. And now I have all these working together, and that service I added has all the same benefits of being able to scale and fail over and all those kinds of things. But what I love about this is that you didn't actually go into your original code that you'd already brought up into the cloud, because effectively that code was finished. Yeah. You just want to add another feature to it, but when you add that feature, you decide you want to use Cognitive Services or Functions or the Bing Maps API or whatever the heck it is, and you can just bring in a different Azure resource and make a call to it, rather than gut your code and change everything around. Exactly. That's great. Yeah. So that first investment is easy, and then you can extend your application as you need, using all the more modern stuff that you didn't have when you first built your app however many years back. Yeah.
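The proxy addressing just described can be made concrete with a small sketch. Service Fabric's reverse proxy listens on a fixed port on every node (19081 by default) and resolves a stable application/service path to whatever node and port the service currently lives on, so the caller never hard-codes a location. The application and service names below are made up for illustration.

```python
# Sketch of reverse-proxy addressing: callers use a stable
# app/service path on the proxy's well-known port, and the proxy
# resolves which node and port the service actually lives on.
# 19081 is the default reverse-proxy port; it's configurable
# per cluster. The names below are hypothetical.

REVERSE_PROXY_PORT = 19081

def proxy_url(application, service, path):
    """Build a reverse-proxy URL; node/port resolution is the proxy's job."""
    return (f"http://localhost:{REVERSE_PROXY_PORT}"
            f"/{application}/{service}/{path.lstrip('/')}")

url = proxy_url("SmartHotel", "SentimentIntegration", "/api/sentiment")
print(url)   # -> http://localhost:19081/SmartHotel/SentimentIntegration/api/sentiment
```

That is the "I don't know and I don't care" property in code: the URL stays the same no matter which node the service moves to.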
And it really depends on the investment the app can carry, in terms of: do you want to start breaking this one up? Because you could start breaking it up, taking functionality out of the old one and wrapping it in new microservices. Or you could go this way: if I need a new service, I'd rather go down that path, because it gives me more flexibility. That's cool. And the flexibility is mainly around scaling and upgrading independently. Now, this probably isn't a high-throughput system, but if a lot of people wanted to talk to the API, the API that gives you the sentiment results, you could just scale that one API. You don't have to scale the whole stack. Got it. Or if for whatever reason there's some news about Smart Hotel and everybody starts tweeting, then you could just scale out that one particular service; you don't have to scale the rest of the app out, because that's the only thing that's going to be busy. Yeah. That makes sense. That's cool. So basically we just added this extra web form, and that should be able to show us, so you can see Corey. He doesn't really like Smart Hotel, for some reason. It's rubbish and messy, but there you go. I think Corey is just happily sarcastic most of the time. I think so. I think that's it. Cool. He's great. This is great. One thing I want to ask you: you mentioned you've talked to some customers who had this kind of situation. Tell me some of those scenarios that you've run into. I mean, you can't talk about specific customers, but tell me some scenarios you've hit. Well, the scenario is typically that people really want to take those first benefits. Either they want to get started on the cloud journey or a migration, or, you know, we have customers saying, I've got to get out of this data center. Right.
I need to shut down this data center; where do I put my workloads? Right. And this is definitely a path they can take, right? And we have a few customers we work with today who are starting these migrations now and really trying to scale them, sort of setting up a little app migration factory. Right. Which is doing all the containerization. It's cool. There are typically little small things you have to do: we talked about configuration, those kinds of things. There are small tweaks in your authentication; you just make sure that things work, get your fleet of test people set up and everything. And then you just start pumping these applications through, containerizing them, and they all end up in this cluster orchestration world, which gives them all these operational benefits. So a lot of customers are starting to do that right now, taking those first steps in. And definitely the motivation is also that they're then in this cloud landscape, right? Right. So whenever they need to do things now, they have other options. It's so much easier for them to reach out to these more cloud-native patterns and ways of doing things. That's great. And since you guys support containers in Service Fabric, you could do Python, it doesn't matter what the technology is, you could just bring it, right? We can, yeah. The clusters I showed you, this is .NET Framework, right? So it has to be Windows, which means it has to be Windows containers, and it has to be Windows hosts that run the cluster. But if it was .NET Core, you could put it on anything. Exactly. And Service Fabric runs Windows, Linux, it runs anything on anything, basically. And we have some great scenarios where customers are even moving Java applications into Windows clusters, because they come from a Windows world already. So, yeah.
So they're taking their existing Java code that runs on Windows servers, putting it into Windows containers, and putting those containers in Service Fabric, basically just moving their world over. Yeah. That's awesome. Yeah. That's pretty awesome. Cool, cool. Well, this has been great, it's been fantastic. I think we've talked about Service Fabric maybe a dozen times, and, no offense to the other eleven times we talked about it, this was the best conversation we've had. Maybe you finally got it in there. You got it. It's been hard for me to wrap my mind around it, but now it makes a lot more sense. Yeah, and there's a lot more to that world, right? People building new microservice applications. We even have this actor framework that people talk about now, which is just a different way of programming these scalable applications. So there's a lot more to it than this, but the base orchestration capabilities, this is definitely a great scenario to showcase those. This is cool, this is cool. Well, this is the next chapter in the whole Smart Hotel saga. Sorry, Corey. Go out and watch Corey's video; we're going to have that in the show notes down below. Follow Mikkel on Twitter; we're going to have his Twitter handle down below too. If you're not following me, I understand. But it's been great. I am Robert Green, I mean, Brady Gaster. Thanks a lot, Robert, for letting me host your show this week, and thanks a lot, Mikkel, for coming out and doing this. Thanks for having me. It's been wonderful. Cool. Thanks a lot, guys. Take care.