All right. So my name is Shane Boyer, and I'm a PM on the Cloud Advocate team. Today we're going to be talking about building cloud-native apps with .NET Core 3.0 and Kubernetes. A couple of things we're going to go through here: first, we'll talk about health checks, address the changes and updates we've made to the Docker images, and talk about the new template that Glenn started out with. He did a great job, but we're going to jump deeper into building worker services, then endpoint routing, and then we'll talk about some of the things we can do with Kubernetes and the configuration system built into ASP.NET Core and .NET Core and what it offers there.

So first, let's talk about health checks. Health checks is that endpoint we have built into ASP.NET Core. For a long time in my past life at work, I did a lot of coding around building what we used to call watchdog services, where I would build an endpoint on my services that returned some sort of response, and in turn I'd have some other service hit it and confirm it was okay. Now in ASP.NET Core, we've got a new way to create that endpoint with a simple add. We reference a new NuGet package, Microsoft.AspNetCore.Diagnostics.HealthChecks, and just by adding the middleware with services.AddHealthChecks() and then endpoints.MapHealthChecks(), we get that endpoint. In this specific example I'm mapping /health; we can make that anything we'd like, but for me it makes sense to use /health. By running that, I can now hit the /health endpoint, and it returns a response if I'm healthy: it'll say Healthy and return that 200. Now we can expand on that and add database checks as well.
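As a rough sketch of the wiring described above (this is the standard Startup shape from the templates, not the exact code shown on the slide):

```csharp
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers the health check service.
        services.AddHealthChecks();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            // Maps the health endpoint; the path is our choice.
            endpoints.MapHealthChecks("/health");
        });
    }
}
```

With this in place, a GET to /health returns a 200 with the body "Healthy" when all registered checks pass.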
So if I want to make sure that my app also has database connectivity, I can set up checks against my database backends. If my service isn't connecting to the database properly, then we'll return that unhealthy response as well. This is really helpful when you're doing microservice development and deploying onto your cluster or into your containers: you want to make sure that when you're doing rolling updates your service starts up correctly, and that when you hit those health endpoints the services are up and running. Database checks are available in those diagnostics packages as well.

So, moving on to our Docker images. Containers are really at the core of building our microservices, and some of the improvements we've made in .NET Core 3.0 are really important when it comes to that inner-loop development and making things more efficient as we build out our services. As part of that, making the images smaller is very important. When we look at our Debian-based SDK images, previously under 2.2 they were 261 MB, which is a decent size, but now we've reduced that down to 207 MB. For the Alpine images, we've gone from 166 MB down to 106 MB, and that's a significant reduction in size when it comes to pulling those images and also scaling out. And that's just the SDK. When you look at the actual runtime, we're down to 88 MB on an Alpine image, which is really nice when it comes to pulling those images and scaling out on a cluster when we have to expand for spikes in traffic.

Now, Glenn mentioned briefly in the keynote building out the new worker service. A worker service is a new template available to us in .NET Core 3.0 that allows us to build those types of services that don't necessarily rely on an HTTP request.
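The database connectivity check described above could be wired up like this; one common approach uses the EF Core health-check package, and the context name and connection string key here are just illustrative:

```csharp
// Assumes the Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore
// package and an EF Core context named AppDbContext (hypothetical names).
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<AppDbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("Default")));

    services.AddHealthChecks()
        // Reports Unhealthy on /health when the context cannot reach the database.
        .AddDbContextCheck<AppDbContext>();
}
```

There are also community packages (e.g. for raw SQL Server, Redis, or storage accounts) that plug into the same AddHealthChecks() builder in the same way.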
These services are much like a Windows service you would create: long-running processes that just need to continually run and do work without relying on some sort of request. These are also services we can expand to run like a systemd service on Linux, with simple support for that. It's really easy to get started. It works on the generic host, and the generic host is much like our web host except it doesn't rely on anything web-specific; the web host builder is built to run Kestrel, while this one just hosts any generic service built on IHostBuilder. In this specific example, we just have a worker service that's going to add a hosted service and run.

To expand that to run as a Windows service, we simply add the Microsoft.Extensions.Hosting.WindowsServices NuGet package and then one simple line, UseWindowsService(). Similarly, if we want to run as a systemd service on Linux, it's the same kind of change: we add the Microsoft.Extensions.Hosting.Systemd NuGet package and call UseSystemd(). And all the other nice features of .NET Core, like dependency injection and the rest of our middleware, are available to us here.

So let's see what that looks like. I'm going to hop out to my Ubuntu WSL, and very simply, we'll just do dotnet new worker with an output directory of worker. Here we're just using the new template; it's going to create a worker service, and we'll see what that looks like in VS Code. So we'll open this up in VS Code. It looks like it's going to unpack a couple of things for us on our VS Code server, which means we've got a quick update. What this is going to show is just the template; we'll open up our worker real quick. This is going to look like many of the other templates that we've had in .NET Core 3.0.
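A minimal sketch of the generic-host Program described above, showing where those one-line service integrations go (each extension method comes from the NuGet package named in its comment):

```csharp
public class Program
{
    public static void Main(string[] args) =>
        CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // From Microsoft.Extensions.Hosting.WindowsServices;
            // a no-op unless actually running as a Windows service.
            .UseWindowsService()
            // The Linux equivalent, from Microsoft.Extensions.Hosting.Systemd:
            // .UseSystemd()
            .ConfigureServices(services =>
            {
                services.AddHostedService<Worker>();
            });
}
```

Because both calls are safe no-ops outside their target environment, the same binary can run at the console, as a Windows service, or under systemd.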
Again, it just works on the generic host. By default we're going to have our logger, an ILogger, and we'll have ExecuteAsync with a cancellation token. In this particular template, it's just going to wait for a cancellation request to come in before it shuts down the service. In our Program.cs, CreateHostBuilder does the build and run, and it's just going to run our worker service. If we look at the worker real quick, what it's doing is, every second, writing out to the console that we're running at a specific time.

So if I go back out to our command line here in Ubuntu, let's do a simple dotnet run. Actually, I have to cd into my worker first; let's clear that and do dotnet run. This should run just like any of our other .NET applications, and the expectation is that it's just going to print out that we're running. You'll see "Worker running", which is really great.

Now, we could add those NuGet packages and set this up as either a Linux service or a Windows service. But in the interest of staying with our theme of building services that run on Kubernetes, I thought it'd be nice for us to put this in a Docker container, because I want to run this on my cluster, maybe to do some other work later on. So what I'm going to do is add a Dockerfile here, and we'll pick .NET Core, run it on Linux, sure, why not. So now that we have our Dockerfile in here, it'll just produce our worker. I'm going to use the power of the cloud to build this: I'm going to use my Azure Container Registry to build this out for me. So I'll do this build and tag it as my worker service, if I can spell here, workerservice1, and we'll tag it latest. I need to pass my registry and the build context.
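The Worker class being described is, roughly, what `dotnet new worker` generates:

```csharp
// The template's worker: a BackgroundService that logs once per second
// until a shutdown (cancellation) request comes in.
public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;

    public Worker(ILogger<Worker> logger) => _logger = logger;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            await Task.Delay(1000, stoppingToken); // every second
        }
    }
}
```

The ILogger comes in via dependency injection from the generic host, and passing stoppingToken to Task.Delay is what lets the loop exit promptly on shutdown.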
So what we'll do is take our worker service and send it up; let me make sure my Dockerfile is in there. We're going to add my Dockerfile; let's just do it this way. We'll build our image in worker; hmm, it doesn't seem to find my Dockerfile there. All right, we did this earlier, so we'll go up into our worker directory. So we've got one built here, we've got a latest tag, and we can actually run an instance of this in a container. Let's say this is worker two, run it on Linux here, and we can deploy this container over to an actual container instance. So this runs an actual ACI instance of the worker, and while that's deploying, we'll go look at our container instances. We've got a worker sidecar test here, and we can fire this up to see how it's running. So we start that container, and the expectation is that its logs would show the same thing as the container running locally, which is great. So now we can take a worker service like this, and you could expect we could build those to take things off of a queue, do some work, and shut down as needed, or just have a long-running process. We'll let that fire up and go on to the next thing.

So endpoint routing is another feature we have. As we were building out APIs previously using MVC, some of the feedback we got was: I don't really want the V as part of my MVC; what I want to build is just the actual endpoint. I'd like to build a lot of APIs for my microservices, and really all I want to do is put an endpoint on that microservice, because it's just doing specific work on a specific endpoint. As we saw with the health checks, those are also using this endpoint routing as a way to return a response on a specific route.
So here we're just adding app.UseRouting(), and then we're saying UseEndpoints and mapping /hello to a response; this is just part of the simple web template, so if I go to /hello, it'll return our hello world. I don't have to set up the controllers or the models or the views as part of MVC. This allows me to build endpoints and services based on just a path, as opposed to setting up the whole of MVC.

So if I go back into my Ubuntu instance here, you can see what that looks like. We can just do dotnet new web, and we'll use a simple web output directory. Let's see what that looks like. If I go into my Startup, we've got our UseRouting and our endpoints, we're mapping this here, and again, we could just come down here and map another endpoint the same way if we wanted to. I could map /shane if I wanted to, have a response there as well, and do that same type of response for whatever endpoint you wanted. It's very simple.

All right, so the last thing that we've got is our configuration. .NET Core configuration is based on key-value pairs, and we have a lot of configuration providers that read configuration data into those key-value pairs. Since 2.0 and 2.1 we've been able to use JSON, XML, and environment variables; now we have Azure App Configuration as an option, and there are a lot of other custom configuration providers as well.

So why is that important for microservices? That really comes down to Kubernetes. Kubernetes provides two primary mechanisms for configuring apps: one is config maps, and the other is secrets. Secrets here are basically what we've been doing in ASP.NET Core, where we don't want to deploy our database connection strings and other secrets with our app settings; instead, we want to create a file and be able to put it in a location that's private and secure.
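The routing setup being demoed looks roughly like this in the empty web template's Startup (the extra path is just an illustrative example):

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        // The endpoint from the template demo.
        endpoints.MapGet("/hello", async context =>
        {
            await context.Response.WriteAsync("Hello World!");
        });

        // Adding another endpoint works the same way; the path is arbitrary.
        endpoints.MapGet("/shane", async context =>
        {
            await context.Response.WriteAsync("Hi from the second endpoint!");
        });
    });
}
```

No controllers, models, or views are needed; each MapGet call binds a path directly to a response, and MapHealthChecks from earlier plugs into this same endpoint table.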
So in Kubernetes, we can create a secret with a key and pass that file in, and that creates a generic secret on the cluster that can be used by the application. In the actual deployment YAML file, you'll notice the key pieces here: the volume we're creating, and the secret backing it. Going back to the previous slide, what we're doing is using the file secrets.json with that secret connection string. So if I'm connecting to a database, I create that secret on the cluster, then reference it in a volume on the pod mounted under /app/secrets, and use it as a key-value file. Then in my ASP.NET Core application, I use AddJsonFile in my configuration builder to reference it, and now it's available to me on my pod.

Now, the good thing about that is it just works. As a developer, I don't have to think extra about how to reference that information in my Kubernetes configuration; I'm just doing it the way that makes sense to me with any other configuration. It's the same as any JSON or XML file; I'm just adding it at a path that I know should exist on the infrastructure I'm deploying to.

So we're going to look at this example here. Actually, let's go back and check on our sidecar test first. Looks like we should be running. If we look at the logs for this container and refresh, our logs are available, and our worker is sending back logs, which is pretty great. All right, now let me look at our worker here, too many cd's. Awesome. So we'll open up this app; this is a full worker that basically works as a sidecar for us. Sorry, that's the wrong app; what we actually want to look at is the web application that has our configuration in it.
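A rough sketch of the pattern described above, with assumed names (db-secrets, secrets.json, /app/secrets). The secret might be created with `kubectl create secret generic db-secrets --from-file=secrets.json`, then mounted as a file in the deployment:

```yaml
# Deployment fragment (sketch): expose the secret as a file under /app/secrets.
spec:
  containers:
    - name: simple-web
      volumeMounts:
        - name: secrets
          mountPath: /app/secrets
          readOnly: true
  volumes:
    - name: secrets
      secret:
        secretName: db-secrets
```

On the ASP.NET Core side, the mounted file is picked up like any other JSON configuration source:

```csharp
// Program.cs (sketch): load the mounted secrets file with the JSON provider.
Host.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration(config =>
    {
        // optional: true lets the app still start locally, where the mount doesn't exist.
        config.AddJsonFile("/app/secrets/secrets.json",
            optional: true, reloadOnChange: true);
    });
```

This is what "it just works" means here: the app reads a JSON file from a well-known path, and Kubernetes is responsible for putting the secret there.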
So in our web application, we've got our Kubernetes deployment files; that's where we're mapping all of our configuration. Here we go. Let's see if we can get rid of this and open it up a little more. All right. So here's the Kubernetes deployment file for the actual web application, and then what we also want to look at is the worker. The worker file is using that configuration to set some environment variables, and we also have a secret already set up that points our configuration at Azure table storage, set up as a secret as I showed on the slide before.

So what I want to do now is walk through the worker we set up as a container and deployed to ACR, which is set up to work as a sidecar that monitors that health endpoint. You'll see here in our values that this worker is configured to hit the health endpoint and report back, basically as a log watcher, to make sure that we're up and running, and it's going to report that into table storage for us. So this is an example of a worker that's just set up in a container as a long-running process, very similar to a Windows service you might run on a box under your desk as a watchdog service to make sure your apps are up and running.

So what I want to do now is take all of this, push it up to Kubernetes, and make sure we're up and going. Let's clear this out. Now I want to deploy all of this out to my cluster, and if the cloud is working great for us today, let's look at our cluster and make sure we're running. We've got no services currently running out there. So now, first, let's apply the web deployment; that's good to go, and then we'll put out the worker service. The deployment will deploy our web service out there.
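A sketch of what that health-monitoring sidecar worker might look like. The service URL, polling interval, and class name here are assumptions; the worker in the demo additionally routes its logs to Azure table storage via Serilog rather than writing only through ILogger:

```csharp
// Hypothetical sidecar watchdog: polls a web app's /health endpoint
// and logs the result, running as a long-lived BackgroundService.
public class HealthWatcher : BackgroundService
{
    private readonly HttpClient _client = new HttpClient();
    private readonly ILogger<HealthWatcher> _logger;

    public HealthWatcher(ILogger<HealthWatcher> logger) => _logger = logger;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                // "simple-web" is an assumed Kubernetes service name.
                var response = await _client.GetAsync(
                    "http://simple-web/health", stoppingToken);
                _logger.LogInformation(
                    "Health check returned {status}", response.StatusCode);
            }
            catch (HttpRequestException ex)
            {
                _logger.LogError(ex, "Health endpoint unreachable");
            }
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}
```

Because it's just another container in the pod or cluster, the same watchdog can be pointed at multiple services purely through configuration.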
Now we want the worker service to be out there, so it's going to check and see if it's up and running. All right, that's clear, and then we'll actually look at our services. It looks like simple web is being loaded, we're waiting for that public endpoint, and the worker sidecar is also running. So we'll do kubectl get services and see if we get a public IP. Look, our public IP is running. So now we can go look at our super fancy web site and make sure that's up and going. All right, hello world is working for us. Fantastic. Let's check our health endpoint, because that's what our worker service is going to hit. Oh, healthy can't be found; this might be a good test for our worker service.

All right, now we can go check our logs. Let's take this down here. Our logs are actually going to write out to our table storage; it's already written 231 entries. We'll go all the way down to the bottom and do a refresh. You can see here it's debug, it's starting; you see our worker is running, it's hitting our start, we're getting some information here. Let's go back to the end, one more refresh, back to the last log, and it looks like the web app is good to go.

So now we've got a sidecar worker service that's really not related to the actual application. You can imagine we have multiple applications out there, multiple services, and it checks those health endpoints to make sure we're up and going. And it's writing to Azure table storage using Serilog, so we're using the other services available to us to build these applications, all running in a Kubernetes cluster. We'll just make sure that we're still going here, and everything is up and going. Awesome. So now I have my super awesome Hello World app, it's being monitored by a worker service, and I'm getting logs into our table storage. All right, let's go back here. And that takes me to the end of my time.