Hey, welcome to the session on .NET Core and Service Fabric. I'm Vostov Turchek. And I'm Sidon Rahulie. We're program managers on the Service Fabric team. Today we're going to talk a little bit about how .NET Core applications are written on Service Fabric, and dive into some of the tips and tricks we use to develop applications. We'll start with the basics of .NET Core: how we write .NET Core applications, how they're structured, and how we use the Visual Studio tooling. Then we're going to dive right into CI/CD using Azure DevOps, which has been a pretty hot topic at .NET Conf this year. We'll show you how Service Fabric works with Azure DevOps to do continuous integration and deployment: kicking off builds, upgrades, and the like. Then we'll also dive into monitoring with App Insights: we'll show you how we monitor performance counters as well as traces, and a little bit of searching for the different kinds of traces you get during upgrade scenarios. After that we'll show you some of the advanced techniques we use to scale Service Fabric applications in and out for .NET Core. Then we'll wrap up with a look at what's coming for Service Fabric, with a focus on the .NET Core work we're doing here. So with that, let's jump right in and go straight to demos. I think I'm starting on the demo, so let me go first. Okay, let me show you what we have. We're going to start with an application that we've probably shown you in the past. This is actually our quickstart application: the voting app, where we have a very simple back-end and a very simple front-end service, both written in .NET Core.
What I'm going to do initially is walk through the basics and the structure of how this application works, just to get you familiar with how .NET Core applications are written on Service Fabric. If you're a Service Fabric veteran, this will be a lot of review for you, but do pay close attention, because we're going to show you how some of this stuff is changing, evolving, and improving in Service Fabric in the future. So let's start with the very basics, right at the beginning. We have two services in this Visual Studio project, and I'm going to start with the back-end service. This is a stateful ASP.NET Core service, where we inject Reliable Collections, which are Service Fabric's built-in replicated data structures and data store. We're going to show you how this project is structured and how you do ASP.NET Core on Service Fabric. Right at the beginning, things get interesting right away. As soon as we jump into the main entry point of the program, we already have some Service Fabric concepts to think about. The way this whole thing works is that every service you run, whether it's in a container or not, always has to have some host process. This is just a regular process executable, and your service actually runs inside it. So the very first thing you have to do when this process comes up is tell the Service Fabric runtime: hey, this thing's a host process, and I want my services to run in here. So the very first thing we do is register what we call a service type. Now, service type is a pretty important concept, and it's something we're going to revisit later. This essentially says: I have this compiled set of code and config, which represents the service and all the bits needed to run it, and that's what we call a type.
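As a rough sketch of what that registration looks like (this is not the literal quickstart code; the type name "VotingDataType" and the class name VotingData are assumptions for illustration), the host process entry point is along these lines:

```csharp
using System;
using System.Threading;
using Microsoft.ServiceFabric.Services.Runtime;

internal static class Program
{
    private static void Main()
    {
        // Tell the Service Fabric runtime that this host process can run
        // instances of the "VotingDataType" service type; the runtime
        // invokes the factory whenever it places an instance here.
        ServiceRuntime.RegisterServiceAsync(
            "VotingDataType",
            context => new VotingData(context))
            .GetAwaiter().GetResult();

        // Keep the host process alive so registered service instances
        // can keep running inside it.
        Thread.Sleep(Timeout.Infinite);
    }
}
```

The important line is the `RegisterServiceAsync` call: it is the "this host can run this service type" handshake described above, and everything after it is just keeping the process from exiting.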
It's a similar concept to a class in an object-oriented language: you define the class once, and later you can create as many instances of that class, called objects, as you want. That's all this line of code is saying: this host process can run this service type, and you can instantiate it in here. Once you've done that step, you can move on to the actual service class itself, which is what we see here. You can see this is a class that inherits our StatefulService base class, which is another Service Fabric concept. What's happening is that every time you create an instance of the service, it's going to instantiate an instance of this voting data class, the service class, inside the host process, and that's what that registration was all about. At this point, you're still bootstrapping your service and your application to run in a Service Fabric environment, and this is what we collectively call the Reliable Services framework. Once you get down to this point here, this is where you're opening up listeners and communication endpoints for the service, so other clients and other services can connect to it. This is where we finally start up ASP.NET Core, if you're using ASP.NET Core for your application. Now this is where it becomes more familiar if you've done ASP.NET Core before: the web host comes in, you start bootstrapping Kestrel, you add MVC, you add App Insights, and all that goodness. You can also see where Service Fabric plugs itself in: we inject the Reliable Services state manager as a dependency into ASP.NET Core's built-in dependency injection system. By doing that, once you get into the actual nuts and bolts of your application, like the controllers, you get access to Reliable Collections directly within those controllers.
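A hedged sketch of that listener setup, based on the standard ASP.NET Core stateful service template (the class name, type names, and Startup class are assumptions, not the exact quickstart code):

```csharp
using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Services.Communication.AspNetCore;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class VotingData : StatefulService
{
    public VotingData(System.Fabric.StatefulServiceContext context)
        : base(context) { }

    protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
    {
        return new[]
        {
            new ServiceReplicaListener(serviceContext =>
                new KestrelCommunicationListener(serviceContext, (url, listener) =>
                    new WebHostBuilder()
                        .UseKestrel()
                        // Inject the Reliable Services state manager into
                        // ASP.NET Core's DI container, so controllers can
                        // use reliable collections directly.
                        .ConfigureServices(services => services
                            .AddSingleton<IReliableStateManager>(this.StateManager))
                        .UseContentRoot(Directory.GetCurrentDirectory())
                        .UseStartup<Startup>()
                        .UseServiceFabricIntegration(
                            listener, ServiceFabricIntegrationOptions.UseUniqueServiceUrl)
                        .UseUrls(url)
                        .Build()))
        };
    }
}
```

The `AddSingleton<IReliableStateManager>` call is the Service Fabric plug-in point mentioned above: from here on, any controller can take an `IReliableStateManager` constructor parameter.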
So once you're at this point, you're back in the land of a vanilla ASP.NET Core application, and you just go about writing ASP.NET Core as you normally would. Okay, I'm going to launch this application in my local cluster, which may or may not be running at the moment; I had to do a quick reboot before I came on here. Let's see how this goes. Since this is an initial launch, when I hit F5 or Ctrl+F5 in Visual Studio to get the debugger going, it's actually going to spin up that local development cluster on my laptop. I've got it configured to use a one-node cluster, and I also have something configured here, which is in fact the default, called Refresh mode. That's the debugging mode in Visual Studio that allows you to make changes directly to the application without having to redeploy it every single time you change something. This is especially helpful if you're doing web development, where you need to make frequent changes to HTML files, JavaScript files, CSS files, or what have you. You can make those changes directly in your Visual Studio IDE without redeploying the whole thing. So that really fast development loop you're used to with web applications, where you make a change, hit save, refresh your browser, and the change shows up: you can do that by default, because the default Visual Studio Service Fabric tooling configuration is set up for that fast develop cycle. All right, it's going to take just a moment to spin up the cluster, so while it does, I'll walk you through a couple of the Reliable Collections lines of code and how this stuff works. The basics of it, as you can see here in this put method: this is just a regular ASP.NET Core Web API controller. It's actually fairly simple to use.
So we have this thing called the state manager, and that's the component that manages the lifetime of Reliable Collections. You ask it to go and grab a reliable collection for you. The first time you do that, it actually has to do an operation that replicates out to other nodes to register that collection on them, and that's where the replicated, highly-available part comes in. That's why this is an asynchronous operation, and why you see this await here: the first time through, it's doing that replication operation. Every subsequent call is going to be really fast, because once the collection is created, it gets cached and you get it back very quickly. So the recommendation when you're writing this kind of code is to not cache these collections yourself; it's better to do what you see on screen right now, which is to call the state manager every time, because every subsequent call should be very quick. This ensures you get that reliable collection created the right way every single time. If you cache it in a private member variable, you can get some strange behavior depending on timing, when the code runs, whether you're listening on secondary replicas, and that kind of thing; there are a few more caveats in that case. It's usually better to just call this every single time. In the next lines of code, you can see this is all transactional: you create a transaction off the state manager, do your operations on the collection, and then commit the transaction like you normally would. The important thing to remember, and here's an important tip for you: if you're using Reliable Collections, you have to make sure that all the values you put in there are treated as read-only. You never want to pull a value out of a reliable collection and just make changes to it in memory.
If you do that before you commit, you're changing the object in memory, but if the transaction rolls back, we don't roll back the change that occurred in memory. What happens is your secondary replicas have copies of the data structure that haven't been changed yet, but because you changed it locally in memory, the primary is operating on a changed data structure. If a failover happens, that data will suddenly be lost, and the client that originally sent it to you won't realize it. So you have to make sure that everything you put into a reliable collection is treated as read-only; that's very important. Ideally, what you want to do is a deep copy of these objects: every time you pull an object out and want to change a property, make a full copy of the whole object before you put it back in and commit the transaction. Okay, I think the cluster is up and running here. I'm going to check real quick in the cluster manager; I should be able to hit it locally. Let me open up a new Edge browser and go to localhost on port 19080, and that'll open up Service Fabric Explorer. I've got three applications running. One of them is there, but we don't need to worry about that one; it's not the one I want to show you today. If I go straight into this application, you can see it's got two services that started up automatically: the front end and the back end. I can drill down into this web service real quick, and if we go all the way down into it, you'll see the endpoint it's actually listening on. Okay, so that's my full computer name there, and it's on port 8081. So I can just hit localhost on port 8081, and it should load up the application once the web stack spins up. Okay. So while we're waiting for that... there it is. Great.
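The controller pattern just described, getting the collection from the state manager on every request, working inside a transaction, and never mutating stored values in place, can be sketched roughly like this (a hedged sketch: the controller name, route, and "counts" dictionary name are assumptions, not the actual quickstart code):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;

[Route("api/[controller]")]
public class VoteDataController : Controller
{
    private readonly IReliableStateManager stateManager;

    // The state manager arrives via ASP.NET Core DI, as shown earlier.
    public VoteDataController(IReliableStateManager stateManager)
    {
        this.stateManager = stateManager;
    }

    [HttpPut("{name}")]
    public async Task<IActionResult> Put(string name)
    {
        // Ask the state manager every time rather than caching the
        // dictionary in a field; after the first call this is cheap.
        IReliableDictionary<string, int> counts =
            await this.stateManager
                .GetOrAddAsync<IReliableDictionary<string, int>>("counts");

        using (ITransaction tx = this.stateManager.CreateTransaction())
        {
            // The update lambda returns a *new* value instead of mutating
            // the stored one, which respects the "values are read-only"
            // rule. For complex types, deep-copy the object, change the
            // copy, and write the copy back with SetAsync.
            await counts.AddOrUpdateAsync(tx, name, 1, (key, oldValue) => oldValue + 1);
            await tx.CommitAsync(); // replicated to a quorum before this returns
        }

        return this.Ok();
    }
}
```

If the transaction never commits, nothing in the replicated store changed, and because no in-memory object was mutated, the primary and the secondaries stay consistent.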
So like I said, this is the voting sample, our quickstart. You've probably seen it before. You can add some values here, and then you can vote on them. Cool. It's pretty simple, and we're using this really simple one because we don't want to get too deep into the details of the application itself; we want to show you some of the cool stuff around it. Okay, so I think we should probably show how we deploy it through a CI/CD environment. Yeah, let's show where we store this code and the CI/CD pipeline. Let me just plug in. We'll be showing Azure DevOps here and how we integrate Service Fabric into DevOps. Yeah. So on the screen, you can see that I have an Azure DevOps project open, and within it, the things we care about in particular are Azure Repos and Azure Pipelines. If you go into Azure Repos, you'll see that we've stored Voss's quickstart application as a Git repository. In addition to that, we also have the ARM templates we use to deploy the cluster, and all the artifacts we use to spin up the build and release pipelines in Azure DevOps. Now, all of this will be shared at the end of the talk, so you'll have access to this entire repository. But in addition to the repo, we also need to think about the pipeline. When it comes to CI/CD, there are two stages in Azure Pipelines: build and release. Before I show how to create a build pipeline, let me go ahead and kick off a build so it runs while I show you how to create a new one. So I'm gonna queue our existing build, and I'll describe what it does as I create a new build pipeline. While that runs, I'll start a new build pipeline, and you'll instantly see that I have a variety of options for where to pull the source code from. We've stored all of our code in Azure Repos Git, so I'm just gonna go ahead and choose that.
You can pick the branch you wanna build off of, and because we already have a build on master, I'm gonna pick one of the other ones so it doesn't interfere with the existing build. Go ahead, continue, and then there's a template. Here you can select from a variety of templates for the type of application or solution you're building, and in Azure DevOps builds there's already a template for Azure Service Fabric applications. So I'm gonna go ahead and apply that, and you'll see there are a bunch of tasks that run as part of this pipeline. First, there's an agent job, and the agent pool is Hosted VS2017. What this means is that the build runs on an agent that already has all the dependencies for Service Fabric applications, so you don't need to worry about setting up any machines with those dependencies yourself; that agent already exists in the Hosted VS2017 agent pool. Then there are the tasks you can see on the left side: Use NuGet, NuGet restore, build the solution, and build the Service Fabric project. That's essentially what Voss did on his local machine: when he hit F5, it restored the NuGet packages, built the solution, and built the SF project. This build in Azure Pipelines is automating all of that for you. Then there's an Update Service Fabric Manifests task, which takes care of updating your Service Fabric manifests if there are any changes: each time your repository is built, if there are any updates to the repository, which is what this checkmark controls, it'll go ahead and update your Service Fabric manifests. The last two tasks in the build pipeline take care of copying the artifacts, the results of the build, into an artifacts folder. So that takes care of the build pipeline, and that's exactly what we just queued off.
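For reference, the same classic build template could be expressed as an azure-pipelines.yml roughly like the following. This is a hedged sketch, and the exact task versions, pool image name, and paths are assumptions rather than what the designer generates for you:

```yaml
pool:
  vmImage: 'vs2017-win2016'               # Hosted VS2017 agent pool

steps:
- task: NuGetToolInstaller@1              # "Use NuGet"
- task: NuGetCommand@2                    # "NuGet restore"
  inputs:
    restoreSolution: '**/*.sln'
- task: VSBuild@1                         # build the solution
  inputs:
    solution: '**/*.sln'
- task: VSBuild@1                         # package the Service Fabric project
  inputs:
    solution: '**/*.sfproj'
    msbuildArgs: '/t:Package'
- task: ServiceFabricUpdateManifests@2    # bump manifest versions on change
  inputs:
    updateType: 'Manifest versions'
- task: CopyFiles@2                       # stage the build results
  inputs:
    targetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1           # publish as the "drop" artifact
  inputs:
    artifactName: 'drop'
```

The step order mirrors the designer tasks described above: restore, build, package the .sfproj, update the manifests, then stage and publish the drop.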
The final thing you care about in the build is triggers. In order to enable continuous integration, you need to go to the triggers and select a branch to enable continuous integration off of. Here, we're gonna specify that the branch we wanna do continuous integration off of is my original branch. And with that, that concludes the build pipeline, and now I'm gonna show you the second part of Azure Pipelines that we care about, which is releases. You'll see there are two Releases entries on the left side, and this has to do with Azure DevOps preview features: the starred Releases is actually the preview version, and we're gonna show the Releases that's available today. Once the build is done, the second part of CI/CD is to take the artifacts of the build and actually release them to some sort of endpoint. To create a release pipeline, I'm gonna create a brand new one, and once again you're presented with the option to select a template. We're gonna look for the Service Fabric deployment template, which exists, and apply it. Now you have to name your stage something, and since this is a deploy stage, I'm gonna call it Deploy. This release pipeline has two parts to it: the artifacts, which is what the release pipeline picks up, and the deploy stage itself. Before we can get to the deploy, we need to select the artifacts that the deploy stage will get its inputs from, and we wanna pick up from the build that we just configured. So we're gonna select the brand new build we just made. And now this deploy stage is configured to pick up those artifacts for its deployment task. To configure the deployment, you need to specify the cluster connection.
Now, this is the Service Fabric cluster, in Azure or any on-premises setup, that you wanna connect to. The credentials you need to provide are: a connection name, which is generic; the cluster endpoint of your cluster; the thumbprint of the certificate that secures the cluster, so this would be the cluster certificate that actually locks it down; and the base64 encoding of that certificate. Now, we've actually gone ahead and created this already, so I'm just gonna select the one we already have. And you'll notice that, similar to the build, there's an agent job that runs, and this is once again the Hosted VS2017 agent. This agent is once again configured with all the dependencies needed to deploy a Service Fabric application. In the second part of Voss's demo, when he hit F5, once it built his solution it actually went ahead and deployed it to the local cluster for him automatically; this agent job has all the dependencies needed to do that, pre-configured for you. So let's go check out the Service Fabric deployment properties. In here you'll see that there's a publish profile. The publish profile is contained in the artifacts folder, and it has all of the settings for your publish. You can also override and specify application parameters. Now, the thing of note is the upgrade settings. In upgrade settings, you can specify what happens when the deployment being made is an upgrade. To show this, I'm actually gonna override all the publish profile upgrade settings. If you didn't do this, the upgrade settings would come from the publish profile, but because I wanna show them and configure them myself, I'm gonna go ahead and override them. And you'll see that there are three different upgrade modes: Monitored, UnmonitoredAuto, and UnmonitoredManual.
Monitored means the Service Fabric platform will actually monitor the upgrade and roll it back if there are any issues. UnmonitoredAuto means the platform will not monitor your upgrade; it'll just push the upgrade straight through. UnmonitoredManual means that at each upgrade domain, when the upgrade finishes for that domain, you have to explicitly click to continue the upgrade. For the failure action, there are two types: Rollback and Manual. Rollback only applies if you're in the Monitored upgrade mode; if a monitored upgrade fails at a particular upgrade domain, the Service Fabric platform will automatically roll back that upgrade for you and make sure your application sees zero downtime. Manual is available in the other upgrade modes, and it means that every time there's a failure in an upgrade, you have to manually go in and decide what you wanna do. All the other settings include timeouts: when you deploy on your local machine or using our CLI or PowerShell, there are a bunch of parameters you can pass in when doing deployments, and these settings essentially encapsulate those in Azure Pipelines releases. So that takes care of releases. I'm gonna save this release pipeline, and the very final thing I wanna do is enable continuous deployment: I want this release to be kicked off every time my build completes. So I'm gonna go to my artifacts and enable the continuous deployment trigger. And with that, I actually have a complete build and release pipeline: an Azure Pipeline that takes any new changes pushed to my branch, builds them on a pre-configured agent pool, and then releases to a Service Fabric cluster. Nice, yeah. A lot of those options you saw in the upgrade settings might actually look familiar if you're used to some of the command lines in Service Fabric.
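As a rough illustration of that mapping, here's what the same monitored upgrade with rollback might look like as the underlying PowerShell. This is a hedged sketch: the endpoint, application name, type version, and timeout values are made up for the example, not taken from the demo cluster.

```powershell
# Connect to the cluster using the cluster certificate thumbprint.
Connect-ServiceFabricCluster `
    -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000" `
    -X509Credential -ServerCertThumbprint $thumbprint `
    -FindType FindByThumbprint -FindValue $thumbprint `
    -StoreLocation CurrentUser -StoreName My

# Kick off a monitored rolling upgrade that rolls back on failure,
# one upgrade domain at a time.
Start-ServiceFabricApplicationUpgrade `
    -ApplicationName "fabric:/Voting" `
    -ApplicationTypeVersion "2.0.0" `
    -Monitored `
    -FailureAction Rollback `
    -HealthCheckStableDurationSec 60 `
    -UpgradeDomainTimeoutSec 1200 `
    -UpgradeTimeoutSec 3000
```

The release task's Monitored/UnmonitoredAuto/UnmonitoredManual modes and the failure action map onto these cmdlet switches; the timeout fields in the release UI correspond to the same `-…TimeoutSec` parameters.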
So the things Sidano was talking about, the upgrade settings, whether it's monitored, the rollback settings, all of that, it's basically just running the same PowerShell commands or the same CLI commands that you'd use in your own deployment environments, and that Visual Studio also uses. It all funnels down to the same set of commands being run; these are just different ways of invoking them. Yeah, so you wanna show the app running in Azure? Yeah. So if you go to the screen again, you'll notice that in the Service Fabric cluster running on Azure, the service has been deployed. This was the build that I configured initially, and you'll notice that it's identical to the one that was deployed on his local machine. Refresh. You'll see that there are two services: the voting web, which has five instances, and the voting data. So it's the same one that was deployed, now running in Azure. And if I hit the endpoint in Azure, it's running. I already put up two options. I see, I voted for you. Okay, Sidano or a piece of cardboard... I think I'll vote for myself. Thank you, thank you. Yes. All right, so that takes care of Azure DevOps. I'm just gonna go back to Repos, and we actually figured that the quickstart we have today isn't the best version of that quickstart. I mean, it's pretty good, but I opened a PR for you. Sounds good, so let's go check out that PR. So in Azure DevOps under Repos, if you check out pull requests, you'll notice that Voss made a pull request to extend the quickstart application and make improvements to it. Now, I trust Voss, so I'm just gonna, I don't know why, approve this, gotta hit that. I voted for you, that's why. Yes, yes, that's why. Approve this. And you'll notice that when I complete and merge this PR into master, the build that I created will trigger automatically, and it'll do an entire Service Fabric build and release to the cluster while upgrading my existing application.
So let's go see if the build got triggered. It has. Now I'm gonna hand it back to Voss, who's gonna walk through the extended voting application code and explain how it's different and what improvements he made. Probably should have done this before I approved the PR, but let's check it out. Oh, it's good, I promise. I trust him. Here, plug me back in and I'll show you what it is. All right. So yeah, we made a few changes to this. Now, the original voting app that we showed is admittedly a little boring. It's not the most interesting thing in the world, but it gives you a baseline and somewhere to start. The main problem with that initial voting app is that the backend service was a stateful, partitioned service, but the voting app only represents a single poll. So I can put up one poll with as many candidates as I want, and I can vote on them. By partitioning that thing out, I can scale that backend service out and put a lot of candidates in that one poll, which is fine. The problem is that the partition count you set up initially on that backend service is fixed: once you pick a partition count, you're basically stuck with it forever. The way we get around that is an interesting trick you can do in Service Fabric, and it's actually a fairly common architectural pattern that really takes advantage of the microservices model. So instead of a single service, we've extended this voting application with just a couple of simple modifications so that you can now create multiple polls, and there's a little trick to how we did it. So here's the updated version. Let me run this again real quick to get it deployed, and I'll show you what we've done. The first thing I wanna point out is up in the application manifest here.
Now, if you've used Service Fabric, you've probably seen that in your application manifest you come down here, you import your services, and then you have this thing called default services; you've probably configured some of the settings for how you want your service to run down in that default services section. Now, default services is kind of a shortcut. It's not really the way you'd typically create service instances in an application. When you spin up an application instance in Service Fabric, it's normally empty, and you then programmatically create service instances inside that application. When you specify default services, that service instance gets created automatically when you create the application instance, and this is good for local debugging: Visual Studio uses this to attach a debugger to that service automatically for you, and it's pretty convenient locally. When you get into more complicated application scenarios, or when you're running in production, you typically wouldn't do it this way. Instead, you'd have your build-and-deploy pipeline or your CI/CD environment run an extra command that says: go create my service instances. The reason you do that is that you can then go back later and update those service instances on the fly using an update command, and you can also do a really cool trick where you create a service instance programmatically from another service, and that's what we've done here with this voting application. So let me show you how that works.
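For reference, a default services section in ApplicationManifest.xml looks roughly like the following. This is a hedged sketch based on the standard Visual Studio template; the service name, type name, and parameter name are assumptions:

```xml
<DefaultServices>
  <!-- Created automatically when the application instance is created. -->
  <Service Name="VotingWeb" ServicePackageActivationMode="ExclusiveProcess">
    <StatelessService ServiceTypeName="VotingWebType"
                      InstanceCount="[VotingWeb_InstanceCount]">
      <SingletonPartition />
    </StatelessService>
  </Service>
</DefaultServices>
```

Deleting an entry like this from the manifest is what turns a service from "always created with the application" into "created on demand," which is exactly the change the demo relies on.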
I don't have that backend data service running. But if you look down here at your service types, and this is that service type concept I mentioned at the very beginning, which is pretty important, this is where you register with the cluster the set of binaries and configs that make up your service. We call that a service package, and a service package has everything needed to run a service instance. You can see down here that I have these two service types registered inside this application, and that registration means the cluster itself is aware that there's a service type, that it has a version, and that at any time I can create instances of it. You can see this web one here came from the default service that we specified, and you'll see that I actually don't have a backend stateful service running at all right now. So what I did is make a little change to the service: if you go to the website again, you can now put in, for example, the name of a poll that you wanna create. What this does is hit this home controller and execute some code here; you can see this is my index now, and it takes in the name of the poll. What we're doing is using the Service Fabric client to go and instantiate a new service instance, that's the CreateServiceAsync call, and we're saying: create one of this type, with this partition scheme, this number of replicas, and so on. You can parameterize this basically however you want.
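That CreateServiceAsync call looks roughly like the following. This is a hedged sketch, not the literal controller code: the application name, the "polls" naming scheme, the type name, and the replica/partition counts are assumptions for the example.

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

public static class PollFactory
{
    // Create a brand-new stateful service instance to back one poll.
    public static async Task CreatePollAsync(string pollName)
    {
        var description = new StatefulServiceDescription
        {
            ApplicationName = new Uri("fabric:/Voting"),
            // The poll name becomes part of the service name, so the
            // cluster itself tracks which polls exist.
            ServiceName = new Uri($"fabric:/Voting/polls/{pollName}"),
            ServiceTypeName = "VotingDataType",
            HasPersistedState = true,
            MinReplicaSetSize = 3,
            TargetReplicaSetSize = 3,
            // Per-poll scale: a poll expected to be busy could be
            // created with a higher partition count here.
            PartitionSchemeDescription = new UniformInt64RangePartitionSchemeDescription
            {
                PartitionCount = 1,
                LowKey = long.MinValue,
                HighKey = long.MaxValue
            }
        };

        using (var fabricClient = new FabricClient())
        {
            await fabricClient.ServiceManager.CreateServiceAsync(description);
        }
    }
}
```

Because the service type is already registered with the cluster, this call only instantiates it; no new binaries are provisioned, which is why creating a poll is fast.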
So every time there's a request for a new poll an instance of the service is gonna get created and you can see here on the backend this is the wrong one here let's see that's over in Azure I'll link it back to my local cluster here here we go okay so I'm back in my local cluster so you can see this is the poll that was created so what I did is I constructed the name of the service and put the name of the poll right there into the service name and so then when I go to that poll here now I can just put in say for example cats and dogs and whatever I want and this will then hit the reliable collection in the backend and create that poll so then if I wanna start another poll for example I wanna vote on cars now it's just created another poll for me and what this does is it created an entirely new service instance to represent that poll so now what I've effectively done is I've created a way where I can create as many polls as I want and I'm loading service fabric actually do the job of tracking those services that are running those polls and the cool thing about that is I don't then have to go and write my own UI or management or anything to actually keep track of all of those services I'm creating because they're all just listed right here they're just more services and so anytime I wanna end the poll I can just come in here and I can delete it if I wanna keep track of a poll I can come in here and look at its status and even better than that every poll I can create I can give it different parameters so when you saw this code in here so every poll that you create can actually give different parameters so if I have a poll that's gonna have a lot of data that's gonna hold a ton of votes or a ton of different candidates I can actually create it with more partitions so this overall voting application now scales out per poll so I'm not really limited by a fixed number of partitions I can always create more service instances on the fly and so effectively your scale out is 
pretty much unlimited in terms of the capacity that you set up up front. It's just a matter of whether you have enough hardware to handle all the data and all the traffic you want coming in; the architecture allows you to just keep scaling out as much as you need. So this is a fairly common pattern that we see with Service Fabric, because it has this unique ability to create service instances on the fly. Now, the reason we have these service types in here is that the services I'm showing you right now are actually just running EXEs; they're not running inside of containers. You can do the same thing inside a container as well, and it wouldn't change a whole lot. But because Service Fabric has the ability to run services that are not inside of containers, you have the problem of packaging up all those binaries: you have to put them somewhere so they can be provisioned to the cluster. So there's a provisioning stage when you're deploying (the CI/CD pipeline in Azure DevOps that Sudhanva showed you does this for you automatically, as does Visual Studio) which copies all those bits up onto the cluster and tells the cluster about the type and the version of the type. Once that's up there, you can go and instantiate these things as much as you want. So it's something that's fairly unique to Service Fabric. The only other change we had to make was here on the front end: a little change to the JavaScript. You can see we've just added the name of the poll into the JavaScript, so when you click "vote" it goes to the right poll. That's the one thing we added, and then we changed the routing a little bit down here in the ASP.NET Core MVC routes. That's really all it took. The interesting thing is that the back-end data service actually didn't change at all. We didn't change any code whatsoever in the data service; it stayed exactly the same, because we're just creating more and more instances of it. The code we have to write for the data service is really simple, because there's no need to keep track of multiple polls in there: each instance only has to keep track of one poll, and you just create multiple instances on the fly. So it keeps your code very simple and straightforward, but the architecture you can build with this gives you that scale-out capability, which is really neat. All right. How are we doing on that upgrade? Good, I think the upgrade's kicked off, so I can show what it looks like in Azure Pipelines. Cool, just switch yourself back in there. Okay, so we showed you what the application does and what the modifications did, and we kicked off a rolling upgrade by merging that pull request, so that should be running in that Azure cluster now. So in Azure Pipelines, you can see that the build that was kickstarted automatically is complete. Under Builds, the triggered build completed and moved all of the Service Fabric bits into an artifact called "drop", which was then picked up by the release we configured, and that release is still running. So let's go ahead and check out the logs of this release. All right, while this thing loads, I can show Service Fabric Explorer, which is actually showing this upgrade. In Service Fabric Explorer you'll see that the upgrade we're doing is actually faulty. The upgrade started in Releases runs as a Service Fabric monitored upgrade with rollback, so it'll upgrade one upgrade domain at a time, and you will find that it's stuck on upgrade
domain zero. What the Service Fabric platform will do, because we set the monitored rollback mode, is recognize that the upgrade is faulty; it won't even get past upgrade domain zero, and the upgrade will roll back. We'll show that as soon as we check out the logs, which don't seem to be loading yet. When upgrades like this fail, or any time you do any sort of cluster-level operation, one of the other things you care about is your logs, and we've actually configured this cluster and these applications with Application Insights. So from the Application Insights overview we're going to show two things: how you can get metrics, that is, perf counters on the cluster (memory, CPU usage), and how you can query the analytics store for things like traces, and specifically upgrades, so you can figure out where in the upgrade something went wrong and what's happening at the cluster level. If I click on Metrics, you'll see there are two metric namespaces. The standard Application Insights metrics are configured automatically for you; underneath this you can select metrics like available memory, so if I click Available Memory I can see a graph of it, a dip and a spike back up, and I can see things like CPU usage. Vaclav, in his ARM template, which he'll talk about very shortly, also configured some custom metrics that are useful for Service Fabric stateful services. Because a stateful service does a lot of reads and writes to disk, we wanted details on disk activity, so there's a metric that tells you the number of disk reads per second and a metric for disk writes per second, which spikes any time you submit a vote or add a new option. So the upgrade that we deployed failed, and if you're interested in checking out what went wrong, you can use the Application Insights overview once again. This time we're going to go to the Analytics tab, and here what we're really doing is querying a data store that's already provisioned for you in App Insights, called traces, which holds a huge dump of all the traces generated by the Service Fabric platform. This can be pretty verbose and hard to read, so we have a document that lists all of the Service Fabric event IDs for these traces. The event we're particularly interested in is the application upgrade event, so I'm going to filter all of the traces on that event ID. If you look at the traces, the event ID shows up in a column called customDimensions, so I'm just going to modify the query to give me only the traces for that particular event ID, and now I see the trace for the single application upgrade that we deployed. If I want to see all of the events that happened for this application upgrade, I can grab something called the deployment name, and that shows all the traces for that particular deployment. So you can see that an upgrade started, and you can see everything the platform did with respect to that upgrade. You can see traces like "RunAsync has been invoked", which means a new service has started, and you can see upgrades being started, which gives you a more filtered view of that particular upgrade. Very cool. It actually looks like the upgrade is going through now, so whatever issue there was has healed itself, which is kind of nice. That's awesome. I don't know what you did, but you should have reviewed my PR a little more closely. So it takes a little while to get those traces out there, but eventually they'll come in and we should be able to see a little more
information about any errors that occurred. Usually what will happen is we'll retry: we'll restart the process if it crashes, which can happen sometimes, or run it on a different node. So what you see is that between each upgrade domain, as the upgrade rolls through, there's a period of time where Service Fabric waits to see if the health either stabilizes or stays in error; if it stabilizes, it continues on to the next upgrade domain. That's what we saw here, and now we're going on to upgrade domain three. It's rolling, so we should be able to hit this endpoint in Azure soon. So let me show you how that ARM template is set up in order to get App Insights configured that way. Pull me back in real quick. Excellent. All right, once that comes back up, let me show you my template. Here we go. This is actually fairly simple to set up, and I'll show you what I've done. I've got the ARM template open in Visual Studio Code. If you come down in the ARM template to the virtual machine scale set, this is the underlying infrastructure in Azure, the VMSS that the cluster actually runs on. This provides the VMs; the Service Fabric software that stitches the cluster together is set up as a VMSS extension, and that's what you're seeing here. Once you get into this extension, you'll see the Service Fabric extension and a few settings, et cetera. When we get down to diagnostics, however, you'll see I have Azure Diagnostics set up, and inside this WAD config we've got an Application Insights sink. The sink is basically telling us where those traces should end up once they're written out from the Service Fabric cluster. If you scroll down a little further, you can see the configuration for the sink. Now, instead of pasting in an Application Insights key, which is one way you can do this (set up Application Insights ahead of time, grab the key, which is a GUID or some string like that, and paste it in here), I actually have the Application Insights resource as part of the same ARM template. So when I deploy this ARM template, it not only creates the Service Fabric cluster, it also sets up App Insights simultaneously, and then I reference that resource ID and just grab the instrumentation key out of it. That way I never have to fetch the instrumentation key myself, put it in here, and worry about encryption or anything like that; it happens automatically for me. Once you have that set up, you're most of the way there; then it's just a matter of setting up your ETW event sources. These event sources are the ones we have set up for the Service Fabric infrastructure, the system so to speak, and then the application platform part of it. If you're using Reliable Services, you can set up that ETW event source here, and that'll give you things like "RunAsync has started" or "RunAsync has canceled", all those kinds of traces. The application I wrote actually doesn't have any log output whatsoever, which is terrible, but despite the fact that I didn't put in any logging, we still got pretty rich traces coming into Application Insights that give us at least some indication of what was happening during that rolling upgrade. I can at least see how it's going through upgrade domains, how long each upgrade domain is taking, and any errors that come up. Now, the errors you'll see in App Insights in this case will be only the errors that Service Fabric can detect, and that's fairly limited to things like a process crash or an unhandled exception; anything happening within your code the system obviously can't see. So you'll want to make sure that you have pretty decent log output. If
you're doing .NET on Windows, it's fairly simple: you just write out to ETW, and you can even use the App Insights SDK to make it pretty easy. Did I just approve the PR where you had no logs? You did, my friend. I didn't have unit tests either. I'm just saying, man, come on, that's rough. But luckily we had App Insights set up, so we can at least see what's happening in there, and then the performance counters as well. These are the performance counters I put in, just copied straight out of Perfmon locally; you can grab whatever you want. These are the custom ones? Yeah, these are the custom ones. A bunch of perf counters were added automatically by App Insights; these are custom ones I put in to monitor disk activity, process, and memory. Now, here's an interesting thing about the way I set up the process counter. I wanted to see the processor time for just one specific process on the machine, not the processor time for the entire machine, because that doesn't really help me much. What you have to do to set this up, and this gets a little tricky, is put the name of the process into the parentheses here, which means you have to know the name of your host process ahead of time. So if you go back into Visual Studio, you can check that out. Go into the project properties (my aim is just off today) and you can see the assembly name; that's going to be the name of the EXE. I have to make sure that matches what I put in here, and obviously here it doesn't, so we're not going to see that perf counter out in App Insights. If I wanted to monitor both processes, I would just have this entry twice: to monitor both services, you put in two of these entries with a different process name in each. The one thing that is not supported here, unfortunately, is a wildcard: I can't just do "voting*". That's something you normally can do outside of Application Insights, but it's not supported here. So be aware of this one: if you try to use the asterisk to say "I want any process name that starts with voting", it will not work, and you won't get any performance counters coming into App Insights. You have to put in the full process name, and then those perf counters will come in. So that's the basics of how you set this up: as long as you have your sink configured properly and you have these perf counters, you should see all that data over in App Insights. Sounds good. How's that upgrade doing? The upgrade is good, and we've got the application up and running, so you can hit the endpoint. Nice, let's check it out. You'll actually be able to see the app running in Azure. Cool, let's give it a shot. This should give me a 404 if I hit this. Yep, great. Perfect. Okay, so now I should be able to start a new poll. We'll call this one Laptops, and there we go: create our new poll. Let's go check it out. So here's our Azure cluster, and you can see down here it should have created an extra service for us. I actually went ahead and created the Animals one, which is what you're seeing, and you'll also be able to see Laptops. There it is. Nice, so we got that deployed in Azure. It looks like we still have an error state, so we'll go back and check those logs in a bit and see if we can get more information on what's happening there. But the upgrade is rolling on through to upgrade domain number three. Until the entire upgrade finishes, that release job in Azure DevOps is not going to finish, so that'll keep going, and at some point we can
see those logs. Exactly, yeah. Cool. All right, we'll come back to that in a bit. Now, I want to show you some of the work we're doing to make .NET Core applications on Service Fabric a little easier. There were a lot of concepts we talked about: service types, an application manifest, these service manifest imports. It's a lot to take in, and it's very heavily tied into Service Fabric. Even the code itself, if we look at it again, you'll see that right at the entry point you're already tied into Service Fabric: you have the service runtime and you're registering the service. So your code is very heavily tied into the platform, and it's not very portable; I couldn't take this exact same ASP.NET Core application and run it outside of Service Fabric. Some of the work we're doing is to make that a lot simpler and to make applications for Service Fabric much more portable. So I'm going to switch over to this new style of application. This is the exact same voting-data back-end stateful service that we just saw, and if I go into the controller you can see it's the exact same code we looked at a little earlier: I've got a reliable dictionary, I've got the state manager, I've got transactions, everything's in here. I actually didn't have to change any code; I could pick up all the ASP.NET Core code from my old services and dump it in here without changing anything except maybe a namespace or two. The big difference, though: look at the project type. This is just an ASP.NET Core project; it's not even a Service Fabric project in this example. And if I go into the Program, there's no service registration, there's no type registration, there's actually no Service Fabric code in here whatsoever. This is a straight-up vanilla File > New Project ASP.NET Core, on .NET Core 2.0. Done; I didn't do anything else. The only thing I did was, in here, I added "use reliable collections". That's the only thing I now have to add; you can see there's nothing else in here, and I don't even have that voting data service class anymore. That service class that inherited from StatefulService and did a bunch of bootstrapping and setup is gone entirely. You don't need it anymore with this new kind of Service Fabric application, because it's really not even a Service Fabric application anymore at this point; it's just an ASP.NET Core application. I've added a NuGet package that has reliable collections in it, said "use reliable collections", and I've got them available in exactly the same way as before. So I can remove a bunch of the code that tied me to the platform and still run it on Service Fabric. Now, you're probably thinking to yourself: but you still have reliable collections in there, so you're still tied to the platform, right? Fair enough. So let me do this: I'm going to take my local cluster here and stop it, so we're not going to run on it anymore, just to show you how this works, okay?
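To make the contrast concrete, here is a rough sketch of what the entire entry point looks like in this new style. Treat the `UseReliableCollections()` extension as approximate: it stands in for the call from the preview NuGet package shown in the demo, and the exact method and package names may differ from what shipped.

```csharp
// Sketch only: in the new style, the whole Program.cs is a vanilla ASP.NET Core
// 2.x host. The single Service Fabric-related line is UseReliableCollections(),
// a preview extension method (name approximate) that wires up the reliable
// state manager; there is no ServiceRuntime.RegisterServiceAsync and no
// StatefulService-derived class anywhere in the project.
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .UseReliableCollections()   // preview extension from the reliable collections NuGet package
            .Build()
            .Run();
    }
}
```

Compare this with the original service shown at the start of the session, where `Main` had to register a service type with the Service Fabric runtime before anything else could run.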
So that's gone. Now I'm going to take this thing and run it as is: I'm just going to hit F5 and run this ASP.NET Core application. This is not being deployed to a Service Fabric cluster; it's just running on IIS Express. You can see my local cluster is stopped, it's not running, but the ASP.NET Core application I just ran will actually work, running on IIS Express locally. Wait for that to come up; here we go. I ran this once before just to test it out, and I've already got a value in there, so this is actually hitting that reliable collection completely on its own, without running in the Service Fabric runtime. And just to prove it, we'll hit a breakpoint: I put a breakpoint on this Put method, where I grab the reliable collection out. I'll open up my command prompt, do a curl, and let's see if that hits. There we go, hit my breakpoint. Cool. You can see it's running on localhost, and I can step through this; it really is pulling reliable collections out and using them, entirely outside of Service Fabric. So when I say this is truly portable, it actually is. These reliable collections are still transactional and still store state to disk, but because this isn't running on Service Fabric, the state isn't being replicated; you only have one copy. You can take this application and run it anywhere you want; you just don't get the benefit of Service Fabric's built-in replication. That replication layer lives below the application, down in the runtime, so you still need the cluster running to get replication, but I don't need it just to run the thing. So if, for example, I'm developing this application and I want to hand the front-end work to Sudhanva, because I hate doing front-end JavaScript but he loves that stuff, I can just hand it to him. He doesn't have to install the Service Fabric SDK or run a local cluster; he can run it normally, even on his Mac, because it's .NET Core, and iterate on it without installing the runtime. It makes it very easy for other developers to grab it, treat it as a regular, plain web application, do development on it, and then later deploy it to a Service Fabric cluster. Now, what about that pattern we saw earlier of creating services dynamically? Since I don't have a service type registered, how do I do that? You can actually still do it, because there's still a piece here that defines how this application would run on Service Fabric if you deployed it there. This is a new style of describing applications, different from the application manifest XML and the service manifest XML, and a very simplified version of them. We still have the concept of an application and a set of services within it, but the big difference is that you're no longer describing a type; you're describing the instance. In the code I showed you earlier, where I create a service dynamically with CreateServiceAsync and a bunch of parameters, those parameters are what I'm now describing declaratively in this YAML file. I'm saying: here's a container image that has everything in it; this is an endpoint I want to expose; these are environment variables; here are resource constraints for CPU and memory; here I define my reliable collections, et cetera; and I can define a network I want the service to run in. So it's a completely declarative model, with no type registration. The reason this works is that this newer model is all based around containers. Because everything is defined in a container, when I build this application I have to upload the container image to a container registry first. That was the problem I was talking about earlier: if you're deploying EXEs without containers, which Service Fabric can do, you have to figure out a way to package all those binaries up, put them somewhere, and make them available for the cluster to provision. When you're provisioning things inside a container, you use a container registry, like Azure Container Registry or Docker Hub, to solve that problem, and everything's contained in that container (I feel like I'm saying "container" a lot). So when you deploy, there's nothing to upload to the cluster anymore, and that's why this new style works: there's nothing to provision. When I deploy this to Service Fabric, I really only deploy this one file, the YAML file (or JSON format, either way). Once Service Fabric gets it, it downloads the container and deploys it. It's a similar style to how you'd do containers on Service Fabric today, but drastically simplified, and you get all the benefits of stateful applications and reliable collections, including replication, without tying yourself down to the platform. That's the neat thing here. You can see I have my Dockerfile here to build the container and upload it somewhere. And all of this is done for you in Visual Studio, like the scaffolding for the project? Yeah, absolutely. Visual Studio actually has tooling for this, which unfortunately I don't have installed
at the moment, but effectively you get the same style of project, looking very similar to this: you set up an application, put in as many services as you want, and the tooling does the scaffolding, the container building, and the container deployment and registration for you, and then you can deploy it to your local environment or out into Azure. Now, why did we go and do this? Let me switch back and show you a little more about some of the cool stuff that's coming up. Here we go. So here's the difference in what we looked at. There are effectively three ways you can write applications at this point. Docker Compose is just a way to support existing applications: if you're using Docker Compose today, you can always deploy those to Service Fabric. But the two I want to focus on are the ones below. The application/service manifest model is what we showed you initially with the voting application: that's the style where you define types, use the Reliable Services framework, and derive from base classes. It gives you a lot of low-level control over the Service Fabric runtime and access to the platform primitives, which is very powerful, but it ties you down to the platform and is a little more difficult than writing a plain ASP.NET Core application. So we created this resource-based format, where you're very loosely coupled: you're not tied into the runtime lifecycle at all, and you can use any language, because there are no base classes linking you to the runtime. One of the motivations for doing this (I'll skip through here quickly, because we saw a lot of it) is that we now have multiple environments for Service Fabric where you can run your applications. We've had the Service Fabric clusters in Azure that we showed you today, as well as standalone Service Fabric, which you can run anywhere, including a dev machine, which is what I was using. But there's a new environment out in preview now: Service Fabric Mesh. The Service Fabric Mesh environment is a fully managed, serverless platform for running applications. It's a multi-tenant environment where everything runs in containers, and because it's multi-tenant, it's a little more restrictive than having your own dedicated cluster. So we designed this new resource model around that environment and made it container-based and universal: once you write an application using these resource files, you can deploy it anywhere Service Fabric runs; any one of these environments can run those applications. They're extremely portable, and you can even run them outside of these environments. The Service Fabric Mesh environment is of course only available in Azure, that's an Azure exclusive, and as I said, it's the fully managed, serverless one, so there's no cluster management to do. You saw the Service Fabric Explorer that we were showing? Yeah, it wouldn't be there.
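The resource files that drive this model look roughly like the following. This is a hedged sketch based on the Service Fabric Mesh preview schema; the field names follow that preview format, and the specific values (registry, image name, port, network name) are illustrative, not taken from the demo.

```yaml
# Sketch of a Mesh-style application resource file (preview schema; values illustrative).
# Note: this describes an instance directly; there is no "type" to register or provision.
application:
  schemaVersion: 1.0.0-preview1
  name: VotingApp
  properties:
    services:
      - name: VotingData
        properties:
          description: Stateful voting back-end service
          osType: Windows
          codePackages:
            - name: VotingData.Code
              image: myregistry.azurecr.io/votingdata:1.0   # container image pulled from a registry
              endpoints:
                - name: VotingDataListener
                  port: 8080                                # endpoint to expose
              environmentVariables:
                - name: ASPNETCORE_ENVIRONMENT
                  value: Production
              resources:
                requests:                                   # CPU/memory constraints
                  cpu: 0.5
                  memoryInGB: 1
          replicaCount: 1
          networkRefs:
            - name: VotingAppNetwork                        # network the service joins
```

Because everything the service needs ships in the container image, deploying means handing Service Fabric just this one file; the cluster pulls the image and runs it.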
Yeah, you wouldn't see that: you don't see virtual machines, you don't see the nodes, and you don't have to worry about managing any of that infrastructure. All of that is taken care of for you under the covers; you focus entirely on the application specifics, writing your application and deploying those containerized applications into that environment, described by those very simple YAML or JSON files. The goal is really to simplify application development and the operational work of managing those applications. And even the logs we saw would be drastically simplified? Yeah, you wouldn't get quite as much detail there as we saw; you'd just see the application logs and container logs, so it would be drastically simpler. How's that upgrade doing? Still going, still going. Any logs yet? Not yet. And Service Fabric Mesh is in preview today. All right, so I think with that we'll wrap it up here. If you want to check out this demo, I'll be posting it to my GitHub account fairly soon; that's up at the top here. If you want to start playing around with Service Fabric Mesh, that's available in preview: go to our GitHub repo at Azure/service-fabric-mesh-preview. It's a public preview, you can get started right away, and there are some cool demos and samples you can run and deploy into the Mesh environment. Of course, download the Service Fabric SDK if you want to do Service Fabric development in general, and visit us on GitHub: Service Fabric is open source, and we're continuing that open-source effort, moving all of our development processes out onto GitHub. Please come visit us at Microsoft/service-fabric, open up issues, and play around with the code. Awesome, cool.
Yeah, thanks a lot, guys. All right, thanks for watching. Thank you. Hey, thank you guys, that's awesome. Thank you, all right. So I'm Beth Massi, the .NET product marketing manager, as well as the executive producer of the show. And I'm Cameron Thompson, and I've been the director for .NET Conf so far. We just wanted to say thank you so much for watching the first two days here in the Channel 9 studios. I want to give you a few stats. The longest watch time: Bermuda comes in at 621 minutes; you guys are watching this show with real consistency. Number two was Barbados at 204 minutes. Of all places; I mean, geez, I'd think we'd be on the beach or something. Most views came from the United States, then the UK, Canada, and India. Thank you guys, we love you. Our local events: we had 40 watch parties yesterday watching the keynote live, which was amazing. Check out the #dotNETConf hashtag; we have a ton of pictures of people having a great time yesterday. And go to the .NET Conf local events page: there are 151 total events, all running through October 31st, so you can attend a live event and learn more about .NET. And we've got more coming up on Twitch; we've got a whole day three going, right? Oh yeah. Before we get into day three, I just want to give a big shout-out to our crew here at Channel 9. It's been awesome to work with everyone. Michael and Neil have graciously been super helpful; Matthew Pugh is the guy behind the camera right now; we've got Caitlin doing all of our back-end stuff and helping us out; and people like Christiana and Jof have been a big help. Thank you all. But the party isn't over yet. We still have a bunch of content: 24 hours, dude. We're going all-nighter in the back over here in Studio A; we don't end until 24 hours from now, 5 p.m. tomorrow. We're doing this so that all of you around the world don't have to miss anything. You can watch some live shows in your local time zone. We're going to kick it off with Troy Hunt in Australia next. Yeah, it's going to be great. Jeff Ritz has been the mastermind behind all this; Jeff Ritz and Javier are back there, and they're ready for you. So thanks again for watching .NET Conf. Amazing show, everybody. Thank you. Thank you, guys. Bye.