Okay. Hello. Welcome to the State of .NET. Contrary to popular belief, unfortunately, I am not Scott Hunter. My name is Glenn Condron. Scott wanted to be here today; he is the Director of Program Management for .NET at Microsoft. Unfortunately, he had an emergency at the last minute. He was at the airport, ready to come here, and he had to leave. So I am going to do my best to fill his shoes and present in his stead. Hopefully I'll be able to do it as well as he would have. So that is my name and my title. Let me plug this in, because not plugging in a laptop is always a terrible idea, regardless of how long your talk is going to be. I believe I was told that this slide has to be somewhere in every slide deck of every presentation you're ever going to see: please be aware that there are fire exits, and walk humbly, and all of those things. Now, let's talk about .NET. What we want .NET to be, and what it is, is a platform to build anything: desktop applications, web applications, cloud, mobile, gaming, IoT, AI. All of these workloads are things we care about, things that you can do and that people have been doing. We've been making investments for years, lots of years; there are some slides later that talk about how long .NET has actually been doing this. What we've been trying to build is a unified platform designed to let you write similar, idiomatic code for any of those workloads you care about. So in this diagram, up the top you have each of the workloads, the application programming models, and then you have a layer of class libraries, .NET Standard class libraries. .NET Standard is new; I'm going to talk a little bit about it and hopefully make sure everybody understands what it is. Think of .NET Standard as the way you get the same JSON parser on your IoT device as you do in your cloud application.
So the shortest explanation of .NET Standard is: if you're making a class library, make it .NET Standard. Pick a version, the one that lets you run everywhere you want. We'll talk about it a bit more in a minute. Then, underpinning all of those libraries is a bunch of infrastructure. You get the same languages across this whole stack, C#, F#, VB, as well as the twenty-odd other languages that can theoretically compile to IL. You have a common set of compiler infrastructure and runtime components. And on the side, you have common tools: you can use Visual Studio for all of these things, you can use Visual Studio for Mac, you can use Visual Studio Code, you can use the CLI. One unified world is what we're trying to get to, with varying degrees of success, for those of you who have been following us for a while. So .NET Standard tries to let you share code, but also share binaries, literally the same binary, across all of these workloads we care about. It's a formal specification of .NET APIs. How many of you are developers, or at least would have identified as a developer before you evolved into your current glorious managerial form? Fantastic, pretty much everybody. So the easiest analogy for .NET Standard is that it's an interface: an interface for a set of APIs that exist in .NET. And there are various .NET implementations that can implement that interface. .NET Core implements some versions of the .NET Standard interface, .NET Framework implements some of them, Mono implements some of them, and so on and so forth. There's a document online; if you search for .NET Standard on docs.microsoft.com, there's a big table with versions of .NET Standard across the top and all of the different runtimes down one side.
You pick which box you want to fall in, pick which platforms you want to support, and you pick a .NET Standard version that works. And then you target it. That controls what API surface you see in Visual Studio when you're writing code: you won't see APIs that don't work on one of the platforms you want to target. If you really do need an API and you know it exists somewhere, that's when you'll need to start writing platform-specific forks of your code. So that's how we're trying to let you write a single class library, a single binary, that runs across any workload you want. If you were writing a YAML parser, because it's not a cloud application unless it does YAML, then you want to target a .NET Standard version that works on all the platforms you care about for cloud. Or maybe you believe YAML should be the only configuration file format that exists anywhere in the world, in which case you want it to work everywhere. You want people to say an AI isn't an AI unless it's got some YAML. Right? Makes sense? Cool. So let's talk about the APIs in .NET Standard 2.0. There are a few versions of .NET Standard; we basically versioned it retroactively. When we shipped .NET Standard as a concept, there were already several versions of it representing the APIs in the history of .NET. Then we shipped 2.0, which is the new one, with far more surface area than the previous versions. These are some categories and types of APIs: data, XML, serialization, networking, all stuff you've probably used or thought about in the past. And there are some interesting things in here, because some of this we never had in the earlier versions of .NET Standard, which caused people porting code some pain.
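To make that targeting step concrete, here's a minimal sketch of what an SDK-style class library project file looks like when it targets .NET Standard 2.0:

```xml
<!-- A class library that runs anywhere .NET Standard 2.0 is implemented:
     .NET Core 2.0+, .NET Framework 4.6.1+, Mono, Xamarin, and so on. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```

The `TargetFramework` value is the lever: pick `netstandard1.x` for broader reach with fewer APIs, or `netstandard2.0` for the larger surface area discussed above.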
So, how many of you have tried to port full-framework applications over to .NET Core at some point? A few of you. Yeah. And did you hit missing APIs? Pre-2.0, you'd hit "my code calls API X and it doesn't exist and I don't know what to do and I can't do this." There are a few people nodding. With .NET Standard 2.0 there are 20,000 more APIs than in the previous versions of the standard, so it's significantly easier to take current .NET Framework code and make it .NET Standard compliant. We actually downloaded every NuGet package on NuGet, opened them, scanned the assemblies, looked at all of the APIs they called, and checked whether those APIs were in .NET Standard. 70% of them were. So 70% of everything on NuGet.org will just work with .NET Standard 2.0. The rest call some APIs that we haven't yet made part of the standard, for whatever reasons. Please don't write the code that downloads all of NuGet; it's very, very demanding on the NuGet servers, we found out. So, this was our diagram earlier. This is the world, this whole unified platform. Let's focus in on the .NET Core part of it again. .NET Core is the cross-platform, open source implementation of .NET. It implements versions of the .NET Standard we just talked about, and it's suited to, and targeted at, cloud-native, cross-platform, IoT, and AI workloads. At the very beginning, though, for those of you who have been following .NET Core for a long time, it really started out with web and cloud; it was very targeted. We've been slowly building out to more and more of those workloads up the top as we've been progressing and as things have become important.
But it is built on the same set of common libraries as everything else: the same runtime components, the same compilers, the same languages. And it has been growing. This slide is a little bit old, I believe. As of February 2018, with just the web workload of .NET Core, just people making web apps with it, there were half a million active .NET Core developers. And there has been double-digit percentage month-over-month growth since we shipped 2.0 in August: that number grows by more than 10% every month. I think the last time I looked at this data it was in the mid-700,000s, and we expect we'll pass a million individual developers using .NET Core before the end of the year. "Active" here means you've done it more than once: you have to have created a project and then opened it again later, not just created a project and then deleted it. That doesn't count in this number. The number gets much bigger if you count everybody who has just created a project; that's millions. And at the same time, we're talking about .NET Core here, but I believe the .NET Framework numbers are growing as well, not at such a large rate, but .NET in general is growing, and .NET Core very, very fast. Part of the reason may be that it is very, very fast. This is data collected from the TechEmpower benchmarks. These numbers, I believe, are specifically from the plaintext benchmark, which measures how fast the server can return text from a request: what is the optimal thing you can write to get text back? It's not even doing anything. There's no data, there's nothing. It's just measuring the overhead the minimal amount of your framework gives you, because the lower this number is, the less headroom you've got.
Once you put data access in, this number tanks for everybody, because now you're doing network calls on every request. So, as of the last round, this is roughly where we're at: 1.71 million requests per second on ASP.NET Core. The Node sample is 0.43 million; a Java servlet is 0.96. We're not the fastest in here; there are a few things faster than us. But this example from Raygun is telling: with the same size server, they were able to go from 1,000 requests per second per node with Node.js to 20,000 requests per second with .NET Core, on the same infrastructure. And you can go check out the TechEmpower benchmarks and look at all this data. We believe that with .NET Core 2.1 we're probably going to be about 15% faster again. In fact, these notes come from Scott Hunter, in all caps: we will be the fastest mainstream web stack on the planet. That is the motivating factor, the mandate that has been coming down. We want to be the fastest real stack that you can use. There's always going to be someone's toy framework that no one really uses, that is super fast and written in assembly or something, and those may always exist. We want to be a real framework that people really use for real things today, and we want to be the fastest one of those. That's what we're trying to do, and there'll be a fair amount of stuff in this deck about how successful we are at that and the sorts of things we're doing. One of the interesting things with that TechEmpower benchmark, actually: there's a quote here from the TechEmpower team calling us one of the most interesting web development platforms.
As part of that statement, they compared rounds. The way TechEmpower works is they do a round, and some months later they do another round; everybody updates their stuff to the latest thing in between. In round 11, on Linux with Mono, ASP.NET could do 2,120 requests per second. By round 13, with ASP.NET Core, we did 1.8 million requests per second, which is about an 85,900% increase in requests-per-second performance; roughly 859 times. And we do a lot of work to make that happen. Here are some examples of the stuff we're doing. In CoreFX (the 2.1-specific stuff is in gray, so this is on top of that quote I just talked about) we have new ways of getting safer, faster memory access with Span&lt;T&gt;, Memory&lt;T&gt;, and friends. These are ways to do allocation-free, or at least lower-allocation, operations over data. Imagine the example I have to give here: your request data comes in, and we're going to make an HTTP context, a type representing this request. Every time we make a string, there's an allocation; that uses some amount of memory. If we make 50 strings, that's 50 strings' worth of memory. If we make 50 strings and they go away after your short request, eventually the garbage collector has to come along and clean those up. Every time the garbage collector runs, everything stops. If you're doing 2 million requests per second, then a millisecond spent in garbage collection is a significant number of requests you can no longer handle. So we, at our layer of the stack, are trying to remove as many allocations as possible.
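As a rough illustration of the idea (this is a made-up sketch, not the actual Kestrel code), slicing a `ReadOnlySpan<char>` lets you pick apart a request line without allocating intermediate strings:

```csharp
using System;

class SpanSketch
{
    static void Main()
    {
        string requestLine = "GET /index.html HTTP/1.1";

        // AsSpan/Slice create views over the existing string's memory;
        // no new strings are allocated while we carve it up.
        ReadOnlySpan<char> span = requestLine.AsSpan();
        int firstSpace = span.IndexOf(' ');
        ReadOnlySpan<char> method = span.Slice(0, firstSpace);
        ReadOnlySpan<char> rest = span.Slice(firstSpace + 1);

        // The only allocations happen here, when we materialize for display.
        Console.WriteLine(method.ToString());                          // GET
        Console.WriteLine(rest.Slice(0, rest.IndexOf(' ')).ToString()); // /index.html
    }
}
```

With the old `string.Substring` approach, each piece would be a fresh heap allocation for the garbage collector to clean up later; with spans, the work stays on the original buffer.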
So we use as little memory as possible, to avoid garbage collection and long pauses, which also makes us significantly faster from a requests-per-second standpoint, and then gives you a really big ceiling to allocate to your heart's content. There are a lot of JIT improvements, and there is profile-guided optimization, which lets us ship better-optimized native code. There's a great blog post you should go look at; it's on the .NET blog, if you search for "performance in .NET Core 2.1". Here's an example from it: a sample application that makes lots of outgoing HTTP requests. (I can't see... oh, the display is extended. That seems suboptimal. All right.) So this is showing outgoing HTTP; the benchmark makes lots and lots of outgoing HTTP calls, hundreds of concurrent requests. On .NET Core 2.0, this took on average a couple of hundred milliseconds, and there were about 1,250 Gen 0 garbage collections and 312 Gen 1 garbage collections. In 2.1, that drops to 17 milliseconds for the same amount of outgoing HTTP, and there are no Gen 1 collections: none of the memory moved over to Gen 1, where it takes longer before it gets collected. And if you start going through the post, there are just massive numbers of microbenchmarks: String.Equals got faster and allocates less, StringBuilder got faster and allocates less, we generate better assembly code for various patterns.
Any of you who want to geek out and look at all these nanosecond improvements across the entire stack, this is a great blog post to go read and think about. And a lot of these improvements, the JIT improvements for example, you get pretty much wherever you are: across that unified stack we talked about, all the parts that are common, they'll just start happening for you. String.Equals will just suddenly be faster in the future. A lot of that, some of it at least, is because we introduced these new Span and Memory-of-T types and then modified our own APIs to use them, so they allocate less memory, so they cause less garbage collection, so you get more speed. Or we get the JIT to generate more efficient code from your code, and it becomes faster. Lots and lots of ways; the blog post details them all. It's kind of amazing. So these are our themes for .NET Core 2.1: better build performance; closing gaps in ASP.NET Core and EF, just making them better, obvious things we should have done and haven't; improving compatibility with .NET Framework; some GDPR and security work; some features focused on microservices in Azure; and a faster internal engineering system, which is internal work we had to do to make our CI and builds better so we can get you builds faster. There's a link here to the previews. Have any of you tried the .NET Core 2.1 previews? A couple of you? Great. Try the preview for me. Tell me if it works.
More importantly, if you try the preview and you cannot get it to work for some reason, or you hit some point where you're like, "I hate you, Microsoft, this is too hard," go make an issue for that too. Put it on the Home repo. Tell us about it. Our previews are only valuable if people are trying them and telling us what we're doing right and what we're doing wrong. If it gets to an RC and an RTM and then you try it and go, "well, this new API doesn't do what I want," we're like, "oh, we hadn't thought about that, but it's too late now." The easiest and best way to make sure all of our new stuff is suitable for you is to try it early and tell us, and let us try to respond to that feedback. We need as much of it as we can get. Talk to us, if you can. So let's talk about build tooling and build improvements. The big purple bar is .NET Core 2.0, the blue bar is the 2.1 preview, and the green bar is what we think it'll be at 2.1 RTM. "Web small" is a fairly small, file-new toy web application; "web large" tries to simulate a big, real web application doing lots of stuff. That 69.9 is, I believe, time in seconds. So web large in 2.0 took 70 seconds for an incremental build, presumably a build with no changes made, and it goes down to 22.5 in preview 1, and down to 6.8 in green. Faster and faster and faster, as fast as we can make it. Then here is a big laundry list of things we're doing. Some of this stuff is really important. Span&lt;T&gt; and Memory&lt;T&gt; we've talked about. There's Tensor&lt;T&gt;, starting to build AI concepts into the framework.
We have the Windows Compatibility Pack. Compatibility packs are basically packages you can install that give you more of the APIs you had on desktop .NET Framework. They typically only work on Windows, so it lets you take a .NET Core app and say, "I'm okay with this only working on Windows, because I need these APIs to port; I'm going to install the compatibility pack and keep working." They're not part of Core itself, because Core has to be cross-platform. Sockets and high-performance networking in Kestrel: Kestrel is our web server, the thing that accepts requests and does its job. Historically it was built on libuv. We're doing a lot of work to get it working on sockets, and it's actually faster using sockets than it was using libuv. So that'll be a thing. HttpClient: that's the outgoing HTTP we showed earlier that was significantly faster, roughly 10 times faster to make HTTP calls from your application to some other HTTP endpoint. What we did there: HttpClient was effectively a thin wrapper over native code, WinHTTP on Windows and libcurl on Linux, and we replaced that with all-C# code calling just the system APIs; we just wrote it in .NET. It's now about 10 times faster than when it was invoking the native stacks, which is great. And then we did some more crypto work, and we're going to start doing NuGet package signing; we did some of the plumbing work for that. Minor-version roll-forward is interesting. This is the ability whereby, if you install 2.1 on a server and then later remove 2.0, the 2.0 apps will start running on 2.1. If you leave 2.0 installed, they'll stay running on 2.0. So it gives you the ability, at an ops level, to force people to roll forward, if it's going to work, without necessarily redeploying.
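For reference, roll-forward behavior on the 2.x host is controlled per-app through `runtimeconfig.json`; the sketch below reflects my understanding of the setting names, so treat it as illustrative rather than authoritative:

```json
{
  "runtimeOptions": {
    "tfm": "netcoreapp2.0",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "2.0.0"
    },
    "rollForwardOnNoCandidateFx": 1
  }
}
```

With a value of 1, if no 2.0.x runtime is on the machine, the host is allowed to pick a higher minor version (such as 2.1) instead of failing to start.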
People like Azure App Service care about that a lot, because they want to keep the number of versions installed on the machine low while ideally not breaking applications. So once 2.0 reaches end of support, they can install 2.1, and applications won't necessarily immediately break, at least not every application in the world, which is what would have happened before. And then there's the shared framework concept for ASP.NET Core in the SDK. Historically we always shipped ASP.NET Core and .NET Core as packages on NuGet.org, and you picked the packages you wanted. It's a super compelling story: if I want XML, I add the XML thing; if I want YAML, I add the YAML thing. You pick what you want, you only pay for what you use, everything's amazing, except when you have to reason about the 3,000 packages that make up the .NET ecosystem and decide which ones you want. At that point it becomes a little overwhelming for most people. And it's super slow to download them all. Talking to my friends back in Australia: you may not know this, but Australian Internet is implemented by printing out TCP packets, taping them to the backs of turtles, and sending them to America. Although I think recently they upgraded, so now they use sharks, to make sure the Internet in Australia is as dangerous as everything else there. When you're in a country like that and you need to do a package restore and download a few hundred packages, it can take a really long time, and it significantly impacts your experience. So for several versions we've been trying to get to the point of giving you what we know you probably want, all the time, without you having to think about it, while still giving you the choice to pick and pay for exactly what you want.
Because ideally what you want to do is say "I want 2.1, go," and have it be as optimized as it can be; don't pull in stuff I'm not going to use. That's what we're trying to get to by making ASP.NET Core a shared framework in the SDK: it'll just be installed when you install .NET, it'll be optimized for the machine it's on, and away you go. We still give you the option of doing things like standalone deployment, where you carry the CLR with you, and of picking individual packages, so you keep all the choices, but the happy path is: "I want .NET 2.1, make it fast, make it good, go. I don't care; a hundred megabytes doesn't worry me, I'll install it." All right. And then there are a bunch of .NET CLI improvements and the inner-loop improvements we started to talk about. Global tools is a very cool feature. For any of you who've used Node (how many of you use Node? probably a few of you, I assume), this is kind of npm install -g for .NET. You can build a .NET Core console application, publish it to NuGet, and then you can do a dotnet tool install and it'll be on your command line, on your path, and you can go run it. It lets us ship more tools faster, and it lets you ship more tools faster. It's kind of good. And standalone app servicing is interesting. How many of you are familiar with standalone as a concept in .NET Core? A couple, like one or two. Okay: a standalone .NET Core application takes the CLR, absolutely everything you need to run the application, and puts it into the deployment, so when you publish you get a folder that has the CLR in it, everything you need to run your application, with nothing installed on the machine, as long as you're on a version of Windows or Linux supported by the actual runtime.
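The two flows just described look roughly like this from the command line; `dotnetsay` is the sample tool the docs use, standing in here for any tool package you might publish:

```shell
# Global tools: install a console tool from NuGet onto your PATH
# (requires the .NET Core 2.1 SDK)
dotnet tool install -g dotnetsay
dotnetsay "hello from a global tool"

# Standalone (self-contained) publish: the output folder carries
# the runtime with it, so nothing needs to be installed on the server
dotnet publish -c Release -r linux-x64 --self-contained
```

The `-r linux-x64` runtime identifier is just an example; you'd pick the RID matching the machine you're deploying to.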
It puts everything in that directory, and then you run it, and it's isolated forever. That's great. Everybody thinks that's great, until you deploy it to a server and you're like, "well, now I've got 10 standalone applications, and it would be nice to roll them all over to 2.0.x because there's a security fix I want," and we're like, "well, they're standalone; go redeploy them all. That's what you asked for." So now we're trying to build in support for those people to say, "okay, we did kind of want standalone, but we also kind of want to patch them all at the same time; can we have both?" That's what we'll try to do. Then, ASP.NET Core 2.1: we've been working on SignalR. By default, when you do File > New for an ASP.NET Core application, it now sets up HTTPS on your dev machine and trusts the development certificate, so you're doing HTTPS from the start; you can turn it off when you do File > New. GDPR compliance: how many of you have had to deal with GDPR compliance recently? A few of you. If you do File > New in ASP.NET Core now, it automatically has the banner saying "hey, we're going to use cookies, click OK," plus the configuration for you to choose when and to whom you show that, and it lets you customize the UI to fit. It's just pre-set-up for you to be able to handle GDPR consent. For those of you who don't know anything about it, well done; basically, it's the banner you've probably seen a bunch of times saying "hey, we're using cookies, is that okay?" We've also made a lot of improvements for web API conventions; there's an [ApiController] attribute to get better default API behaviors. And there's HttpClientFactory. Ryan and I are going to talk about HttpClientFactory a fair bit in our ASP.NET Core deep dive, which I think is on Friday.
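The basic HttpClientFactory registration looks like this; the "github" client name and base address are just illustrative choices for the sketch:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public static class HttpClientRegistration
{
    public static void Register(IServiceCollection services)
    {
        // The factory owns and recycles the underlying message handlers,
        // which avoids the classic socket-exhaustion and stale-DNS problems
        // of either newing up HttpClient per request or keeping one forever.
        services.AddHttpClient("github", client =>
        {
            client.BaseAddress = new Uri("https://api.github.com/");
            client.DefaultRequestHeaders.Add("User-Agent", "sample-app");
        });
    }
}
```

Consumers then ask the injected `IHttpClientFactory` for `CreateClient("github")` instead of constructing `HttpClient` themselves.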
There's a lot of good microservice connection-resiliency stuff you can get with it over HTTP. And then, if any of you are using IIS: we're working on a new in-process hosting model for IIS. Today, IIS is just a proxy; ASP.NET Core runs as its own process and IIS forwards requests to it. We're going to have a mode where you can either run that way or run in-process, and when you're in-process you get roughly six times the perf, because you're not doing that forwarding across processes. Then Razor improvements, MVC improvements, build-time Razor compilation, and UI as a library. If any of you have ever opened the authentication controller code in your solution and gone, "my god, my eyes, the goggles, they do nothing": we'll just move that into a class library, it'll be fine. The actual feature here is that you can put your Razor UI into a package, add that package to your application, have it appear, and then customize it. We used it for auth: you just add the auth packages and you get a login screen, because every login screen looks the same everywhere, right? You get some customizations and can tweak it, but otherwise it's always an auth screen. If you don't want it, great, don't use it. But if your thing looks like the one in our package, great, you never have to think about it again; you just add a package and away you go. If any of you are in charge of owning the X UI for all of the applications in your organization, the shared footer or whatever, that's probably something you want to check out. And then we're porting the webhooks libraries that exist today over to Core, to make sure they all work and have a good, idiomatic ASP.NET Core experience. And then EF does a bunch of stuff. I worked on the EF team.
I don't even know what half of these things are, but they sound great. Lazy loading is good. Lazy loading is where you say "get me all users," and then when you start dotting through your domain model, say users have addresses, after you've grabbed your entity back from Entity Framework (EF is an ORM, if you haven't used it), it'll automatically execute queries to go fetch the related data on demand. Previously you had to explicitly say which related data you wanted when you wrote the initial query. Some people hate lazy loading with the passion of a thousand suns and say it's the worst thing in the world; some people like it. So pick your camp. LINQ GroupBy translation is interesting. The way EF Core works is it takes your LINQ expression tree, translates it to SQL (or whatever the language of your store is), and executes it on the server; anything in the query that it couldn't translate to SQL, it runs in memory on the client. Being able to translate GroupBy into the language of the server means you get more efficient GroupBys. Over time, as EF translates more and more to the server, things get more efficient, but you never see it and never care, because you're just writing a LINQ query and it just works. The last one there is super interesting; I'm pretty sure I told someone last night that it wasn't a thing yet, and I was totally a dirty rotten liar: using Entity Framework to talk to Cosmos DB. It's great. I think it might have been you. So you should check that out if you're interested in using Cosmos DB against a domain model. And now, as is a pretty common theme with a lot of the talks here: contributions. These are all of the places in the world where we have accepted contributions from the community. It's pretty good.
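Circling back to lazy loading for a moment, here's a sketch of what opting in looks like in EF Core 2.1. The `BlogContext`/`User`/`Address` model is invented for illustration, and the proxy support comes from the `Microsoft.EntityFrameworkCore.Proxies` package:

```csharp
using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

public class User
{
    public int Id { get; set; }
    // Navigation properties must be virtual so the generated proxy can
    // intercept access and issue the follow-up query on demand.
    public virtual ICollection<Address> Addresses { get; set; }
}

public class Address
{
    public int Id { get; set; }
    public string City { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<User> Users { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options
            .UseLazyLoadingProxies()                // opt in to lazy loading
            .UseSqlServer("<connection string>");   // placeholder connection
}
```

With that in place, touching `user.Addresses` triggers a query the first time you access it, instead of requiring an explicit `Include` in the original query.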
Thousands of people from all over the world. Samsung is embracing .NET because it is a completely open source project. Ben from Illyriad Games: if you've ever watched perf check-ins coming into CoreFX, you've probably seen Ben's name floating around. He's building a game with Illyriad Games using .NET Core, and every time he finds anything that's remotely slow, he goes and fixes it and sends us a PR, and we check it in and it makes things faster, and it's great. So, because .NET is a completely open source project, there are more than 19,000 contributors from 3,700 companies, all contributing to .NET Core and our ecosystem of open source projects. More than half of the contributions are coming from outside Microsoft; I'm just checking some of my notes here. And I think in this case "contributor" doesn't necessarily mean you sent code; it may just be participating in GitHub issues. Just going and making a good bug report is amazing. I worked on Entity Framework 6, which was the first version of full Entity Framework that was open source. Lots of people internally were watching us; it was one of the very early things we did as open source. So I got the question a fair few times: what is the biggest difference now that you're open? I've personally never worked on closed source software in my seven years of working at Microsoft; I've only ever worked on open source. And the answer at the time was: our bug reports are now 300 times better, because everybody goes and looks at the code before they submit a bug report. So now our bugs are amazing. We get bugs that say "this line of code is broken," and we're like, "yeah, it does look broken. Thanks." As opposed to "hey, I click this button and this thing moves," and we're like, "okay, let's go find out why that is."
So contributing by just making good issues is still contributing. Right. You don't have to be sending us pull requests and building our code and doing stuff. Just talk to us. Tell us about your scenarios. Tell us that the previews don't work. Tell us that we don't know how to write a Linux installer and we're dumb. Whatever. Right. Tell us about your crazy, ridiculous legacy COBOL thing that you're trying to integrate with. Right. Sounds great. Okay. So, continuing the open source theme, there's a quote from Jim Zemlin, the executive director of the Linux Foundation: there are tens of millions of open source projects; invest in the ones with a sustainable ecosystem, and .NET is one of those. This is from 2017. The Cloud Native Computing Foundation released an analysis of the 30 highest velocity open source projects today. And the one with the arrow pointing at it is .NET. All right. The ones up the top right are, like, the Linux kernel and Chromium and stuff like that. Kubernetes is that green bubble up there. The number of authors is the size of the bubble. The further a project is to the right, the more commits it has; the further up it is, the more PRs and issues and stuff it has. So the further up and to the right you go, the healthier the project: stuff is happening, there are commits, there are PRs, there are issues. That's basically the way you read that chart. Cloud Foundry is over here; it has, like, way more commits, not as many issues and PRs. Things like that. Makes sense. Cool. So let's talk about .NET's open source journey. Way back in the beginning, in the before times, in 2001, there was ECMA 335. Were any of you involved in .NET at that time? Yeah, that guy. Yeah. Okay. So a bunch of you guys are old school .NET guys. Right. It's great.
Then we released... so ECMA 335 was the standard for, you know, the Common Language Infrastructure, the thing that all the .NET languages compile to, and then we had .NET 1.0 for Windows released in 2002. About the same time, the Mono project began as an open source project, because the ECMA spec said that the CLI should be able to run on other operating systems and architectures, and we worked with partners like Intel on it, but we only shipped one for Windows, because we were Microsoft at the time. Right. And so Mono was started to try and bring that to other places. Around 2008, ASP.NET MVC, the web development stack, was released on CodePlex as open source. It was kind of a big deal at the time. I want to say that this 2008 date is when we were still doing over-the-wall open source. The source was there, but it was, like, copied and pasted from an internal repository. We weren't actually developing in the open; we were just making the source available to browse and to accept really easy fixes against, and then we evolved over time from that into actually developing in the open, basically between these 2008 and 2014 dates. I want to say that's true. And then Entity Framework came open source somewhere in there, and various other things started to happen. The dominoes started to fall, I guess you could say. And then in 2014... my notes here say that in 2014 hell froze over and pigs started to fly, and at Microsoft's Build conference, Anders Hejlsberg, the father of C#, released Roslyn, the open source C# and VB .NET compiler. Then in November the .NET Core project began in the open. All right. The .NET community starts to get excited. .NET Core is cloud native. It's fast. It's open. It's all these cool things that I've wanted for so long. All right. You can start saying things like "hyperscale" because it was cool to say.
And then in 2016, Mono came home. We gave Miguel a big hug, and we bought Xamarin, and Miguel joined us in DevDiv, and Mono is officially supported and contributed to by Microsoft, and the Microsoft community and the Mono community all came together, and it was glorious. And at the same time, Mono joined the .NET Foundation. And in August 2017, .NET Core 2.0 was released, and it was after August 2017 that we started to see more than 10% growth every month in people using .NET Core every day, like more than once or twice. All right. And what is this .NET Foundation thing? Well, the .NET Foundation is our center of gravity for all of our open source and the ecosystem of .NET. All right. It provides support. If you're going to go make an open source .NET project and you want to get, like, a cert to Authenticode-sign the assemblies, and make sure you're doing NuGet right, and all those sorts of things, and make sure the guidance for your open source project is appropriate, the .NET Foundation will help you with all of that. Right. It has more than 60 projects and hundreds of repositories under its stewardship. It provides protection, support, services, best practices, spreading of knowledge, introductions to the members of the .NET Foundation, all of those things. And it includes a bunch of really cool projects. If you notice, there's a little white piece of text at the very top of that slide that says Steeltoe. Right.
So, Steeltoe. Is everybody here... who here is not familiar with Steeltoe? Let's try it that way: does anybody not know what Steeltoe is? Excellent. Very well done for being honest and telling me you don't know what Steeltoe is. There's a bunch of other people who've got no idea, but they're like, I'm not putting my hand up, I'm not going to be first. So Steeltoe is about taking your .NET Core or your .NET application and giving you integration with, you know, Hystrix, with Spring Cloud Config Server, with Eureka, giving you distributed tracing with OpenCensus. There's a talk later on today with David Tillman; he's going to be showing OpenCensus and distributed tracing. And it gives you configurable health endpoints, like actuators in Spring Boot. Right? If you're going to take your ASP.NET application and you're going to put it on Cloud Foundry, add Steeltoe, then go deploy it, and your experience will be better. Right? It gives you all of the integration points and implements all of the things that you would want. Right. And then .NET on Cloud Foundry: .NET works pretty damn well on Cloud Foundry even without Steeltoe. You go File, New; you do cf push; it'll run out of the box. Right? The .NET Core buildpack is in by default; I'm pretty sure I've never had to install one, except when I was trying to do crazy preview stuff and had to build it myself. And it automatically containerizes your workloads. You can SSH into, like, that app over there, that one that's red for some reason. Right? And you use the same debugging and tools. Once again, we had that slide at the beginning about the unified stack, right? Common tooling, common libraries, common things. Right? I want you to be able to have a familiar programming experience and a familiar tooling experience building any type of application that you want to build and deploy, especially deploy to the cloud. So you open up
Visual Studio, you start developing code, and it should feel the same whether you're going to deploy to Cloud Foundry or you're going to go somewhere else. Everything's the same. Right? In the far future, you know, if we start doing other things, like gRPC and things like that, same thing. We want you to be able to write a .NET application that is a .NET application, that feels like a .NET application; deploy it to Cloud Foundry, deploy it anywhere, and it feels the same, and it feels good, and it integrates well. That's my goal as the PM in charge of services in .NET. Right? I want the best API and cloud native experience we can get. And we've talked a lot about .NET Core, but this is also .NET Framework: Web Forms, WCF, things like that. With the new Windows stemcell Cloud Foundry stuff that you can do now, you'll be able to deploy all of those as well. And then, for those of you who are trying to move into that world: we have a lot of people, especially in .NET land, who come to us and say, for the love of god, just tell me how to do a microservice. Please. Come on, tell me. Tell me what it means. Tell me what micro means. So, five lines of code, definitely. Right? Someone laughed, thanks. So the .NET architecture guides are an ongoing set of work for us to say, okay, we'll tell you something. We'll give you as much help as we can. We'll give you at least some of the paths: here's path A, it's fine; path B, it's fine; path C, it's fine; you can pick one. At least try and give you that. And we try and give you a lot of guidance, and there's a lot of very common guidance. Right? Connection resiliency: if you're building an architecture that has seven different processes all talking to each other, you need to have resilient connections when talking to those things. That pattern is common, and the problems with retry logic, the problems with exponential backoff, exponential backoff with jitter in high-volume, high-throughput scenarios, right, all these sorts of things are where we're trying to provide more and more dedicated guidance and best practices, including with community-reviewed and peer-reviewed books. For example, this free ebook. It's, like, 250 pages, but the amount of information makes it seem like a thousand. There's lots of pretty good stuff in there. It's focused on just Docker containerized stuff, so the samples are all, like, docker compose up and you've got a full app running that you can experiment with. But the vast majority of the information, as with so many things, isn't really about Docker. Docker is kind of a packaging tool, almost an implementation detail, for all intents and purposes. A lot of what it's talking about is, okay, I've got my shopping cart. In this example I have my basket microservice and my ordering microservice and my catalog microservice. Basket has a price; catalog has a price. They're both different concepts: the price of the thing when I add it to my basket is different to the price of the thing when it's in my catalog. But if it changes in my catalog, my UI, my shopping cart, probably wants to tell you that. Right? Like on Amazon, if you go to the shopping cart, it says, hey, since you added this to your basket, the price changed. How would you implement that rule? The data is in two different services. Do you call both services from your UI? Does the cart service talk to the catalog service? Do you do, like, an eventing backplane, message queue thing, Kafka? What do you want to do? This book talks about all those sorts of things. Some of you are going, yeah, man, I have that problem. That's great. You should read this book, or come talk to me or something. It'll be great. And this repo, eShopOnContainers, has an app. We took, like, a shopping demo. It's got three different UI stacks, and it has
four or five different services. It's all .NET, but they're all implemented slightly differently: some of them are doing, like, hardcore DDD; others are very simple CRUD; one's got Redis; one's got SQL Server; you know, they use some message queues. This is at least our way of solving a lot of the problems you have with microservices in .NET Core. So go check that out if you're in that world. And then another common ask, outside of the "for the love of god tell me how to do a microservice," is: so, I really want to use .NET Core. .NET Core is amazing. It's so fast. I really want to try it. I love it, because we did a good job of making developers love our stuff. But, you know, my manager, he's not so keen on betting on this new thing. He's been around; he doesn't believe you. Tell us who all the cool people are who are using this, so my boss will want to use .NET Core too. So here's a bunch of links with customer stories, people who are using .NET Core successfully in production at various levels of scale: Tencent, jet.com, lots of them, some of these people running, you know, tens and hundreds of thousands of transactions. Stack Overflow runs 5.7 million page views a day on around eight or nine servers using .NET Framework; they haven't moved to .NET Core yet, just highly optimized .NET Framework. There's lots of good case studies there for you to go check out. Read about their success stories, read about what they did, if you're interested, especially if you're, like, tossing up whether you should try out .NET Core or try out some .NET stuff and you want to read about some other companies that have done the same thing. Go check these out. It's a common question; if you're thinking about it, go and ask. And that's about it for me. 29 minutes and 50 seconds. I am amazing, because I had 30 minutes. Thank you very much for listening. I'm
going to be here all week, myself and Ryan, who both work on the .NET server team. He's waving right now. Come talk to us, tell us how great or terrible we are, and let us know how you're using our stuff, if you're using it. We always like to hear from you.
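One piece of the resiliency guidance mentioned earlier, exponential backoff with jitter for retry logic, can be sketched in a few lines of C#. This is an illustrative sketch, not code from any particular library; the names and parameters (`DelayMs`, `baseMs`, `maxMs`) are made up, and in a real app you would typically reach for an established resilience library rather than rolling your own.

```csharp
using System;

// "Full jitter" backoff: cap the delay at base * 2^attempt (up to a max),
// then draw the actual delay uniformly from [0, cap]. The randomness stops
// many recovering clients from retrying in synchronized waves.
class Backoff
{
    static readonly Random Rng = new Random(1234); // seeded for repeatable demo output

    // Delay in milliseconds before retry attempt n (0-based).
    public static int DelayMs(int attempt, int baseMs = 100, int maxMs = 30_000)
    {
        double cap = Math.Min(maxMs, baseMs * Math.Pow(2, attempt));
        return Rng.Next(0, (int)cap + 1);
    }

    static void Main()
    {
        for (int attempt = 0; attempt < 5; attempt++)
            Console.WriteLine($"attempt {attempt}: wait {DelayMs(attempt)} ms");
    }
}
```

The key design point is the combination: exponential growth stops a hot loop from hammering a struggling service, and jitter spreads the retries out in time so the service isn't hit by every client at once the moment it comes back.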