Hello, everybody. I'm Steve Ognibene. I signed up today to give a talk about .NET Core 2.1 in production, but unfortunately that talk is canceled, so we're going to do .NET Core 2.2 in production instead. That's .1 higher, so it should be backwards compatible with the previous talk, according to semver. So, I work for a company called Namely. I've been working with .NET since basically the beginning, SQL Server since the 90s, and I'm a Pluralsight author with a couple of courses about TypeScript out there. You can find me on Twitter at @nycdotnet. Namely's mission is to build a better workplace for mid-sized companies. We write HR software, human resources software, for companies in the United States. Namely was founded with a Rails app, and that forms the core of our main HR application today. We also have a payroll app, which is ASP.NET Web Forms. Getting these two things to talk to each other is a bit of a challenge for us, and they are what we like to call in the software world monoliths, which is to say that they're very self-contained. So, talking a little bit about scaling for startups: you hear a lot on the internet about how monoliths are bad, monoliths are really hard to scale, and that's true. And you hear a lot that what you should have instead of monoliths is service-oriented architecture, where everything is its own little service. This is great, but it's very expensive and very complicated; it adds a lot of complexity. Monoliths tend to be very simple. They're small and self-contained, and there are a lot of problems you just don't have when you have a monolith. The only thing you don't get for free is scaling. But I don't want to discourage anyone from using a monolith.
What we found is that a really good way to start is with a monolith, and then when you start having customers, which means you start having traffic, do a monolith with more hardware: throw hardware at the problem. This is the way you advance a monolith, and it's actually important to do it this way, because it means you actually have customers. The reason you're having scalability problems is because you have customers, and you can throw hardware at the problem, and that buys you time to start doing services. Start with a monolith, get customers, and use money from customers to build services. Planning services for Namely: in mid-2016, we decided, it seems like we have customers, we have a business model, this is actually really working for us. Let's think about what we want the future of our applications to look like. On the Namely side, we had Ruby on Rails, running on Linux in Docker containers on top of Kubernetes. We were actually very early on with Kubernetes; we guessed right, I think. This was 2016 already, and those were running on top of AWS EC2 instances. On the payroll side, we had C# and VB.NET; like I said, it's an ASP.NET Web Forms app running on Windows and IIS, on EC2 instances as well. As we started thinking about where our future was and what we wanted it to look like, in coordination with our SRE teams, on the payroll side we decided Linux and Docker and Kubernetes were good, but the thing we didn't want to do was convert all of our payroll code to Ruby. It didn't really feel right for the types of applications we were writing: lots of financial calculations and things like that. Ruby to this day, I look at it and I think to myself, how could this ever work? But it works somehow. We have really good software; it's just not for me, maybe.
So we decided we wanted our future stack to be whatever language the team writing the software wants to write it in, from a short list of approved languages, but that underneath that, they'd all be running on Linux in Docker on Kubernetes, supported by EC2. And I'm looking at my watch here that doesn't exist because, I don't know, old people do this, right? They look at their wrist. So later on today, we're actually going to EKS, which is Amazon's hosted Kubernetes, and that's going to happen in like three hours. So it should be fun, going to production with EKS really, really soon. So we had a question: how do we write C# on Linux? This was a number of years ago. We basically had the choice of thinking about Mono, because that was something that was out at the time. It did exist, but we were kind of old-school Microsoft developers: if it didn't come from Microsoft, we were still kind of scared of it, right? This was pretty early days, and so we decided to go with .NET Core, which had been released and was on 1.0, I think, at that time. So we had our payroll web app that talked to the payroll DB, and in February 2017, we had a hackathon at work and we created three brand-new .NET Core Web API projects, and we called these services, and they ran on .NET Core 1.1, on Linux and Docker, which is great. We used the dotnet new templates, so all of them had kind of a React front end and a Web API back end talking to SQL Server, and none of them won the hackathon. So that was kind of sad. I think the thing that won was some kind of machine learning AI blockchain thing; I'm not really sure. It's exciting for a hackathon, but it wasn't for us; it's not business-worthy. The good thing is this was actually successful, because all three of these services are still in production to this day, and we followed them through the whole tool change. So that was pretty exciting. A question, though.
I said services, but are these really services? If you notice, they're not really independent; they're still talking to the payroll DB. We had this idea that services were going to allow us to scale infinitely, but it turns out they're all still talking to the same relational database, and there's really only so big you can scale a SQL Server before you start to have scalability problems and blocking problems and contention problems and all sorts of stuff like that. So in terms of services, I would say these are in fact not services, even though we kind of use that term. What they really are, and I'm not sure if we invented the term, I don't think we did, is BFFs, which is not best friends forever; it's called backend-for-frontend. A backend-for-frontend is a captive back end that is specifically tied to a front end, something like React or whatever on the client side. It implements all the server-side calls that are appropriate for that front end, and then talks to the database and the back end. So they're not services, even though we kind of use that term. Talking about scale for startups: remember I said the idea is that you can throw hardware at it, and that works for a while. If you're going to go with services, I highly recommend using an approach like the one we're using now, which I'll talk about in a moment. But there's this thing called the distributed monolith, which is what we started building. We started building lots of BFFs, backends-for-frontends, and you wind up with a distributed monolith: you're basically still talking to the same database or some shared things, and you wind up duplicating a lot of logic in a lot of different places, sometimes even in different languages, which can be really problematic. I like to say that the road from monolith to distributed monolith is paved with BFFs.
So you need to be very careful about not doing this sort of thing if you want to live in this glorious service-oriented Valhalla heaven type thing. So, .NET Core 2.1, pardon me, 1.1: it was pretty fast. You could develop on Mac, Linux, and Windows, which is kind of nice. It worked great in Docker and with Linux, and there was really good command-line tooling. ASP.NET Core had something called middleware, which is really, really nice. What it lets you do is intercept the HTTP pipeline at various points, and you can do things like, for example, we wrote a middleware which works with our authentication proxy. Basically, if somebody comes in and doesn't have the right headers on their request, we reject them, so the call never gets to the controller, which is really nice. We can decorate our controllers with various rules that are appropriate to our specific authentication requirements, and say this particular controller requires this group and this group, or this group or that group, that kind of thing. It's really nice. We implemented that with middleware, and it's, I don't know, 100 lines of code, and we use it all over the place. The people writing the end services don't have to think about it. They just decorate their methods and all of a sudden everything is secure, which is really nice. Of course, it's open source, right? That's the reason why we're all here, and .NET Core 1.1 was open source. Problem: lots of APIs were missing. We had to do crazy things like writing polyfills for .NET. That's something you would expect from JavaScript, where there's not really a core framework, and in .NET we were very allergic to it. We didn't really like doing that. The docs were wrong, there was the whole project.json drama, and the tooling was bad; Visual Studio was not ready for it. It was just problematic.
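The header-checking middleware described here can be sketched roughly like this. This is a hypothetical illustration, not Namely's actual code: the middleware class name and the header it checks are invented.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

// Sketch of middleware that rejects requests missing an auth header,
// so the call never reaches the controller. Header name is illustrative.
public class RequireAuthHeadersMiddleware
{
    private readonly RequestDelegate _next;

    public RequireAuthHeadersMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        if (!context.Request.Headers.ContainsKey("X-Auth-Groups"))
        {
            // Short-circuit the pipeline before any controller runs.
            context.Response.StatusCode = StatusCodes.Status401Unauthorized;
            return;
        }
        await _next(context);
    }
}

// Wired up once in Startup.Configure:
// app.UseMiddleware<RequireAuthHeadersMiddleware>();
```

Because the middleware runs for every request, individual services only need to decorate their controllers with the group rules; the rejection logic lives in one place.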
So really we found that .NET Core 1.1, while we got it to work and shipped it to production, was really for early adopters. Then in August of 2017: .NET Core 2.0, yay; Visual Studio 15.3, yay; and .NET Standard 2.0, yay. This is all really good stuff. This is when .NET Core really became decent. The APIs we needed were back: they added 20,000 APIs. The docs were better, the tooling was better, and it was really fast and stable. I'm going to talk about this in a minute; we had a little thing with Kubernetes. It was not a big deal; we figured it out. The only thing we were kind of sad about with .NET Core 2.0 is that it didn't work right with the full framework until basically .NET Framework 4.7.1, which came three months later. Because that was delayed, we did all the work we needed for .NET Standard 2.0, and then we kind of got bored with it and never actually ported our monolith. So we wound up supporting essentially two versions of the same thing for like a year and a half, which was kind of a drag. But anyway, we survived. When .NET Core 2.0 came out, there was a blog post by Stephen Toub from Microsoft showing microbenchmarks, which are benchmarks without context. So, a really tiny thing; the particular code doesn't matter, but he's basically testing how fast enqueuing and dequeuing objects in a queue is, and he advertised dozens of speed-ups of between two and ten times. That's pretty exciting as part of your core framework. Now, the interesting thing was not this blog post, or even that .NET Core 2.0 was so much faster. The interesting thing to me was a follow-up by a guy named Andrey Akinshin, who at the time worked for JetBrains, and he redid the same blog post, basically, but using something called BenchmarkDotNet.
BenchmarkDotNet is a library I recommend everybody check out, or at least understand what it does, and I recommend searching for that .NET Core 2.0 performance blog post. What BenchmarkDotNet does is abstract away all the complexity of dealing with timing for microbenchmarks. Say, for example, you're emailing somebody at the same time your benchmark is running: your computer is doing work, and so it's very easy for things you're measuring in nanoseconds or microseconds to come out slightly wrong based on other stuff your computer is doing. BenchmarkDotNet will run things a lot of times, and it figures out just the right number of runs to get reasonable statistical significance for the benchmarks, which is really, really good. So I highly recommend checking that out. But anyway, people were like, oh, BenchmarkDotNet is cool, I bet I could use BenchmarkDotNet too, and people on the .NET Core team started doing that, and we're going to see it made tremendous differences in 2.1. This was actually a big cultural shift, I think. So, at Namely, we started working on something called .NET Helpers, and this is our common library for .NET services and applications. It has abstractions for things like Autofac. Anyone use Autofac here? Okay, very good. How about dependency injection generally, for anything? A very large number. So Autofac is an implementation of dependency injection for .NET. It's the one they probably copied when Microsoft introduced their own dependency injection framework. It's really complicated, and I love what it does for us, but I hate working with it. The nice thing about Autofac is you can abstract it totally away. I'll give you an example. In .NET Helpers, there's one guy at our whole company who knows how Autofac works, so we had this one person write helpers for all the normal people like, pardon me, I touched the microphone, all the normal people like me to use.
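A minimal BenchmarkDotNet sketch, in the spirit of the queue microbenchmark described above, looks like this. The library takes care of warmup, iteration counts, and statistics; the benchmark body here is just an illustrative example.

```csharp
using System.Collections.Concurrent;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// BenchmarkDotNet repeats this method until the timing results are
// statistically meaningful, insulating you from background noise.
public class QueueBenchmarks
{
    private readonly ConcurrentQueue<int> _queue = new ConcurrentQueue<int>();

    [Benchmark]
    public void EnqueueDequeue()
    {
        _queue.Enqueue(42);
        _queue.TryDequeue(out _);
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<QueueBenchmarks>();
}
```

Running the console app prints a summary table with mean, error, and standard deviation per benchmark method.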
As an example, at the beginning of all our programs we have a menu of all the things the program needs, and it uses .NET Helpers helper methods to abstract that away. We see here, in the startup, called by startup: I'm going to add a Postgres DB connection factory, I'm going to add a payroll DB connection factory, I'm going to add a reader and writer for the service database, I'm going to add a reader and writer for payroll DB, and that's it, one line each to set all that stuff up. The nice thing is that when you set all this up in the beginning, and you understand it, and by the way, each of these things is 100 percent covered, ridiculously covered, by tests, it means the application developer writing their service doesn't have to deal with all this low-level nonsense that they don't care about. That wasn't the laser. All this low-level nonsense that they don't care about, right? When I'm using it, I can just say, hey, I'm going to write a query class, and I want an IDatabaseReader, and the one I want is the payroll DB one, and that's it. I don't have to deal with connection strings. I don't have to remember to set my application name in the connection string; that's all done up top, so the application name is in there, and every .NET Core application we have sets it, so we know which person to blame when the database server is having problems. All that stuff is abstracted away, so we really like what Autofac does for us, and we like using it because we don't have to deal with Autofac directly, only indirectly. We use Dapper, which is Stack Overflow's micro-ORM built on the IDbConnection interface, and we really like it a lot. We do testing with .NET Helpers, too, which integrates some helpers for xUnit and NSubstitute.
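The registration pattern described above might look something like this. To be clear, .NET Helpers is Namely's internal library, so every method and interface name below is invented to show the shape of the idea, not their real API.

```csharp
using Autofac;

// Hypothetical startup: one helper call per dependency the service needs.
// The helpers hide connection strings, application names, and the Autofac
// registration details from the application developer.
public class Startup
{
    public void ConfigureContainer(ContainerBuilder builder)
    {
        builder.AddPostgresDbConnectionFactory();   // invented helper names
        builder.AddPayrollDbConnectionFactory();
        builder.AddReaderWriterFor("ServiceDb");
        builder.AddReaderWriterFor("PayrollDb");
    }
}

// A query class then just asks for the abstraction it wants via
// constructor injection; no connection strings in sight.
public class GetEmployeeQuery
{
    private readonly IDatabaseReader _payrollDbReader;

    public GetEmployeeQuery(IDatabaseReader payrollDbReader) =>
        _payrollDbReader = payrollDbReader;
}
```

The point of the pattern is that only one person has to understand Autofac; everyone else consumes the one-line helpers.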
We use FluentAssertions, which is a library that lets you write your tests so that you do the Arrange and the Act, and then the Assert phase reads like "some variable should be five"; it lets you write out the assertions as prose, in English or whatever. We have all of our tooling integrated with this as well: Logz.io, which is our ELK stack; New Relic, which is a company that helps analyze performance and things like that; Datadog for graphs and charts; and Bugsnag. All of this comes to us "for free", quote-unquote, when we use .NET Helpers in our libraries. Also our authentication, I talked about the middleware, and gRPC, which I'm going to talk about in a little bit. Anyone here use gRPC? Five-ish, six-ish. Okay. Cool. So, with that standard .NET Helpers setup, it took us a while, but we finally got there. Now, my favorite kind of PR is one where we're deleting code from our monolith, because all that stuff is implemented in .NET Helpers, and even though it's written for .NET Core, we can compile it for .NET Standard and use it as a NuGet package in our ASP.NET Web Forms app, and it lets us delete code, which is great, because it's probably code that didn't have tests, and now it totally does, which is really nice. So, protobufs. As I've said, BFFs aren't really services, and the problem is that you deal with a concrete implementation that's meant for a specific front end, and that can be a problem because it's not flexible: the minute you need to do a new thing, or have something else that's not that front end call it, you often have to add extra stuff, and it starts to get more complex. So we use something called protobuf to help us write language-agnostic services, meaning the client can be in whatever language, and the server can be in whatever language.
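The prose-style assertion approach described here (the library is FluentAssertions, used with xUnit) looks roughly like this; the class under test is invented for illustration.

```csharp
using FluentAssertions;
using Xunit;

// Arrange / Act / Assert, with the assert phase reading like English.
public class PayCalculatorTests
{
    [Fact]
    public void Adding_two_and_three_should_be_five()
    {
        // Arrange / Act
        var result = 2 + 3;

        // Assert: "result should be 5", written as prose.
        result.Should().Be(5);
    }
}
```

On failure, FluentAssertions produces a readable message describing what the value should have been and what it actually was.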
Now, for our services, we write them all in C#, but there are other teams that write their services in Go; you could theoretically use Python, you could use Ruby, there are a lot of choices. There are a dozen or so languages that protoc supports, and the client and server code is generated for you. It's hard to see, but here's an example of a service. This is actual real code that we use, in the proto format. It's kind of its own little language, but it's just for defining stuff. We have a namely.paycycles package defined, which turns into a namespace in C#, and then I have a service called PayCycles that gets created as a class, and there's a method on that class called RollbackPayCycle, and it takes a RollbackPayCycleRequest, and it returns Empty, which basically means it'll throw or work. Then here's the message, which just describes what the request looks like, and it's an object that has a UUID on it, and that's all. What we do is run this container that we have, which just wraps the compiler from Google, and we do docker run, here's my proto compiler thing, and paycycles.proto. So we're pointing the command-line tool at the proto file we just wrote, and what we get is this: in Visual Studio, a Namely gRPC project gets created and compiled, and all the stuff necessary to be a client of the service, or to implement the service, is set up for us automatically. Now, I've got a question. Do you see, on the side here, these sort of red prohibited signs? I don't know if it's the same sign in Europe or not. Anyone know what that means in Visual Studio? If you see that next to a file, it means it's not checked into source control; it's intentionally ignored in source control. Why would we want to ignore these files? This is the core of our services. Any ideas? They're generated. Who checks their DLLs into source control? Nobody. Well, there are certainly some scenarios.
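Reconstructed from the description above, the proto file looks roughly like this. The package, service, and message names are approximated from the talk, so treat them as illustrative.

```protobuf
syntax = "proto3";

import "google/protobuf/empty.proto";

// Becomes the namespace in the generated C#.
package namely.paycycles;

// Generated as a service base class (server) and a client stub.
service PayCycles {
  // Returns Empty: the call either works or throws.
  rpc RollbackPayCycle (RollbackPayCycleRequest) returns (google.protobuf.Empty);
}

message RollbackPayCycleRequest {
  string uuid = 1;
}
```

The docker run invocation then just points the wrapped protoc at this file; swapping the output flag from C# to go, ruby, or python changes which language gets generated.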
Don't get me wrong. Maybe for external dependencies, that's a thing people do. But in general, if you're writing some code, you don't check the binary output into source control, because it's not really relevant; outside of certain circumstances, you don't want to check it in because it's made for you at build time. So we don't check these in. What we check in is the proto file, and during our CI/CD pipeline it generates the actual C# code, or Java code, or Go code, or whatever language it's implemented in, and it makes these things for us. Here, I'll tell you a little thing; I know this is a TypeScript room. One of the things we've added recently, we don't actually use it yet, but instead of doing C#, let me just go back real quick. So, you pick a language; I picked C#. The only thing you would do differently to get a Go file, or Ruby, or Python, is change this last part to say, literally, ruby, or go, or python, and it generates the code for you that implements that proto, which is pretty neat. So, now we have one called web. Any guess what that does? TypeScript. TypeScript. It generates the JavaScript code to do the binary serialization and deserialization of the messages to the protobuf format, which is a binary format, so it's more efficient on the wire than JSON, and you get a .d.ts file which describes the service and how to interact with it. That's awesome, right? Good times, because all of a sudden you're dealing with a strongly typed interface to your thing, and then the implementation, you do whatever you want with. So, on the client side, there's one call. Now, again, we're not doing this yet, so I'm describing it as awesome because I'm not aware of all the problems it probably has; anyway, we don't do this just yet. We use something called gRPC Gateway, so we make our clients by hand.
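For the web case, the generated .d.ts has roughly this shape. This is an approximation of what the protobuf JavaScript generator emits for the message described above, not actual generated output; method names may differ between generator versions.

```typescript
// Approximate shape of a generated .d.ts for the rollback request message.
export class RollbackPayCycleRequest {
  getUuid(): string;            // the UUID is a plain string in JavaScript
  setUuid(value: string): void;

  serializeBinary(): Uint8Array; // binary protobuf on the wire, not JSON
  static deserializeBinary(bytes: Uint8Array): RollbackPayCycleRequest;

  toObject(): { uuid: string };  // plain-object view for convenience
}
```

The value is that the front end gets a strongly typed, compiler-checked view of exactly the same contract the C# (or Go, or Ruby) server implements.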
But this is something that's out there, and you get a .d.ts which describes the getters and setters and a serialize method and a toObject and stuff like that. But again, there's that UUID; it's implemented as a string in JavaScript, right? So, that's kind of neat. Then in 2018, .NET Core 2.1 came out, which is pretty cool. There was another performance blog post, this time based on BenchmarkDotNet, and again they showed two- and ten-times speedups over .NET Core 2.0. That's amazing: it was already pretty fast, and now it's even faster for a lot of different stuff. The microbenchmark improvements are all over the place. That follow-up post after the original .NET Core 2.0 blog post changed the culture on the .NET Core team, and they started chasing little improvements all throughout the framework. It made a real big difference in how fast .NET Core is, and we, the users of .NET Core, all win because of that. What did I do? Oh my goodness. Oh, it's back. Okay, great. Panic. Yeah, so we saw a lot of tweets like this: guess what, I upgraded my site to ASP.NET Core 2.1 yesterday, right? It's pretty amazing; the performance is really good. So, I wanted to show you a real graph, okay? This is something we got out of Datadog; I exported this last summer. Here is .NET Core 2.0, memory usage for a particular service: around, I don't know, 90-ish megabytes. .NET Core 2.1: around, I don't know, 50, 55, something like that. Pretty good. No code changes, just going from .NET Core 2.0 to 2.1. Now, there's a problem with this, though: it's Photoshopped. The funny thing is that with a graph like this you would expect that maybe the Y-axis would be Photoshopped, right? Up and down. The Y-axis is actually correct; it's the X-axis that's Photoshopped. And the reason is because we had to revert. We couldn't use .NET Core 2.1. And here's a little hint. You see this part? A little bit of a jump there, isn't it? Okay, we'll get to that in a moment.
So, amazing performance, even better, right? Lower memory footprint, and .NET Global Tools. Who here uses npm? All right, so npm -g, right, installs a global tool. This is the same thing, but for NuGet. You can deploy a package to nuget.org and then people can install it as a global tool with .NET Core 2.1, which is pretty cool. It has Span<T>, it has all this other stuff; it's really good. The cons: bugs, okay? We ran into a lot of problems with .NET Core 2.1. The first is the New Relic BadImageFormatException. Basically, there was a breaking change in the way the code-versioning profiler API worked in .NET Core 2.1. This is me copying out of a blog post to try to speak about it intelligently, as if I fully understand it, but the bottom line is that New Relic, which is a product we use to analyze the performance of .NET Core, assumed the JIT would only happen once for a given piece of code. That was true basically forever, until .NET Core 2.1 came out, and unfortunately it was no longer true. So we would get these errors booting up; it just didn't work at all. But here's the awesome thing about .NET Core and open source that we can all appreciate: there's the bug, there's the fix. We could see what the fix was, and this is when it was actually released, which was a couple of months later. They're still working on how quickly they release stuff. But it's pretty neat that we can see the bug report and get the actual fix and see what changed. Problem was, we had a summer of fun. There was a blog post that came out that said, hey, .NET Core 2.0 is not going to be supported as of September. Meanwhile, the fix was due mid-August, and we know Microsoft always meets their deadlines for shipping stuff, right? Always. So, ah, panic: we're going to have a week to do this, at best.
If they made their deadline, we'd have a week to fix it before 2.0 stopped being supported. This is ridiculous. So, we complained on Twitter, as we like to do, and thankfully they listened, and they released a new post that said, well, we'll give you an extra month. I like how it's crossed out in the blog post: we'll give you an extra month. All right, so, let's see; we're going to have to cut this a little bit short. So, there was a problem with performance. Remember how I said that graph was Photoshopped? Here's our memory usage. It was going up a little bit, and a little bit more: this is not going to work, 2.1. This is us rebooting it, right? Because I don't know what's wrong, reboot it, let's see if that fixes it, right? That was a laugh line: I don't know what's wrong, let's reboot it. Okay, so we rebooted it; eh, it's still happening. Okay, revert. Now, a straight line. Then this is the CPU usage, and this is actually a real graph. Basically no load, right? But the CPU is going do-do-do-do-do, busy for no reason, and this is the actual traffic. You can see the CPU is completely uncorrelated with the traffic on this service; it was just going bananas. But memory was fine. It turned out there was a CPU leak. This is another pretty graph that somebody submitted. I don't know if "CPU leak" is a real term, but basically there was work going on that was not real. And the root cause wound up being that HttpClient, which is part of .NET Core, and .NET also, implements IDisposable, but you should not dispose it. Okay, don't dispose it. There's a great blog post I love: "You're using HttpClient wrong and it is destabilizing your software." It's one of the best titles I've ever read for a blog post. It's actually old; it's even about the .NET Full Framework. So if you're using HttpClient and you're disposing it, you're probably having this problem today.
And you just don't notice it, except for something that happened: basically, they wrote a new managed sockets implementation. Remember that performance culture: oh, we can squeeze an extra 10 milliseconds, or microseconds or whatever, out of this baby. Unfortunately, it interacts very badly with this disposal bug, and it causes horrendous CPU usage for nothing, basically. So the fix is, well, you can use a static HttpClient, which is what the guidance used to be, but the problem is that in Kubernetes and other container-orchestrator-type things, if you ever have DNS changes, a static HttpClient doesn't update its DNS. So you wind up with bugs whenever things change their name or IP address or whatever. Instead, you have to use this new thing called HttpClientFactory, and I'm not going to go through the code, but basically you set up a named client, for example "github", and then later you resolve "github" and that's your client, okay. Behind the scenes, HttpClientFactory does all the DNS re-resolution and whatnot, and this is the right way to do it. So do it this way; don't do it the other way, or you'll wind up with CPU problems on .NET Core 2.1 or higher. We had another bug with OOMKills, out-of-memory kills: Kubernetes was just like, nope, sorry, you're dead. Basically what was happening was that garbage collection was not happening soon enough, so we were getting OOMKills, and the fix, I love it, was basically, whoops, we were using the wrong way to determine how much memory was in use. They were just using the wrong number to figure out when the GC should run, so they switched to the right number, and that made it better, except that, oops, we were still getting OOMKills, and there's another issue out there, and there's still some discussion about it. If you're using small containers, it's a problem.
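The named-client pattern described here is IHttpClientFactory from Microsoft.Extensions.Http; a minimal sketch, using "github" as the example name from the talk:

```csharp
using System;
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;

// Register a named client once, at startup.
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddHttpClient("github", client =>
        {
            client.BaseAddress = new Uri("https://api.github.com/");
        });
    }
}

// Resolve it wherever you need it; the factory recycles the underlying
// handlers, so DNS changes are picked up and sockets aren't exhausted.
public class GitHubCaller
{
    private readonly IHttpClientFactory _factory;

    public GitHubCaller(IHttpClientFactory factory) => _factory = factory;

    public HttpClient GetClient() => _factory.CreateClient("github");
}
```

Crucially, you neither dispose the client yourself nor keep one static instance forever; the factory manages handler lifetimes for you.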
So, you don't have a swap file in Kubernetes, and if you have server GC turned on, you get one heap per processor. So what happens if you have a 48-core machine? All of a sudden you're allocating multiple gigabytes of memory for a service that doesn't do that much, right? You need to cut that back. Instead, you can use workstation GC on your services running inside Kubernetes, and that means you have one heap, and it also tends to run garbage collection a lot more often. This is the mitigation: you don't wind up wasting a lot of space and time doing something that's not actually useful. So it's something to consider investigating, workstation GC, for your services if you're running .NET Core on Kubernetes with low memory limits. Today this is the only way to do it. It's not that it's not fixed; it's just something to be aware of. They're probably going to do something about it at some point. Okay, so, how we deploy .NET Core services: we use GitHub; we deploy to Jenkins, which does the build; we use Codecov to check what our test coverage is; we push the images that come out of Jenkins to something called Quay, which is a private repository for Docker images; and we use something called Spinnaker, which comes out of Netflix, thank you, and that's what actually puts our stuff out onto Kubernetes. Then .NET Core 2.2 came out in December, yay. Okay, we got some Event Tracing for Windows features, SQL access tokens for Azure Active Directory, code injection prior to Main, why not just change the code, and ARM32 on Windows, okay. So this is basically the most boring release of .NET Core. It's fine; it's .1 higher. It's like this talk: kind of boring. Anyway, just to wrap up: .NET Core definitely works in production today.
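The workstation GC switch mentioned above is a one-line project setting in .NET Core SDK-style projects; a sketch of the csproj fragment (the surrounding project file is assumed):

```xml
<!-- In the .csproj: opt out of server GC for small containers, so the
     runtime uses a single heap and collects more aggressively. -->
<PropertyGroup>
  <ServerGarbageCollection>false</ServerGarbageCollection>
</PropertyGroup>
```

This trades raw throughput on big multi-core boxes for a much smaller, more predictable memory footprint, which is usually the right trade in a pod with a tight memory limit.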
We are building our business around it, and it also works really nicely with the protobuf stuff, so people can use other languages as well. We're really, really happy with that. It's good for services, good for command-line tools, good for website back ends. It works really well with Kubernetes, with the exception of the OOMKill thing; if you have big pods that don't have tight memory limits, it's really, really nice, it works very well. It interoperates with all of our other tech via gRPC, it's super fast, and more or less the bugs are worked out now. Or, hopefully, we've run into them and we know how to work around them. So, yeah, we're really happy, and we're continuing to invest in .NET Core, and that's the talk. Thank you very much. Thanks. I think I've got two minutes, or otherwise, anyone who has questions, I can answer after. So, yeah, go ahead. (Audience question, partly inaudible, about what it would take to move to .NET Core today.) Yeah, well, I would hope, I mean, definitely the two things I mentioned, the OOMKills and HttpClient being really different and requiring HttpClientFactory in 2.1, those are the only things we're aware of. It depends on what you have. If you have Web Forms, it's a rewrite. If you have MVC, it's close to a rewrite. It really kind of depends. So, I highly recommend it for anything that's new: check out .NET Core. You can see what you can do with the dotnet new templates, or they're in Visual Studio too; if you like File, New Project, they work really well, and that's what we used. So, anybody else? I think I have one more minute. Sure, what's up? (Audience question about consuming DLLs.) I mean, we consume DLLs as part of NuGet packages, but not at runtime, so no. Are you aware of some challenges? Oh, I see. It's entirely possible that's the case. Yeah, I don't actually know; we don't do that. Oh, is it? Cool, very good. Neat. Cool, anybody else? Yeah. Okay, cool. Interesting, I did not know that. Sure.
(Audience comment: you can specify how many heaps you want to have.) Yeah, but it's still workstation versus server GC. Okay, yeah, and we found, and I think this is a shorthand way of working around it, that with workstation GC we tend to collect faster and keep a lower memory footprint overall. Server GC assumes that it's the only app on the server, and so it'll kind of use that memory until it runs out. I think I'm out of time, so thank you very much, everybody, for having me here.