Hi everybody, I'm Josh. This talk is on Nix, Kubernetes, and the pursuit of reproducibility. Although "the pursuit of reproducibility" is in the title, I'm not necessarily as pretentious as you might think. But I'm really looking forward to talking to y'all about some of the cool stuff that I've been doing with Nix and teaching you some of my learnings. So yeah, my name's Josh. I've worked at a bunch of companies in the Kubernetes space. Right now I work at Reddit, so if you use Reddit, your traffic goes through Kubernetes. Pretty cool stuff. I also wrote a book called Production Kubernetes, and I learned that VMware, although I don't work there anymore, is giving the book away for free, all 500 pages. So if you're interested, you can download it. It may or may not validate whether you put in a real email address; I can't speak to that. You can figure that out on your own. And before I get started today, I just wanted to make a quick mention of someone we lost this year. I think many of you knew Kris, or had at least heard of her through the keynotes. Kris was somebody I found to be a great coworker, a great friend, and a great climbing partner. And one of the things that really inspired me about Kris is how we could be outside climbing rocks like the ones you see here, and then be talking about how procfs has its limitations. Which is a cool melding of worlds. And I just want to say: we really miss you, Kris, and we love you. So how did this journey into Nix start? This is where I was living when I started playing around with NixOS — rural Kentucky. I was SSH'd into my servers, which you're going to meet in a moment, and I was doing the thing I always do: tinkering with something, changing some configuration. And then I realized that I'd written a file to disk that I forgot about, changed something in the /etc directory, and never reflected it in Ansible or in my Packer builds. And I was just like, you know what?
Maybe it's time I start experimenting with that hot new technology I keep hearing about. Now, Nix has been around for a long time, but I started playing around with NixOS: the operating system that's built from Nix packages, which in turn use the Nix language, to compose a Linux operating system. And while I work at Reddit, I'll admit I submitted this talk before I worked there. Reddit doesn't actively use Nix in any serious capacity, but I'll try to tie in some of my learnings and talk about whether they might apply to Reddit someday. So this is my server. The server we're going to be working on in the demo today is called Hades — that's the god of the underworld, in case you're curious. Hades is in the middle right there. I often fondly refer to my rack as hell, because it sits underneath two water lines in my laundry closet. So someday hell will break loose, I can promise you that. But yeah, we're going to use Nix to set up a hypervisor today, a virtual machine, and a container image, and try to run all of those together. In fact, we'll go ahead and get started here by SSH'ing into Hades. This is truly talking to that server sitting under the water line, so if the demo breaks, it will most likely be because the water line literally broke. Okay, excellent. So let's talk about some key things we're going to get through here. If you're not familiar with Nix, the language positions itself as something that lets you build composable software with maximum flexibility, with builds that are as reproducible as possible. That'll be a central theme as we look at how these different pieces shape up today. Now, this is probably a diagram you've seen many times: in a lot of our worlds, we live at just the VM layer and the container layer, because we're using something like Google Cloud or AWS.
I mean, some of us are doing bare-metal stuff in cloud providers, but most of us aren't as concerned about the hypervisor nowadays — though some of us still are. What we're going to look at is that layered cake. We're going to start with what Nix looks like setting up the actual hypervisor layer, then the VM layer in Kubernetes, and then the container layer as well. So let's dig into that by talking a little bit about the Nix concepts of modules and packages. Okay, if we go into Hades for a moment, what we'll find is that there's a configuration file inside /etc/nixos. These are some common files you'll see inside a NixOS install, mainly a configuration.nix. Oftentimes there's a hardware-configuration.nix in there that does some detection around the details of the underlying hardware, and configuration.nix is the primary place where we put all the good stuff about how our system should be configured. In fact, I'll go ahead and open up one of these configurations for the hypervisor real quick. So let's go in here, open this up, and look at configuration.nix. Okay, we're going to walk through this file a little bit as we talk about how Nix composes. But one of the first things a lot of Nix users work with is this idea of specifying a set of system packages. Nothing too crazy: there's the ability to go out and search for packages like you're used to with a lot of package managers. There's some nuance to packages in the Nix world, but for the most part we can set these things up and they'll be installed on our target system. So that's the /etc/nixos/configuration.nix file. Now, since we're talking hypervisor, here's what our stack's going to look like today.
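To make that concrete, here's a minimal sketch of what that system-packages section of a configuration.nix tends to look like — the package list is illustrative, not the speaker's actual file:

```nix
# /etc/nixos/configuration.nix — minimal sketch; the package list is
# illustrative, not the speaker's actual file
{ config, pkgs, ... }:
{
  imports = [ ./hardware-configuration.nix ];

  # Everything listed here lands on the system PATH for all users
  environment.systemPackages = with pkgs; [
    vim
    git
    htop
  ];
}
```

Running `nixos-rebuild switch` then realizes this configuration on the machine.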
It's going to be KVM with QEMU talking to it, and then us going in through the libvirt layer with tools like virsh and all that kind of stuff. Because while I could use VMware or something more reasonable, I don't value my time. So I set all these components up and then build on top of them to make a custom hypervisor solution. To get these packages installed, you can go to the Nix website and search for something like libvirt. And libvirt will show up as a package: you can get some details about how the package is created, how the Nix language was used to create it, and some of the programs related to the package. But what's interesting about some packages in NixOS is that, if you installed the equivalent on Arch or Ubuntu, you'd probably expect some things to come along — things like a systemd unit, some other configuration that ties itself to the package. That's not always the case with Nix packages. And that brings us to our second concept in Nix: the idea of options. Options are powered by these things called modules, where we can express ways to install packages, set up configuration, and so on. I think one of the most compelling things I've enjoyed with Nix is that with this module model we can build APIs for our systems, let people plug values into those APIs, and do some pretty darn complex things, as we'll see in a moment. So let's go to the website again. In the top bar up here, there's also the NixOS options search. An option we might search for is dhcpcd. All right, we can see there's a networking option in NixOS for dhcpcd, and it defaults to boolean true. So by default, when NixOS boots up, this is what will try to grab an IP address and set it up. We'll look at the config for that in a moment.
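In configuration terms, that option is just an attribute you set; the module behind it does the work of generating the dhcpcd service and config:

```nix
# Sketch: a NixOS option is just a value you assign; the dhcpcd module
# behind it generates the service and configuration for you
{
  # true by default — set false if you manage networking another way
  networking.dhcpcd.enable = true;
}
```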
Now, our hypervisor layout — I'll show you some of the config for it in a moment — looks like this. Pretty standard stuff: you've got a bunch of virtual machines, they connect to a br0 bridge device that acts like a virtual switch, and that thing is attached, if you will, to the ethernet interface for the physical adapter. And effectively that's our hypervisor, okay? So let's go here. Great. Let's go into the configuration file and take a look. There are a couple of things going on inside it. The first is that I'm actually turning dhcpcd off and using systemd-networkd, which might give you a lot of feelings depending on how you feel about that — but systemd-networkd works really well for me. What I'd typically do with systemd-networkd is write a bunch of unit files, set them up, make sure the bridge device can attach, and so on. What's nice here is that I have a declarative way, through these modules and APIs, to express those pieces. And as I'll show you in a bit, it takes us a step further: we can derive a reproducible, reusable setup from just this config and bring it into other systems as well. So the whole networking layer is configured here, and you may have noticed that things like libvirt are all packaged inside the virtualization modules and libraries themselves. So libvirt is enabled, we're specifying that it should use br0, and so on. Effectively, I can package all the configuration you see up here, and when Hades 2 comes along on that great day — hopefully the rack will have moved by then — I'll just bring this configuration over, run it, it'll build everything and set up the whole file system, and I'll be good to go. So we'll move on a bit from the hypervisor.
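A hedged sketch of that hypervisor networking config — the interface name `eno1` is my guess at the speaker's setup, and the unit names are arbitrary:

```nix
# Hypervisor networking sketch: dhcpcd off, systemd-networkd on,
# br0 bridged to the physical NIC, libvirt enabled
{
  networking.dhcpcd.enable = false;
  networking.useNetworkd = true;

  systemd.network = {
    enable = true;
    # br0 is the "virtual switch" the VMs attach to
    netdevs."20-br0".netdevConfig = { Kind = "bridge"; Name = "br0"; };
    # enslave the physical adapter (name is an assumption) to the bridge
    networks."30-eno1" = {
      matchConfig.Name = "eno1";
      networkConfig.Bridge = "br0";
    };
    # the bridge itself gets the host's address
    networks."40-br0" = {
      matchConfig.Name = "br0";
      networkConfig.DHCP = "yes";
    };
  };

  virtualisation.libvirtd.enable = true;
}
```

The appeal is that all of those hand-written networkd unit files collapse into one declarative module that can be replayed on the next machine.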
That's already set up on Hades, because it was too risky to demo setting it up live here. But what we're going to do next with Hades is set up some virtual machines for Kubernetes — something a lot of you are probably quite familiar with — look at the common components, what that looks like in a Nix ecosystem, and then how we could in theory set it up. So what does Kubernetes look like? Well, typically in a virtual machine, which is what we'll run today, we'll have a container runtime — we'll use containerd. It'll have the Kubernetes bits themselves: the kubelet, the API server, so on and so forth. And then also kubeadm, which is quite interesting. I'm going to be using kubeadm, which will actually init the node and set some things up. It's a bit of an impure approach in Nix land — people don't exactly love it — but I'll talk about why we're using it as we move along. The first thing we need to do is go in and set up a VM with these things, almost Packer-style. Show of hands, does anyone use Packer in the room? Okay, great. So this will be very close to what you're used to doing with Packer and so on, but using the Nix packages and all that good stuff. Okay, let's go back into root here and look at a VM setup for a moment. We're going to go into the VM images directory and open up the configuration file for that. And it doesn't look too different from our hypervisor, right? Basically, we need to figure out what baseline packages a node needs to run Kubernetes, and what settings we need to set as well. And if you're like me and you never write things down or make them scriptable like you should, you've probably looked at this page about container runtimes a lot. Does this doc page look familiar to anybody? Have you seen it? Okay, great.
Yeah, I mean, how many times have you forgotten to run those sysctl commands, right? When you're setting up nodes, there are different things you need to turn on: you need to make sure certain kernel modules are loaded, that some of the sysctl settings are applied, and so on. And then if you use containerd — I don't know if you've ever been plagued by switching between cgroup drivers under the hood and all that stuff — that's another thing we need to configure. A bunch of prerequisites, more or less, for our node. Well, similarly, in this NixOS configuration file we have a bunch of different stuff. One of the things we need to update here — I'll show you the implication of this in a moment — is going to say "Hello, KubeCon Chicago", so we have the most up-to-date node. Inside of here we have the boot module, which has kernel settings and sysctls; we can set all of those here. We're loading additional kernel modules that NixOS doesn't load out of the gate, and we're setting up a bunch of packages, including containerd, CRI tools, ethtool, socat, on and on. Another interesting thing is that we're also specifying our systemd unit in here, which will be baked into this specific system. Now, the systemd unit is a pretty interesting one. This is something I'm doing because I'm installing the kubelet manually. So I'm putting the kubelet package in, right? But it doesn't come with a systemd unit by default, which is a bit of a deviation from what you might be used to. So I'm expressing the systemd unit file for the kubelet here, putting in details about kubeadm and all that good stuff, and making sure all of this is set up inside this Kubernetes host. Effectively, it's a declarative way to specify how to build and grab all the different assets from the Nix package ecosystem and bring them into this virtual machine. Now let's take a look at what actually building the virtual machine looks like.
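The pieces just described — kernel modules, sysctls, packages, and a hand-rolled kubelet unit — might be sketched roughly like this. The flags and paths are assumptions, not the speaker's exact file, and in practice the kubelet needs more configuration than shown:

```nix
# Kubernetes node image sketch (unit details abbreviated; paths and
# flags are assumptions)
{ pkgs, ... }:
{
  # modules and sysctls the container-runtime docs tell you to set
  boot.kernelModules = [ "overlay" "br_netfilter" ];
  boot.kernel.sysctl = {
    "net.bridge.bridge-nf-call-iptables" = 1;
    "net.ipv4.ip_forward" = 1;
  };

  virtualisation.containerd.enable = true;

  environment.systemPackages = with pkgs; [
    kubernetes   # provides kubelet, kubeadm, kubectl binaries
    cri-tools
    ethtool
    socat
    conntrack-tools
  ];

  # The package doesn't ship a kubelet unit here, so declare one ourselves
  systemd.services.kubelet = {
    description = "kubelet: the Kubernetes node agent";
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      ExecStart = "${pkgs.kubernetes}/bin/kubelet --config=/var/lib/kubelet/config.yaml";
      Restart = "always";
    };
  };
}
```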
And that might give us some insight into the implications of some of these things, okay? So I'm going to go back into our buddy Hades, go into HyperNix, go into the VM images, and run a command called nix-build. Don't worry about all the flags and stuff; I'll show you a GitHub repo that covers this. Now, the first thing we'll notice that's kind of cool: since we're specifying an API with a module, we can check things. We can see if certain things should be deprecated; we can warn our users — our configurers — about the different things they might have set at the system level. The next thing you might notice, aside from a bunch of cool scrolling text, is that it's doing a bunch of work in this thing called the Nix store on my system. Essentially, Nix has a way of specifying builds that creates these things called derivations — that's what you see in these .drv files. In fact, MOTD — that's what I changed to say hello from Chicago — that's the message of the day. So even a thing like the message of the day in Linux, which is basically just a text file, is something we're actually packaging up. It gets its own SHA from the perspective of the build descriptor itself, and it's packaged as an asset in the Nix store behind me here. So now you can see some of these packages with the SHA value for the build that's referenced: you have LVM2, you have OpenSSL, all the typical stuff, all composed, all referenceable, and so on. In fact — okay, so we've got our build here; let me zoom in a little more. You can see where it's actually building a QCOW image in here: going through, building the image, building the file system, setting everything up. The end state I'm left with is actually one of these.
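To see why even an MOTD gets its own store path, here's a toy derivation — a hedged sketch, not the speaker's code — that packages a one-line text file. Nix hashes the inputs, so the output is rebuilt only when they change:

```nix
# default.nix — a toy derivation: a one-line MOTD file becomes a hashed,
# cacheable Nix store path
{ pkgs ? import <nixpkgs> {} }:
pkgs.runCommand "motd" {} ''
  echo "Hello, KubeCon Chicago" > $out
''
```

Building it with `nix-build` produces a `/nix/store/<hash>-motd` path; build it again unchanged and Nix just returns the existing path.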
Let's go here and ls it — and that is the machine image we've just built, which theoretically has the Kubernetes bits on it. Pretty cool. Now check this out for a moment, humor me. If we go into configuration.nix again and, let's say, we go back to Amsterdam... okay, not spelled that way — maybe I've typed that word too often. Okay, then we run nix-build. I still get warnings, of course, but once this moves forward, we should see it reuse intermediate output. So what does this mean? Well, by understanding the derivations for every single configuration setting, for every single package — even a shell script I'm going to put in bin, right — all of that can be pre-computed, calculated, understood, rolled back, more buzzwords, right? And it's placed in these stores and is accessible, so it knew: oh, I already have something that represents exactly this configuration you're now putting in. Amsterdam already happened, right? So effectively we can build these things, have all these assets be reusable and referenceable, and it's turtles all the way down with Nix: a bunch of Nix language specifying pure functions that take an input, build a thing, and provide an output. Okay, so without further ado, let's see if we can get a Kubernetes cluster up. Again, I have a NixOS image right here. We're going to do a quick cp of this NixOS image into /var/lib/libvirt/images/kubecon, and call it k8s-base.qcow2, just to make it a bit obvious. All right, next what we're going to do with Hades is tell it to start spawning the different servers based on this base image — all on the hypervisor itself. So let's go down here and get this session into Hades as well. Thanks, Hades. And let's set up a watch — and by the way, just ignore that I'm logged in as root on the system.
We're going to watch a virsh list. virsh, if you're not familiar, just goes through libvirt and shows you the different virtual machines we're setting up. We'll keep a watch right here. Then up here, to speed things along, I wrote a little script — we'll take a quick look at it. That's an ugly pink, hard to see, but it's basically a quick script that runs a command called virt-install: it takes our Kubernetes base image and starts instantiating virtual machines inside Hades from that k8s base. So let's do that right now. We'll run the virt-install script. It's going to copy over some images — we're not doing anything clever with pooling. I mean, since we're running our own bare metal, look how fast that is, right? That was not a dig at cloud providers, by the way. So essentially we have three different VMs set up and running now. And now it's really just a question of figuring out their IP addresses so we can join them into a functioning Kubernetes cluster. So let's check exactly that. We'll go into Hades — actually, I'll be bad and hop into my gateway real quick, 192.168.1.1. This one is actually not root, so let's do one of these. Cool. Spaces. Excellent. Okay, show DHCP leases. Cool. And we'll just keep refreshing. Actually, I think these bottom three are in. Not that it means much to you, but along with a bunch of devices like the Litter-Robot at my house — that's our litter box; it has an IP; that's embarrassing — we have three hosts at the very bottom there. I think .220, .221, and .222 are our VMs that just bootstrapped, okay? So let's see if we can get into these and reason about what's going on. We'll go back into Hades again — that's kind of our portal; we're going to do it all from the hypervisor.
And then if we SSH into the first node at .220, we'll do one of these, say yes, use root. That looks good. And hey — hello, KubeCon Chicago. Pretty sweet, right? So we've essentially got our NixOS virtual machine bootstrapped, with a unique IP, so on and so forth. We're going to set up three different terminals here for all three hosts. Let's do that now. So we'll SSH in twice more. Hades is less happy with this terminal size; that's okay. We'll do that IP address again, incrementing it each time. So let's do ssh 192.168.1.221 — yes, root is my user, and 221's good there. Then ssh to .222. All right, we'll have a cluster here in no time. So this, and then root. Okay, now, again, I'm being a little impure in NixOS terms, and I'm going to use kubeadm. kubeadm is going to pull down the necessary assets and configure the system — which should ring a bell if you think about the purity of what I've been talking about with Nix, where the system config maybe should have already happened. But we'll get into that. So we're just going to run a kubeadm init on our control-plane node right here. And just like we'd expect in most environments — show of hands, who's used kubeadm? Okay, most of you, great. kubeadm, if you're not used to it, probably runs under the hood of whatever you're using. So kubeadm is grabbing different stuff; it's yelling at me about version mismatches and whatnot. But once this control-plane node is up, we should be able to join our two worker nodes in just a moment. Okay, so we'll get this set up, and — yep, there we go, making some moves. The kubelet is getting booted up right now as well. Let's go back to our slides while that's going and see if there's anything else. Oh — the VM builds I showed you. Let me make one quick mention of something that's kind of cool.
It's similar to what you could do in Packer: we've expressed this modular configuration with all the dependencies, all the settings, and so on. Now, there are a lot of really easy ways to add details for how a raw image should work versus a copy-on-write image versus an AWS AMI, and so on. There are a couple of ways to do this in the Nix ecosystem. The one I use — if you're going to go home and try this — is a repository called nixos-generators. I've basically cloned it down and built off of it. What's cool about nixos-generators is that I can take one of those configuration files like I showed you, and then use these generators — which are like extra Nix modules, if you will — to generate whatever I need as far as virtual machine technology goes. Something you'd kind of expect as table stakes, but just calling out that while we did this with QCOW, copy-on-write images, you can totally target other providers as well. So if I want to run this very advanced home lab multi-cloud, in theory I could start producing AMIs with very similar configuration on them. Awesome — and we'll get to containers in just a sec. So, all right, we have the cluster up. Hades seems happy. Let's go up and start by grabbing the kubeconfig — there we go. Awesome. Then we'll grab the kubeadm join token from down here, okay? On the control-plane node, let's do a quick watch of kubectl get nodes; that'll be easy enough. Then let's go down to hades-spawn-3 and hades-spawn-4, and as they run their kubeadm join commands, we'll see the control plane start seeing workers join. And now we have a three-node cluster. Now, of course, the cluster is not ready.
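The nixos-generators idea — one node config, many image formats — can be sketched roughly like this. The flake structure and attribute names here are illustrative assumptions, not the speaker's repo:

```nix
# flake.nix sketch using nixos-generators (attribute names illustrative)
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    nixos-generators.url = "github:nix-community/nixos-generators";
  };

  outputs = { self, nixpkgs, nixos-generators, ... }: {
    packages.x86_64-linux = {
      # same module set, different target formats
      qcow = nixos-generators.nixosGenerate {
        system = "x86_64-linux";
        modules = [ ./configuration.nix ];
        format = "qcow";
      };
      ami = nixos-generators.nixosGenerate {
        system = "x86_64-linux";
        modules = [ ./configuration.nix ];
        format = "amazon";
      };
    };
  };
}
```

Swapping `format` is all it takes to go from a local copy-on-write image to an AMI-ready one.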
This is probably something we're all really used to, because if we go over here and do a kubectl get pods across all namespaces — what are we missing? CNI, yes. Now, question for y'all: what's your favorite CNI? All right, I heard Cilium, great. He might be biased, but I did hear Cilium. Okay, so let's install Cilium. Well, I don't have the Cilium CLI on my host, but that's okay — let's go over to Nix packages for a moment and see if that vibrant package ecosystem has anything relating to Cilium. Oh, the Cilium CLI is in here. Well, one cool thing about Nix, as an end user, is that there are a lot of clever ways to make certain binaries available in your path. It tries not to pollute the whole system space, giving you ways to scope packages to certain users, or even to certain directories — which is really slick: think per-project npm versions and such, where you cd in and boom, direnv and the tooling give you the right packages. Pretty cool stuff. But I don't have time to show you that right now, so we're just going to use the nix-shell command. If we add the -p flag, we can say: hey, go ahead and give me — not that — give me the Cilium CLI. nix-shell goes: okay, sweet, no problem, Josh, I gotcha. This is going fast, again, because it's running on Hades, not on the Wi-Fi here. And now we have the Cilium CLI. So if we do a quick cilium install in here, Cilium will reach out to that cluster using the kubeconfig and start instantiating the CNI on it. Eventually we'll have a fully functioning cluster, which will only be missing one more piece of the puzzle: our containers. But before we go to containers, we'll make sure our host is healthy here. And while it's bootstrapping, I'm going to do a little bit of cleanup. We'll drop back out into Hades here.
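The per-directory version of that `nix-shell -p` trick is usually a checked-in shell.nix — a small sketch, with the package set as an assumption:

```nix
# shell.nix — same idea as `nix-shell -p cilium-cli`, but pinned to a
# project directory (pairs nicely with direnv)
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  packages = with pkgs; [ cilium-cli kubectl ];
}
```

Run `nix-shell` in that directory (or let direnv do it on `cd`) and both tools land on your PATH without touching the system profile.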
We're going to scp a kubeconfig over for something that's going to happen a little later. Let's do that — that looks great. All right, so in theory, unless Cilium's lying to us, we should be good. Let's do a quick cilium status with their CLI. Again, it seems pretty good overall; things seem healthy. Let's do a kubectl get pods across all namespaces. Now we have Cilium up, and more importantly, the big thing that wasn't ready before was CoreDNS. Now CoreDNS has networking; it's ready to go, it's happy. Hopefully we're all happy. Okay, fantastic. So let's talk a little bit about actually getting a container image onto this thing, and then wrap it up. Container images are interesting. A lot of you know that container images are basically just tarballs, with some specifics around how they're laid out. So Nix, like it does for a lot of things, has pure Nix functions that give you utilities to build images. One of the common ones is called dockerTools. It basically provides the ability to do Dockerfile-esque things — but, a lot of people in the Nix community would argue, in a much more reproducible way — where we can specify certain elements of the packages we build, and even put in assets and scripts, and have all of those be referenceable and reproducible. Pretty cool. But for a layman like me, basically we have something that looks, config-wise, a little bit like a Dockerfile. Let's look at a slightly fancier version of this real quick. I'm going to go back out here, go up one more, and go into containers — this is where I put it, great. All right, so let's look at this Docker-image configuration that's going to go through Nix. This is nginx: the canonical demo when you're not using WordPress.
Essentially what we're doing here is instructing Nix that we want to build a completely layered image. What we want is for every layer of that image to basically be one of those Nix store assets: the binary, the system pieces, whatever we're building up here. And then we'll put it in a fully layered image. Now, there's some magic happening here because this is going to use, sort of, a scratch image: there's not going to be any baseline OS, although doing things like that isn't impossible. So we're using fakeNss to get around some of that, and we're bringing in the nginx package as a dependency as well. If you look down here, you can actually see the config; they try to keep it somewhat close to what you'd see in a Dockerfile, and there's a whole page of docs on containers and how you can build them. But one of the kind of cool, maybe slightly nuanced things to note is that in the conf up here, I'm specifying a bunch of things that will each produce derivations — those reproducible blobs that go into the store. The web root is one of those. The port, sort of. The nginx configuration: the writeText piece holds all of the nginx config. All of this is referenceable, with a hash associated with it, all tied together. And then the great thing is, you can see I'm referencing those variables lower down, right? And if I go even lower, you can see the nginx port and so on. All of these things I'm packaging — building this reproducible thing with Nix through variable substitution and so on — are now expressed in the buildLayeredImage call. So let's build. Remember when I used nix-build a little earlier, and it was quick, but nix-build built our VM images for Kubernetes?
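The shape of that image expression might look like this — a hedged sketch with illustrative names, ports, and content, not the speaker's exact file, and omitting details (log and temp directories, users) a production nginx image would need:

```nix
# Sketch of a layered nginx image built with dockerTools
{ pkgs ? import <nixpkgs> {} }:
let
  # each of these lets produces its own store path / derivation
  webRoot = pkgs.writeTextDir "index.html" "<h1>Hello from the underworld</h1>";
  nginxPort = "8080";
  nginxConf = pkgs.writeText "nginx.conf" ''
    daemon off;
    events {}
    http {
      server {
        listen ${nginxPort};
        root ${webRoot};
      }
    }
  '';
in
pkgs.dockerTools.buildLayeredImage {
  name = "josh-kubecon";
  tag = "1.0";
  # no base OS; fakeNss supplies just enough /etc for nginx to resolve users
  contents = [ pkgs.nginx pkgs.dockerTools.fakeNss ];
  config = {
    Cmd = [ "${pkgs.nginx}/bin/nginx" "-c" nginxConf ];
    ExposedPorts."${nginxPort}/tcp" = {};
  };
}
```

Note how `${webRoot}` and `${nginxPort}` are substituted into the config: every referenced piece is a hashed store path, which is what makes the layers reusable across rebuilds.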
Well, we can do something similar here. We can run nix-build on this file, and it'll go — awesome — it just popped a .tar.gz out, because it said: Josh, you didn't change anything; I've evaluated your config and its hash, and your binaries and their hashes, and we already have one of these — stop bothering us. But we'll change it a little. Let's just change the tag — we haven't blown up yet, why risk it? We'll change the tag, do a build, and then you'll see a bit of what's essentially being created here. All right, great, cool. If you look closely, you can see 54 layers. These layers all reference the different store paths that represent everything from GCC to zlib to all kinds of stuff — obviously nginx is in there as well. You can see the layers expressed up here and so on. So in theory, we've got a fully functioning image that's good to go. Let's see if we can get this thing uploaded. I'll look at a file that was created — let's bring it higher — it's called result. Let's run a docker load and load result into Docker. Of course my user's not in the docker group, because why would I ever do that? And I've loaded the image into Docker, and now I'm going to push on the conference Wi-Fi — but most of the layers should already exist, so in theory this might still work. So we'll push the josh-kubecon image. There are all of our layers representing the different pieces; the layers pretty much all exist already, because we only made a metadata change here — on the tag, I should say. So now we've got the SHA value up, and good: it should be available to our cluster, and we should theoretically be able to wrap this up. Let's go back over to our buddy Hades — I think it was in this one, yep, that was it. Let's see if Hades can reach the cluster as well. We'll do a kubectl get pods.
Oh — Hades doesn't have kubectl. Well, we know how to fix that, right? We can do nix-shell -p and grab kubectl. With kubectl in the path here, we'll do a get pods. No pods in the default namespace — excellent. So Hades is reaching the cluster. Let's make it real simple here: we'll have one line of sight into Hades. Okay, wrap it up. So we have a pod YAML. It's a very simple pod: it's a message from the underworld. We're going to go to the image here, bump it to 1.4, and save the pod YAML. Now I'll apply the pod YAML, then do a get pods and see if message-from-the-underworld starts up. Okay, so it's creating. In theory, if the image is available and can be pulled, we should see message-from-the-underworld show up on our host. Again, even this container was built entirely with Nix. Let's give it just a moment to set up. Message-from-the-underworld is up. Let's go back to Hades and do a port-forward. I'm going to bind openly so I can access it from any host. And I'm going to forward to message-from-the-underworld on 8080. We'll hit enter. We're port-forwarded to that awesome — very reproducible, mind you — container. And we'll go into our browser here, hit the host on 8080, and there's Hades. Oh, there's the marquee — I used the marquee tag in HTML. Pretty cool stuff. Yeah, thank you. Thank you. I bet this is the first demo where you saw the marquee tag being used, right? Pretty fricking cool. Has anyone played the Hades game before? Show of hands. Wow, awesome — I'm so psyched I put this reference in. That's fantastic. All right, everybody, in conclusion, let's wrap it up. I hope you enjoyed this kind of rambly, crazy talk. One of the things I wanted to wrap up with is whether I'd use NixOS and Nix at Reddit.
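That pod manifest is probably close to this — a hedged sketch where the pod name, image name, tag, and port are assumptions matching the demo:

```yaml
# pod.yaml sketch (names and tag are assumptions matching the demo)
apiVersion: v1
kind: Pod
metadata:
  name: message-from-the-underworld
spec:
  containers:
    - name: web
      image: josh-kubecon:1.4   # the Nix-built layered image pushed earlier
      ports:
        - containerPort: 8080
```

Applying it with `kubectl apply -f pod.yaml` and then `kubectl port-forward --address 0.0.0.0 pod/message-from-the-underworld 8080:8080` matches the flow shown on stage.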
So you might be surprised. I'm pretty excited about Nix — I think it's really cool — but no, I wouldn't introduce it at Reddit. Here's why. I actually really like it for hypervisors. I love setting up my hypervisor with Nix: it's so lovely to have something reproducible, something I can tune, where I can understand the system configs. But at Reddit we mostly sit on cloud providers. For VMs — maybe someday, if I could get my coworker Jamie to agree we should do it, it's not out of the cards. But I think there are models that might fit us a little better: for example, there's Bottlerocket, there's Flatcar — more container-native things that focus on actually deploying containers. And of course CoreOS, my old company; love CoreOS to death. And then for containers, here's the big caveat — and this is my sentiment about Nix in general. I really like Nix. I think it's really frickin' cool, right? However, I really wonder if it's going to struggle with the whole "worse is better" concept. I don't know if any of you are familiar with "worse is better": it comes from the history of Lisp's progression while C was gaining prevalence throughout the ecosystem. You should look it up — Google "worse is better"; it's a really cool essay. What I'm getting at is: I wonder if Nix is waiting for its Dockerfile moment, right? The backbone's solid; the purity, the things you can build and compose, are insane. But we need to make it reach end users, especially if we want them building container images — and I'd argue maybe even make it easier for VM images for Kubernetes as well. I think that's a huge opportunity to make the Nix ecosystem flourish. And I admit, though I'm naive to some of them, there are a lot of projects trying to do just that. Thank you all for your time today. I really appreciate it. It was a pleasure talking to you. Thank you.
We solve really cool problems at Reddit, and I would love for you to come work with me — we have a careers page. I'm also here to answer any questions. I think we're out of time, but I'm happy to chat if you want to come up to the stage or the side or wherever. All right, thanks everybody. Later.