The next presenter is Oliver Charles, who's been using NixOS for approximately two years now. He's a Haskell software developer at CircuitHub, and he's going to share the insights he gained using Nix in production at a startup. I should mention that the most significant contribution of Oliver to NixOS, as far as I'm concerned, is a series of blog posts in which he explained how he uses cabal2nix to turn Cabal builds into Nix packages so that he could use them. And I remember that it totally blew my mind to realize that you could actually document these things so that people... Whenever you search for Nix and Haskell, those are still by far the top Google hits. So have fun. Cheers.

Okay, so I want to talk about how I've been using Nix in production, and specifically how I've been using Nix in production at the startup that I've been working for for the last couple of years, which is called Fynder. I remember when NixCon was first announced and there was a call for contributions, I was really excited about being able to give this talk, because I thought: finally I can show the world that this is an operating system that's not just experimental and academic, but really serious and ready for the real world. But then only a couple of weeks ago, when I was thinking about this, I realized: wait a minute, I'm giving a talk at NixCon. That kind of already proves the point. So that's probably better validation that this is a serious operating system than just my talk, but maybe I can contribute a little bit more evidence that this is really ready for the real world.

So first, yeah, a little bit about me. I'm ocharles, as most people know me online, or Ollie in person, and I'm a web or full-stack programmer, currently specializing in Haskell. By full stack I mean that I do the boring HTML, CSS and JavaScript stuff, but I like to do more of my work on the backend using Haskell, talking to Postgres, services, and things like that. For all of the projects that I've been working on, I've also had responsibility for running servers and making sure that things actually stay online. This is where my interest in NixOS comes in, because over time I have found that this is the way I want to be running my servers. I've been a NixOS user since about 2013, and a fairly low-volume contributor to the project in terms of code; I think about 260 commits the last time I looked. A large portion of those were Haskell packaging things, but now Peter has made me entirely redundant by just automating me away entirely. But I still do a little bit of packaging and work on modules, and I'm the maintainer for the Postgres module and a couple of other small pieces.

I absolutely love this quote. A couple of people might have already seen it, but I was initially very skeptical about the NixOS project. This is a comment I left on Reddit, probably back in 2012 or something, when Nix was posted to the Haskell subreddit. At this point I think I had maybe three, maybe six months of experience with Haskell, so I was a complete guru on all things purely functional programming. I read about one sentence on the homepage, which said that Nix is this purely functional package manager, and I was just like, no, that can't be true. So I jumped straight into the comments section, and I said, well, it's not purely functional, otherwise it wouldn't actually do anything.
I don't think just because you treat your objects as immutable, you can call yourself pure, not in the sense of pure functions. The package manager still has side effects: it has to actually download and install things. So you can be immutable but not pure, and that's fine and a good thing, but I don't think mismarketing is a great idea. I actually think going straight to the comments section when you know nothing about a project is probably not a great idea. And really, I missed the point of Nix. The point of Nix is this idea that if the inputs don't change, then the build result won't change. Of course it's going to do things when you run the evaluation, but pure functional programs heat up the CPU too, and we don't worry about things like that. So I did eventually come back to the project and gave it another chance, and really read the research papers around it, and this is where things really started to click. It slowly began to appear on all of my devices, and since 2013 I haven't looked back at all.

So the startup that I'm going to talk about is not the one that I'm currently working for, but the one that I have been working for for the last three years, and that's a little startup called Fynder. To give you a bit of context, Fynder is an online booking system for classes and appointments. Primarily we were focusing on the wellness industry, so really small businesses, just one or two people running them, that offer things like meditation or yoga classes. We offer this as a service, so very Web 2.0 or whatever the current thing is now, but businesses should be able to sign up and be ready to take bookings and receive payments for their classes within minutes. We were entering this to try and disrupt a market that previously only had very complicated or technical solutions. The existing booking systems are things that have been around for five, maybe ten years; they assume you've got a company with tens or maybe hundreds of different employees, and they tend to be quite archaic and technical. So we wanted to make it really simple and just let people who have these ideas for running a business get their business online. And that comment there about being able to get online in minutes is actually something that we think we did manage to achieve: the last time we ran through setting up a fake business, we could be taking Stripe payments in about four minutes.

Fynder consists primarily of two main applications. We've got what we call the widget, which is what you as a business would embed on your webpage, and that's what allows customers to manage their schedule and make bookings with you. And then there's an admin application as well, which allows businesses to actually manage the schedule that they're offering their customers. So this is what a customer would see: it pops over your existing website, you can see a list of all the upcoming classes that are on offer, and you can book yourself in. And as a business owner, you get this calendar view to manage your upcoming schedule, and this basic front desk screen, as we call it, to see who's coming to your classes and deal with things over the phone.

From a technology perspective, it's a fairly traditional stack, I think. Both of the applications are written in ClojureScript, which compiles down to some static JavaScript files.
Then we run that on some sort of HTTP server; we use nginx. These talk to Haskell-powered API servers over HTTPS, just basic REST and WebSockets, and then all the data is stored in Postgres and Redis. So it's a very traditional stack, but to set the context for the kind of server administration I was going to be needing to do, that's basically what we're running. Sorry? Well, it would be a traditional Haskell stack maybe if I had GHCJS as well.

Okay, so let's dive in with some of the Nix stuff. For my day-to-day work, I actually need to write some code and build things and add new features, so I need to be able to enter some sort of development environment. I think a couple of years ago the nix-shell command was introduced, which really revolutionized my day-to-day development, because nix-shell allows us to spin up per-project development environments without clobbering our main global or user environment. For the Haskell side of things, this is really straightforward these days because we've got the cabal2nix tool, which basically automates all of this. We use cabal2nix to generate a default.nix expression, which is basically a function that specifies how to build a certain project. But you can't actually run functions; you need to supply the function arguments to do the evaluation, and that's what we use the shell.nix expression for. So shell.nix for very simple projects is really just saying: import all of nixpkgs and call this default.nix expression. For slightly more complicated ones, it says, well, you actually need these other projects to be built first, so we override the Haskell environment and specify those inter-project dependencies. A little bit later, I'm going to show you what that actually looks like.

For ClojureScript, the story is not so good. ClojureScript doesn't have great support in nixpkgs, and at the time I unfortunately couldn't put the time in to make it any better. So we have handwritten expressions for ClojureScript which do enough of the job, but they don't do a great job of managing all of the library dependencies that we use in ClojureScript. In order to ensure that everyone uses the same versions of things, we have our own fork of nixpkgs.

The nice thing about nix-shell in general is the amount of control you get. Languages like Python have virtualenv, and Cabal has Cabal sandboxes, but these only give you a subset of control over what your actual development environment is going to be. Using nixpkgs and nix-shell, we're able to control compiler versions, or specific patches applied to compilers, which we haven't needed to do, but it's nice to have that level of granularity and know that all of my colleagues are going to have a consistent development environment.
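For reference, a minimal sketch of what such a shell.nix can look like, assuming a cabal2nix-generated default.nix sits in the same directory; the real Fynder shells also wire in the inter-project dependencies mentioned above, which isn't shown here:

    { pkgs ? import <nixpkgs> {} }:

    let
      # default.nix is the build function that cabal2nix generated for this project.
      drv = pkgs.haskellPackages.callPackage ./default.nix { };
    in
      # Inside nix-shell we want the build environment; otherwise, the package itself.
      if pkgs.lib.inNixShell then drv.env else drv

Running nix-shell in the project directory then drops you into an environment with GHC and all of the package's dependencies available.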
But these shells only give you enough to do command-line builds. If you want to use Emacs or you want to use Vim, that's really your choice; we don't impose that on any of our developers, and we assume they have some sort of text editor setup that suits their workflow. And because these are build environments, they're also not runnable environments: they don't mention the services that are needed to run these applications. We did experiment for a while with using NixOps and building virtual machines for each project, but I just find spinning up virtual machines is a little bit too costly, so we haven't done that. So when it comes to actually having to run this stuff, there's still a little bit of manual communication and manual work that has to be done to make sure that people are running the services these binaries need. I think there's probably some interesting future work that could be done around nix-shell, actually spinning up containers as well and having the services running, but I haven't had a chance to look at that just yet.

So these default.nix expressions are specifications of how to actually build the software. As I said, they're automatically generated for Haskell using cabal2nix, and we have them handwritten for ClojureScript. And these build expressions are important because, as I'm going to show you in a moment, we use Hydra to actually do all of our builds for us.

This is what I was talking about with the handwritten ClojureScript, and this is really a bit of a hack when it comes to the nice properties of Nix. The real hack here is this lein line, which is actually going to connect to HTTP servers and pull down jars at build time. So I've given up some of the nice properties of Nix, of having really deterministic builds, for the benefit that this is a really easy way to get the majority of what I want out of the build system. And I have a feeling that jars are meant to be immutable anyway: if you have a new version, you release a new version. Question? Yeah, so lein knows how to do the dependency resolution itself as well. The downside here is that, because it's pulling every single jar down every time we do a build, this adds about a minute and a half to our build process. But I think that minute and a half still probably adds up to less time than it would take for me to actually package ClojureScript properly.

But there's a nice thing we can do in the build expressions as well. As you can see at the top, we've got some standard binaries that are required to do the build, but we can also parameterize our builds over basic literals. So here I've got a literal for widget host and widget tracking code. Widget host is the absolute URL where this service will be running, and widget tracking code is, I think, Google Analytics. The reason we're doing this is that these are JavaScript applications that are going to be running in a client's browser, and JavaScript is tricky to do configuration with because you don't have configuration files. We could probably generate an extra JavaScript file and require it at runtime, but I find it a lot easier to just generate this by compiling it directly into the source code as constants. The way this works is that I can re-export these function parameters at the top down here, and they get re-exported as environment variables which are available to lein. So when lein runs, it looks at these environment variables and basically interpolates them into the source code at build time. And again, if any of these change, then the build result will change, so we keep some determinism there.
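To give a feel for the shape being described, here's a hedged sketch of such a parameterized ClojureScript derivation; the attribute names, the lein invocation and the install phase are illustrative rather than the actual Fynder expression:

    { stdenv, leiningen, jre, widgetHost, widgetTrackingCode }:

    stdenv.mkDerivation {
      name = "fynder-widget";
      src = ./.;
      buildInputs = [ leiningen jre ];

      # Re-exporting the function arguments turns them into environment
      # variables, which the build reads and bakes into the generated
      # JavaScript as constants.
      inherit widgetHost widgetTrackingCode;

      buildPhase = ''
        # Impure, as described above: lein pulls jars over HTTP at build
        # time. 'cljsbuild once' assumes the lein-cljsbuild plugin.
        lein cljsbuild once
      '';

      installPhase = ''
        mkdir -p $out
        cp -r resources/public/* $out/
      '';
    }

Changing widgetHost or widgetTrackingCode changes the derivation's inputs, so Nix rebuilds the JavaScript with the new constants baked in.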
So to actually perform these builds, as I mentioned, we're using Hydra. If you're not familiar with Hydra, although you may have seen it already today, it's a continuous build system that's geared specifically towards working with Nix. All of the configuration in Hydra is done with Nix expressions, and it knows how to use these default.nix expressions that I've already written in order to do builds.

Our Hydra instance is configured to pull from our private GitHub repositories, and we have this single release.nix expression which basically defines how all of the projects will be built. This is what our release.nix expression looks like. It's a function itself: we've got it parameterized over a couple of things that have to be specified at the top, and then here some optional parameters that you can put in. Again, these are the Google Analytics codes and things like that that get passed into the ClojureScript builds. We also override the set of Haskell packages with our own proprietary Haskell packages, and also some other Haskell packages that we've had to fork in order to apply our own custom patches. This is using the latest version of the Haskell packaging infrastructure, and here you can see the rather insane amount of customization you can do. Metronome is our API server, Fynder and Fynder Extras are our own projects, and then there are things like jwt and monad-logger, which are forks of other people's work.

The body of the release.nix expression is just an attribute set where every attribute is basically a thing that needs to be built. Being good Haskell programmers, we of course wrote unit tests and then immediately disabled them, because we have types, so why do we need tests? Although you'll actually see that we do have acceptance tests later. The reason these are turned off is that, unfortunately, our unit tests seem to crash on our Hydra instance, not because of our code crashing but because of a GHC runtime bug. But we do have acceptance tests anyway. So these are the main things that we're going to be building. Here's the front end, which is the ClojureScript application, and you can see this inherit is basically just passing all of those function parameters down into the build. And these are all coming from our overridden set of Haskell packages, just saying Metronome is the Metronome Haskell package and Fynder is the Fynder Haskell package and so on. We also generate a channel, which I'm going to talk about a bit later, and then there's also some stuff down here about running acceptance tests, which I'm also going to come back to later.

So in Hydra, that file, because it's a function, is basically a template for our builds, and we can instantiate this build template in Hydra and specify what the actual values for those arguments are going to be. That's probably a little small to see, but basically what's going on here is that there's a large number of Git checkouts which are going to provide those inputs. If we go back over here, you can see that all of these paths are in angle brackets, which means they have to be available on the Nix include path, and by using Git checkouts in Hydra it basically puts those on the include path so you can refer to them. We've also got a couple of string values here which allow me to put in the API keys for each of the builds. Then we instantiate this once for our production environment, where we use all of our production keys, and we also instantiate it for a staging environment, where we can put in the staging keys, so that we avoid using our actual production Stripe instance, for example.
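As a rough sketch, the overall shape of such a release.nix might look like this; the project names, paths and default values are illustrative, not the real Fynder file:

    { nixpkgs ? <nixpkgs>
    , widgetHost ? "http://localhost"
    , widgetTrackingCode ? ""
    }:

    let
      pkgs = import nixpkgs { };

      # Override the Haskell package set with our own projects and forks.
      haskellPackages = pkgs.haskellPackages.override {
        overrides = self: super: {
          # Tests disabled because of the GHC runtime bug mentioned above.
          metronome = pkgs.haskell.lib.dontCheck
            (self.callPackage ./metronome { });
          fynder = self.callPackage ./fynder { };
        };
      };

    in {
      # Every attribute here becomes a job that Hydra builds.
      metronome = haskellPackages.metronome;
      fynder    = haskellPackages.fynder;

      # The ClojureScript front end, parameterized as shown earlier.
      frontend = pkgs.callPackage ./widget {
        inherit widgetHost widgetTrackingCode;
      };
    }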
Okay, so onwards with the servers. Fynder has the benefit of being a pretty new project and also having a really small user base, so we don't really require many servers at all. In fact, we're actually running on a single four-gig Linode server for production, we've got another almost identical instance as a production failover, and then we have one more server which is our staging server. All of these servers are of course running NixOS, and we do all of the deployment management with NixOps.

In order to configure these servers, because we're using NixOps we basically get all the nice properties of NixOS's declarative configuration files, and we've embraced the module pattern that's used inside NixOS. So in NixOS you have modules which encapsulate logical units of functionality, such as Postgres or Redis and things like that. We have our own service modules that run things such as Metronome, which is the API server, and the widget and admin web applications, which enable the nginx module and then specify what the different routes inside nginx map to. And we're using the option system here to provide API keys and to generate configuration files. That's mainly for the API server, which is our Haskell application: we use the option system to provide the API keys for it and then generate the configuration files that those servers are going to run against. We also have some modules that are basically pre-configured services. So there's nginx with some common options, such as enabling SSL and things like that. Postgres by default, I think, is configured to run on machines with 32 megabytes of RAM; we have a little bit more than 32 megabytes, so of course we have a module there which takes full advantage of the fact that we've got four gigs of RAM, and the same for Redis and Postfix and so on. And there's also these general configuration modules. The main one there is some essential kernel parameters that you need in order to run on Linode. Linode runs using KVM, so you have to enable a couple of different modules just to make sure that the machine actually boots. And because we've got a couple of these machines, I don't want to repeat myself, so we abstract that out into a module.

The nice thing here is that our actual individual machines come out really simply as a composition of all of these modules. This is just a snippet of our staging server. I specify the IP address where this machine is already running, and I give it a host name, which is just useful for when I'm SSHed in and I need to know where I am. Then the bulk of the definition is just importing all of these different modules: there's API checks, the API server and so on, and there are plenty of these in the actual config. The rest of the file is just specifying the essential configuration options, so the secret and public keys for Stripe, for example, and various other bits of configuration.
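Roughly, a machine definition in that style looks like the following; the IP address, module paths and option names here are illustrative:

    {
      staging = { config, pkgs, ... }: {
        # Where NixOps should deploy this machine.
        deployment.targetHost = "203.0.113.10";
        networking.hostName = "fynder-staging";

        # The bulk of the definition is just a composition of our modules.
        imports = [
          ./modules/metronome.nix    # the Haskell API server
          ./modules/web-apps.nix     # widget + admin apps behind nginx
          ./modules/postgresql.nix   # Postgres tuned for 4 GB of RAM
          ./modules/linode.nix       # kernel bits needed to boot on Linode
        ];

        # Options declared by our own modules.
        services.metronome.stripeSecretKey = "sk_test_...";
        services.metronome.stripePublicKey = "pk_test_...";
      };
    }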
In terms of managing services: I said Fynder loves systemd. I guess it's really me that loves systemd, but I'm responsible for all of this, so I'm going to say Fynder loves systemd as well. We make extensive use of systemd throughout all of this, because NixOS gives us a really nice and easy way to spin up new systemd services: you just say systemd.services.<whatever you want to call it> and then you can specify the systemd configuration right there. We're obviously using systemd to run things like the API server, which has to be running 24/7, but we also use systemd in what I think is a fairly interesting way, to encapsulate this idea of state as a service.

The idea here is that the API server obviously depends on Postgres, because that's where the data is stored, but it also depends on that Postgres database having a certain schema. What we do is have a migration binary that's encapsulated as a systemd service that the API server also depends on. This means that when I start the API server, systemd is first going to check that my database schema corresponds to exactly what the binary is expecting, perform a migration if necessary, and then the API server starts up. And the really nice property here is that when you do a NixOS deployment, what essentially happens is you copy a bunch of binaries over and restart the services that you need to in order to bring that system up to date. So we've managed to encapsulate schema migrations as part of the NixOS deployment process without really having to do anything particularly hard: I just specify the services, specify the dependencies, and let systemd take care of ordering those things appropriately.

I think there's also some future work that could be done here to take even more advantage of systemd, in particular socket activation. systemd has the ability to create a Unix or TCP socket before a service is even running; it starts accepting connections on that socket and then hands it over to whichever service requires it, in order to speed up the boot process. But I think you could also abuse that to do zero-downtime deployments, by having the socket buffer connections while we restart the API server. So this is future work; at the moment a restart and redeployment results in about 500 milliseconds of downtime, and given the kind of load we have, I'm willing to live with that for now, but I really would like to look into this at some point.
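To make the state-as-a-service idea concrete, here's a minimal sketch of how the two units could be wired together in a NixOS module; the metronome-migrate binary and the pkgs.metronome attribute are hypothetical stand-ins for the real Fynder pieces:

    { config, pkgs, ... }:

    {
      # One-shot unit that brings the database schema up to date.
      systemd.services.metronome-migrate = {
        requires = [ "postgresql.service" ];
        after    = [ "postgresql.service" ];
        serviceConfig = {
          Type = "oneshot";
          RemainAfterExit = true;
          ExecStart = "${pkgs.metronome}/bin/metronome-migrate";  # hypothetical binary
        };
      };

      # The API server won't start until the migration unit has succeeded,
      # so a deployment that restarts it also migrates the schema.
      systemd.services.metronome = {
        wantedBy = [ "multi-user.target" ];
        requires = [ "metronome-migrate.service" ];
        after    = [ "metronome-migrate.service" ];
        serviceConfig.ExecStart = "${pkgs.metronome}/bin/metronome";  # hypothetical binary
      };
    }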
Before I look at the final deployment process, I wanted to come back to those acceptance tests. We have these modules that specify how all of our services are run, and we can actually use these modules to spin up one-off virtual machines which are going to run our acceptance tests. The reason I use virtual machines here is that with acceptance tests, as a principle, you want to mimic your production environment as much as possible, which means having Postgres running on a certain port, nginx running on another certain port, and so on. If you try to run these in parallel, ports start to clash, you have to do a lot of resource management, and it becomes very tedious. Virtual machines basically solve all of those problems by giving you that isolation right up front.

I think it's easiest if I jump over to the code at this point to show what that looks like. We have two different jobs, and one of them is to build a VirtualBox disk image. Again, I don't really have to do anything particularly fancy here, because this is something that's built into NixOS as a module already. What I do is call eval-config, which is a function that comes with nixpkgs, and it's the same function that's used by nixos-rebuild. I specify all of the modules that I need to be present in this system's configuration. One important one here is the VirtualBox image module. I also need X11 running, because I'm going to be running Firefox to run Selenium browser tests. And then these are the modules I've talked about before, which do things such as make sure the database is running and make sure my API server is running. The rest of this is just a normal NixOS configuration file. I've configured this slightly weirdly, to redirect the API host to localhost, because I'm just going to be doing all my testing on one machine. I run the various services: I run Fynder, I run Metronome, the API server. There's a bit of nginx configuration noise here. And then there's a little bit of essential stuff at the bottom that's specific to the fact that I'm running in a virtual machine, which is to redirect the console, send systemd's journal to the console, and actually make sure this thing can boot. I also run the Selenium server in the background once the machine has booted up.

The other job we have is basically just a shell script which assumes that this disk image is already built. That assumption holds because I'm able to refer to this acceptance-test VBox job, and Hydra is clever enough to figure out that it's going to have to build the disk image before it tries to run the tests. The body of this script is basically just interacting with VirtualBox via the command line. So I create a new virtual machine, I give it a gigabyte of RAM, and, this is a bit messy, but I basically attach the acceptance-test VBox disk image. Then I boot the machine and wait for "Welcome to NixOS" to be printed to the console, and at this point I basically make the assumption that X11 is running and able to actually do things. So I just use VBoxManage to fire off this browser-tests executable and wait for it to finish. Well, I assume that it exits successfully, and then I echo its output back out to the build process.

I thought this was really cool, because I'm able to test not only that my binaries are exactly the binaries that I'm going to push to my production servers, but also that the system configuration itself is exactly what I'm going to be running on the production servers. Because you'll see here, in this operating system configuration, I didn't say how to run the API server; I just said, well, we're going to import the Fynder, or rather the Metronome, service and then we're going to run it, which is exactly what we do in production. So these acceptance tests come out very closely mimicking production, I think. Question? Yeah, and the annoying thing there is that it uses QEMU, and again we have a strange GHC runtime bug just in QEMU that's not present in VirtualBox. But this also has a nice property: I've split this into two different jobs, one that just builds the disk image, and that means if the acceptance tests do fail, I can download the disk image, attach it to a local VirtualBox instance, boot it up, watch the browser actually running, and hopefully at that point see where the failure is. I could probably do that with QEMU as well, but that came out very nicely with VirtualBox. So I think that's all of that.
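For reference, the disk-image job is roughly the following shape; eval-config.nix and the virtualbox-image module ship with NixOS, while the ./modules paths are illustrative and the exact system.build attribute name has varied between NixOS releases:

    let
      machine = import <nixpkgs/nixos/lib/eval-config.nix> {
        modules = [
          # Teaches the configuration how to produce a VirtualBox disk image.
          <nixpkgs/nixos/modules/virtualisation/virtualbox-image.nix>

          # Our own service modules, exactly as used in production.
          ./modules/metronome.nix
          ./modules/web-apps.nix

          ({ ... }: {
            # X11 so Firefox can run the Selenium browser tests.
            services.xserver.enable = true;
          })
        ];
      };
    in
      machine.config.system.build.virtualBoxImage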
So, on to the deployments. My main consideration when I'm doing deployments is that I want them to be fast: ideally I'm going to be deploying multiple times a day, well, if I'm having a good day. The slowest part of a deployment is going to be building binaries, and we've already done that, so why would I bother repeating that work? What I ideally want to do is reuse those builds that we got from Hydra, and I found the best way to do this at the time was to build a channel in Hydra.

Hydra does actually generate channels itself, but those channels came out slightly weird. I think if one thing failed, the channel would just refer to a previous version, whereas I'd rather that if one thing failed, there was no channel at all. So we build our own channel, and this again is almost a bit of a hack in the way it works. Let me jump over to the full definition here in the release.nix file. The core of it is this really strange make-fake-derivation function, which takes a name and a path into the Nix store, and then uses builtins.storePath to provide a derivation that will exist at that path in the store, but tells you nothing about how you would actually build the thing. So what we do in our channel is re-export all of the things we want available in the channel, and then when you actually try to do an installation, the only information you have is the hash. So the only thing you can do is talk to any of your available binary substitution servers, saying: do you have this thing built, because I need it installed? If they do, you can just download the binary and install it directly. If they don't have it, then the only thing you can do is abort the build, because you don't know how to build it. But that's actually exactly what I want when doing deployments: it means that my deployments will only use binaries that have been built on our Hydra server, which ideally means these are only binaries that have gone through the acceptance test process as well.

In order to expose that channel, we just have to do a little bit of hacking around with the nginx instance that serves Hydra. I've got two routes here; this is the format of a Nix channel, and I'm basically interpolating the project name and the job name to generate a production and a staging channel. Don't worry about the details there.
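A rough reconstruction of the make-fake-derivation trick looks like this; the helper name and the store path are placeholders, and in the real channel the paths come from Hydra's own build outputs:

    let
      # Something nix-env will treat as installable, given only a name and a
      # pre-built store path. There is no build recipe, so if no binary cache
      # has the path, installation simply aborts.
      makeFakeDerivation = name: path: {
        inherit name;
        type = "derivation";
        outPath = builtins.storePath path;
      };
    in
    {
      metronome = makeFakeDerivation "metronome"
        /nix/store/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-metronome;  # placeholder path
    }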
To actually use this in deployments, the way I do it is: at the top here is a NixOps file. I specify the Fynder packages as a function argument, and then I assume that Metronome is present in this Fynder packages object, so here I'm saying services.metronome.package is the metronome attribute from the Fynder packages. In order to actually provide that function argument, I just use NixOps. So here I'm modifying the production deployment by adding root's set of channels to production's include path (we do deployments as root, which maybe is not a great idea), and then I also set the Fynder packages argument to just import this Fynder production channel.

And now deployments become very easily scriptable; it's approximately the following. There's a little bit more error catching going on here, but we have our own nixpkgs repository on the deployment machine, so I cd into that and do a git pull. The pull here is basically to make sure I pick up any operating system upgrades, such as upgrading to a different NixOS release version; this isn't going to change the binaries that are actually deployed in terms of the API server and things like that. In order to change those, I do nix-channel --update, which is going to pull down the latest production and staging channels. And then I can do nixops deploy -d production, which has now been configured to know about these Nix channels, use all of those binaries, pull them down from Hydra, push them out to the servers and restart all the necessary systemd services. This whole process generally means deployments take about seven seconds, the last time I looked, which for me is pretty much perfect when it comes to deployments.

So that's everything I really wanted to talk about. Hopefully you've seen how we're using these Nix expressions not just to get development environments, but also as our build expressions; those dev environments can reuse the binaries that we're building on Hydra, by letting my colleagues use it as a binary substitution server. And then we've also got the module system, which allows us to do deployments, but also to reuse those modules for acceptance tests. So I've been really happy with how this has all come out, and it's certainly better than Chef and Puppet, which was my previous experience. Thank you.

Hey, so two questions. First question: can I steal the source code, please? Yeah. I mean, there's probably some stuff that maybe I have to clear, but I would like to get a lot of this shared. I should also mention, actually, that I cribbed all these slides from a blog post that I never got around to publishing, so hopefully in the next week or so I'll have a full write-up of all of this in a bit more detail. But yeah, I'd like to share some of this. Cool, amazing. And the real question is: how do you use all of this stack during development? Because during development the build is really slow, right? They're as slow as they need to be. We have the nice property that Nix is obviously only going to rebuild things that have changed, so if I'm just working on one project, I tend to be in that development environment and just do the minimal rebuilds that need to happen there. But in your broader question, like services and stuff, we... Sorry, I mean during your development process, do you do nix-build, or do you lein install or whatever? No, so for the ClojureScript stuff we do nix-shell on that project, which is going to bring down all of the dependencies. That was actually a bit of an abbreviation: there is a separate step that just brings down dependencies, which happens before the build phase. And then I just run things like lein repl and so on. For the Haskell stuff, I never run nix-build, but if I do nix-shell on a project that depends on another project that has changed, Nix will automatically build that for me and bring it into my environment. So that's what I meant when I said Nix does the least amount of work possible to make it happen.

Okay, question. How did you convince your coworkers to give Nix a try? Or did they even have a chance? So, if we're going to be honest, as you said before, I just said, do you want to try Nix? And they said, yeah. I had the benefit of working in very small teams. When we started this, we had no existing deployment infrastructure in place; this was a brand new project that we were starting from the ground up. I'd had Nix on my machine for a while and been using it for local development, and the rest of the guys were basically happy enough with how I was doing my development. It seemed to be working very well for me, and we were just willing to give it a shot, and I think it paid off nicely.

Hi. How long do you think it took you to set up the whole environment? So it's kind of tricky to put a timeframe on it, because it wasn't really something that we did from start to finish. The first thing that I did was build these development shells, because I just wanted to be able to work on the code.
And we weren't even deploying anything at that point, because we were still in a development phase. The acceptance test stuff took me a reasonable amount of time, but I did that far later than the other stuff. Setting up Hydra, as someone mentioned before, is not particularly easy, so there was certainly some time sunk into just getting Hydra working and finding a commit that worked, which was a bit annoying. I guess in the grand scheme of things it doesn't feel like it was a massive amount of time, but it definitely did take some time, which is why I'd like to document some of this stuff so people don't have to go through that kind of research process again.

Okay. Kind of along those lines: thinking back, if you were working in an organization that's already been running for quite a bit of time and you wanted to introduce some of this stuff, how do you think you might go about that? So I guess that's the position I'm in right now, since I've just joined CircuitHub in the last couple of weeks. The first thing I did there was to add these default.nix and shell.nix expressions to all of the projects, which at least means I can now work on that code and use Nix to build my development environments. The next thing I want to do is get a Hydra instance set up, because my colleagues there are also familiar with Nix and they'd like to see how well it plays out in practice. So what I want to be able to do is have my own Hydra instance running, and then I can tell them: just go into this project, run git pull, and run nix-shell with my binary cache, and you should be in a development environment in a minute or so. And hopefully at that point things start to become quite convincing that there is value in these tools. As for the actual deployments, I guess that's just going to have to be a bigger discussion, because that's quite a big buy-in at that point.

You mentioned before that you want to have something like nix-shell for containers. There is a small wrapper around nixos-container which is called nixos-shell. Okay. And go and... Okay, I'm not familiar with that, so I'll definitely have to look into it. It doesn't build at the moment, but it shouldn't be too hard to get it to build. Yeah.

So I thought it was interesting that you mentioned building ClojureScript using Nix. ClojureScript, well, Clojure, uses the Maven ecosystem for its dependency tracking. So do you feel that Nix can contribute something to projects that use Maven as their main dependency resolution system, or do you think they're sort of orthogonal to each other? Contributing back to that ecosystem, I'm not so sure about. I did briefly look into actually bringing some of that packaging into Nix and teaching Nix a bit more about how it all fits together, and it definitely looks doable. It looks like the classic story of having a bunch of stuff in environment variables that specify the paths you need to include. So I think what I'd rather do is try to bring some of that, ideally in an automated way, into nixpkgs. It's the same kind of idea as how we do it with Hackage: basically mirroring Clojars, which is one of the main package repositories. I think maybe at that point we'd have a convincing story to bring some more developers in, saying, hey, we already have your entire ecosystem, maybe you want to try our tool out and see if it helps you. But again, that's just a time thing really.
And I'm now not writing ClojureScript anymore, so I guess it won't be me. Any more questions? Okay, if you can still think of a question in a minute or so, Oliver is around, I think, right? So thank you very much for the nice talk. Thank you.