OK, and we're back. So you probably all know Rob. If nothing else, you've asked him to clear the storage on Hydra. That's not the only thing he does, though, and he'll tell us all about what he does.

Well, thank you. I hope you can all hear me. I'll try to keep you entertained for about half an hour or forty minutes until we go for lunch, which is of course the best part of the day. So, I'm Rob Vermaas. I work for a company called LogicBlox; you might have heard the name on the IRC channel or on the wiki. This is my contact information. If you ever want to talk to me, I'm on IRC most of the time, and on the mailing lists as well. We're there to help — just like the whole NixOS community, which is awesome and very helpful, usually on IRC.

I'd like to tell you a bit about who I am and what I've done with Nix. I looked back at my commits and found my first one in 2004, when I added Octave. That was because I was a student at Utrecht University, which is also the place where Nix was originally developed. Eelco Dolstra was one of my fellow students — he was a bit older, so he finished before me. After my studies I went to work there, on a project called Stratego/XT, which provided a language for building compilers. It consisted of some 20 Autoconf packages, and building that by hand was terrible. So what we did was start using the work Eelco was working on: Nix built this project and delivered it to its users. That was nice. Then we moved to TU Delft, where I worked on Hydra and a bit on NixOps, and I also contributed to Nix.

Like Rok explained, I also do a lot of support for the infrastructure that we have: Hydra, the website. I do that together with Eelco Dolstra; sometimes Sander helps, and there are NixOS community members who help with that as well, though mostly on the web side, not on the back end of the servers. Also, this year we started the NixOS Foundation, which is meant to give people a vehicle for donations, so that we can keep all this infrastructure running. So I'd like to encourage you to give us all your money so we can do all kinds of awesome stuff.

As I said, I work for LogicBlox. We develop a state-of-the-art database system that is used to build applications for many top retailers and banks — that's the marketing statement. I'm not going to tell you too much about LogicBlox, because it's not that interesting for you: it's not an open product that you can just use; you have to work together with us. But what is important is that we have a few NixOS people. First of all me, doing this presentation, but also Eelco Dolstra — there he is; everybody knows Eelco. Our CTO is Martin Bravenboer. You probably haven't seen many commits from him lately — I think his last commit was in 2008 — but he was using Nix very early on as well. Shea Levy, also a good community member, works for us. And we sometimes hire contractors to do work we cannot do ourselves for capacity reasons: Evgeny has worked for us, and many of you will know him as Phreedom on IRC.
What I usually do when I give a talk about Nix is remind myself why I use Nix, because I think it's important to reflect on why you do things. My first point is always: Nix protects me against me. I make software, and I break all kinds of shit all the time — I'm very good at making bugs. So I like that I can just roll back, because that gives a safety net that I actually need. You, of course, are perfect programmers and never make bugs. I'm also very forgetful: unless I make something completely automated, I forget how to do things. What Nix does is give me a vehicle to do things consistently, multiple times, and on different machines, and that takes away a lot of the pain I have with software development — I'm sure you have that as well. It also exposes the things I forget, like implicit dependencies. We all know what happens when you take an existing project and write a Nix expression for it: oh man, there are so many implicit dependencies, so many assumptions being made. Nix exposes what I forget to declare, and so it helps with the previous points. Another thing is that over time we have developed all kinds of tools — Nix, NixOS, Hydra, NixOps — and now we have just one language to rule them all, like it says on the Wi-Fi password. So that's what I like, and I hope you like it as well.

My presentation is basically a call for you to start Nixifying companies, and this will be a crash course in how to do it. There's a three-step plan. You find a nice company, which I did with LogicBlox — I'm sure you have nice companies as well. You apply Nix wherever you can, because people and companies have problems that need solving, and they can typically be solved very nicely with Nix, so just do it. And then: profit, or salary, or whatever your system is. That's my three-step plan.

When you build software, you want to run tests, and you want it to run somewhere. We have four components that I call the big four. Sander will probably say, no, there have to be five, because Disnix is also part of that — but in my case we don't use Disnix yet, so it's the big four. First of all Nix, which covers the whole pipeline; then Hydra, which we use for building and testing; and NixOps and NixOS, which actually get it deployed on systems.

So you arrive at a company that already exists — in the case of LogicBlox, it had existed for ten years already — and there's a lot of software there. What do you get? Bash scripts; something called Jenkins — I don't know if you've worked with it; it's terrible; manually maintained machines; all kinds of stuff. And of course people want to use enterprise-y operating systems like CentOS or Red Hat Linux, which give me the shivers. Of course I want to change everything, but you can't change everything at the same time. So the first step — because you can't immediately deploy with Nix or NixOps — is to make sure you do your builds with Nix, the first step in the whole pipeline. But man, closed-source software. I'm a really big fan of open-source software, partly because people actually design it to be shared with other people.
So they make sure they stick to certain standard build systems, which makes it relatively easy to build open-source software, even though there are shitty parts sometimes as well. In closed-source software, though, people use build scripts, hard-coded locations, binary files — who knows where they come from? Like library .so files: someone once built them, put them in a repository, and you just have to use them. Also huge builds: a typical package in Nixpkgs is probably a few megabytes; I think my first build of LogicBlox was about 2.5 gigabytes, and that size comes with its own troubles. Then there's "let's just download something from the network" — a typical problem when we package things in Nix, right? Or even worse: not using just one language, but six. It's all nice if you have one npm package and that's it, but imagine using Java, Scala, C++, whatever language they could think of that best suited the purpose at the time, all together.

OK, so you've Nixified your build — and I must say it was very ugly at the beginning, one very big script hacking around all these hard-coded locations. But of course you need to run it continuously, and that's why we have Hydra. You'll know this screen, at least I hope you do, because it's kind of the main server in the whole Nix sphere. Why do I like Hydra? Again, most of the advantages are really caused by Nix. It's basically a generalized Nix build — kind of a Nix scheduler, in a sense — and all builds are consolidated in one language, Nix. In my opinion that's really nice, because I hate Bash. Not because you cannot do good things in Bash, but it seems that all programmers forget everything they ever learned about programming when they write Bash. I don't know why that is, and I probably do the same — I'm not blaming others, I'm just as bad, and I need some guidance. Nix gives me that guidance. There's also less maintenance, in the sense that you don't have to maintain machines — like that Jenkins machine where one version needed GCC 4.2 and another some other GCC. I also want to be able to reproduce builds, to have caching of builds, which is very important when you want to compose builds, and to integrate with our deployment tools.

For those of you who have never installed Hydra: we may use some terms that make you think, what the hell is a jobset? What is a job? What is a build? In Hydra you can group builds together in projects. For example, at LogicBlox we have a project for each of our clients and a project for our platform. Then there's the concept of jobsets. You could see a jobset as building a branch of your product, or a pull request — you create a jobset for it, and the result of a jobset is a set of builds, which can be multiple things. The features that are important for us, and that we use a lot, are that it's so easy in Hydra to create clones of jobsets to build branches, and the ability to compose multiple builds into one logical build. In our Hydra we have 19 projects and about 205 active jobsets, which is actually a lot.
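To make those concepts a bit more concrete, here is a minimal sketch of the kind of expression a jobset points at — not LogicBlox's actual configuration, just the usual release.nix shape, where every attribute of the resulting set becomes one job, and thus one build:

    # release.nix -- a minimal, hypothetical Hydra jobset expression.
    { nixpkgs ? <nixpkgs> }:

    let pkgs = import nixpkgs {}; in
    rec {
      # The main build of the (made-up) product, tests included.
      build = pkgs.stdenv.mkDerivation {
        name = "myproduct-1.0";
        src = ./.;
        doCheck = true;
      };

      # A job composed from the first one: package the result as a
      # tarball that developers can download from Hydra instead of
      # building for hours on their own machines.
      tarball = pkgs.runCommand "myproduct-tarball" {} ''
        mkdir -p $out/tarballs
        tar czf $out/tarballs/myproduct.tar.gz -C ${build} .
      '';
    }

Cloning a jobset in Hydra then just means evaluating the same expression against a different branch or pull request as input.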
So what do we build? We build our platform — the database platform — and this is our main build. I had to rotate the picture to make it fit on the screen, so you may all have to tilt your heads. You can see that we have a lot of builds, and that's not because of the platform itself — that might be just 50 of them, probably a third. We also trigger things like benchmarks, tests, things related to deployment, or building a client application. So we have a few different types of builds, and that's because our builds have grown so huge in the build time they consume that we cannot run everything on every commit. We do build on every commit, but with a smaller subset of builds. For example, if somebody commits something to our platform, that triggers about 90 builds, which is about three hours' worth of builds if you ran them in sequence. We also have nightly integration builds, where we build our client applications as well, to validate whether a change in the platform breaks something in those applications. That gets harder, because each commit would then cost 300 builds and about 120 hours' worth of build time. If we did that on every commit, we would need a lot of build machines — we have a lot of them, but not that many; we're not Google or anything.

So we build client applications. What does that typically look like? There are usually two inputs: the platform release, an argument that specifies which release we want to build against, and the source code of the application. The expression typically looks like a call to a generic app-jobset function that builds a set of jobs describing how to build the application. There's a build LB config in there — we have a kind of standardized build system for LB applications — into which we can pass LogicBlox, which is the database, and our application server. In Hydra that looks like this; it should look familiar to you. Four builds come out of this function. One is the actual build of the application, including running the tests. You can also download a tarball of what comes out of it, so developers can just download it instead of building it for a few hours on their own machines. And we do some basic testing that is generic for each application; applications can add their own tests, and those show up as extra jobs.

Another thing that's important is the Charon closure. Probably not everybody knows this, but NixOps was once called Charon, back when we wrote our first paper about it. Nobody liked that name, so we chose the boring name NixOps. Ollie Charles explained how they use channels to do deployments; we do something similar. We don't make it a channel, but we create a composite build of all the dependencies needed for deployment. What you see here is a store path — the Charon closure — which contains everything we need to deploy: the LB application, software dependencies like the platform and the application server, the operating system, Nixpkgs, some generic system configuration files, the LB monitoring stuff, and the machine definitions, which are part of the source code of the application. So if we want to deploy, we can just take this path, set the NIX_PATH accordingly, and it will find everything needed for the deployment and just run.
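As an illustration of the idea — this is a sketch of the pattern, not our actual expression — such a composite deployment closure can be a trivial derivation that does nothing but reference everything a deployment needs, so that one store path transitively contains it all:

    # Hypothetical "deployment closure": a derivation whose runtime
    # closure drags in everything needed to deploy, so shipping one
    # store path ships the whole deployment.
    { pkgs, app, platform, machineDefs }:  # all inputs are assumptions

    pkgs.runCommand "deployment-closure" {} ''
      mkdir -p $out
      ln -s ${app}         $out/app
      ln -s ${platform}    $out/platform
      ln -s ${pkgs.path}   $out/nixpkgs    # pin the exact Nixpkgs used
      ln -s ${machineDefs} $out/machines   # machine definitions from the app's source
    ''

Pointing NIX_PATH at the nixpkgs symlink inside this path is then enough for the deployment tooling to find consistent versions of everything.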
I thought it was very interesting that Ollie Charles did something so similar at Fynder — it's good to know that people have similar ideas. So, as I explained, I started building stuff with Nix, and we couldn't use it for the whole thing, like the deployments. And of course they wanted to use CentOS. I hate CentOS. But the one thing I hate more than CentOS is that people build CentOS images manually, because building things manually, of course, is evil. So what I did was automate the CentOS image generation, so that at least we had a declarative specification of what is in our CentOS image — instead of somebody making the image by hand, changing some stuff, then cleaning out the SSH daemon keys, and all the other things you need to do to fix the impurities when you create something like that.

One thing that's really important here is that Nixpkgs contains a lot of weird functions. One of them is runInLinuxVM, where you can give it a Nix build and it will just run in a VM, magically. I think it starts up some QEMU stuff — I don't really know the specifics, but it just works. So if you ever need a build that requires root, for example to create a file system, you can just use this function, and it's awesome, in my opinion. It's also used for the EC2 images and the images for Google Cloud and VirtualBox. There's a reference there, so make sure to check it out, because this is one of those things you wouldn't necessarily think exists — it's undocumented, but still super awesome.

We also use that to test our platform. We provide a binary release to our clients and to our developers, and of course they're using Ubuntu and Fedora. But I don't want Fedora and Ubuntu machines in my build farm, because I'd have to manage and install them, and maybe use Puppet or something like that. So we just do it in Nix, because in Nix you can build these kinds of things declaratively. Here you see an example: the basic dependencies that LB needs — Bash installed, Java, Python, and procps, which is probably for the ps tool. And we have a function called test download package, which does the equivalent of downloading the package, unpacking it, and running some basic tests on it. This is all done inside a Nix build, so it integrates with Hydra automatically, which is really nice. And there's support for CentOS, Fedora, Ubuntu, Debian, whatever. [Audience remark.] Oh, really? I haven't used that yet.
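As an aside, to make the runInLinuxVM function mentioned above concrete: a minimal sketch of a build that needs root might look like this (vmTools.runInLinuxVM is the real Nixpkgs function; the build itself is a made-up example):

    with import <nixpkgs> {};

    # Run an ordinary derivation inside a throwaway QEMU VM. The
    # builder runs as root in the VM, so privileged operations that
    # would fail in a normal Nix build, like mount, work here.
    vmTools.runInLinuxVM (runCommand "needs-root-example" {} ''
      mkdir /mnt2
      mount -t tmpfs none /mnt2            # requires root
      echo "built as uid $(id -u) in a VM" > $out
    '')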
I always say we need to build everything. So we also build our documentation automatically, because doing things manually is super evil and things get out of sync. Our documentation is built continuously, and every night there's a deployment to our website, to make sure that our clients always have up-to-date documentation.

One thing we have been using Hydra for a lot, mostly since this year, is benchmarking, because we started a dedicated benchmark team — which is very important for a database, because you want to know when you introduce a performance regression. So we introduced a lot of benchmarking in our builds. Benchmarking is tricky when you have a build farm, because you want to run in a consistent environment. What you cannot do is build other things at the same time, because that would influence the results — there would be I/O and CPU use from other build processes. So you need to make sure these builds run in a consistent way. We solved that by having a subset of our build farm in Hydra that allows only one build at a time. But the Hydra scheduler was kind of dumb — I explained that yesterday as well — and that caused a lot of contention on the build farm, where the normal builds wouldn't go through because all the benchmark builds were waiting for a build slot. That's one of the reasons we improved the scheduler in the new Hydra queue runner, so that we can run these large-scale benchmarks as well.

When you do benchmarking, there are a few things you want to take into consideration: the data set that goes in — data sets of 1 gigabyte, 10 gigabytes, 100 gigabytes — but also the software: which platform, which version, which type of benchmark you want to run; and also what type of CPU to run on, how much memory, how much storage, and what type of storage. These kinds of combinations are really nice to program in Nix, because we have the language and the tools to describe all of these things: NixOS for the system, Nix for the versions — the data sets are basically just defined as a list — and NixOps to deploy new instances on EC2 for machines we could not realistically buy ourselves. We have benchmarks that go against a terabyte of data, but we don't have a build machine that fits that. So what Eelco developed is the Hydra EC2 provisioner: given a certain feature that we need, like memory or storage, it deploys a new machine on which the build will run. And it's really nice, because now we can run benchmarks on really big jobsets, which we couldn't do before.
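To give an idea of how such combinations can be described, here is a sketch — mkBenchmark and the parameter values are hypothetical — that turns the dimensions into one Hydra job per combination:

    let
      pkgs = import <nixpkgs> {};
      lib  = pkgs.lib;

      # The dimensions of the benchmark matrix.
      datasets = [ "1g" "10g" "100g" ];
      versions = [ "4.0" "4.1" ];

      # Hypothetical stand-in for the real benchmark derivation.
      mkBenchmark = { dataset, version }:
        pkgs.runCommand "bench-${dataset}-${version}" {} ''
          echo "placeholder for the real benchmark run" > $out
        '';
    in
    # One job per combination, e.g. bench-10g-4.1.
    lib.listToAttrs (lib.concatMap (d:
      map (v: {
        name  = "bench-${d}-${v}";
        value = mkBenchmark { dataset = d; version = v; };
      }) versions) datasets)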
I started at LogicBlox in 2011, and we started out with three Linux machines; at some point we got a Mac. But I was located in Utrecht, and our office is in Atlanta. So you get the usual problem: you buy hardware, and hardware dies — I hate hardware. We needed IT people to respond quickly, and of course, like at any company, our IT department has a lot of other things to do, so the response time was pretty slow. After about two years we decided to run our whole build farm outside of our on-premise network, and we moved to Hetzner. Aszlig made the Hetzner backend for NixOps, and I'm so thankful that he did, because it's awesome and has saved us so much work. Nowadays we have 21 machines at Hetzner, plus two Mac OS X machines — not at Hetzner, because they don't do that, but at an American company called MacStadium, where you can rent Mac hardware and pay for the hosting. It's not cloud, but they usually deliver within two hours if you need extra capacity, so it's actually pretty awesome. And it's much cheaper than EC2: of course we could run 24/7 on EC2, but then our bill would skyrocket, and it's already pretty high, so that's probably not a good idea.

To show you how the build farm grew: this is from the beginning, when I joined LogicBlox. This graph shows the number of builds — our Hydra is called Bob the Builder; if you have children, you know what that's about. You see a steep climb: in the first year we went from zero to about 30,000 a month — yes, it's per month, good to know. Currently we do about 1,200 builds a day with Hydra. More importantly, as I said, we do a lot more benchmark builds since this year. You can see the spike there, because these benchmarks typically run really long, especially the bigger jobs. I think it tripled the number of build hours we spend, if we had run it all sequentially.

OK, so now we've got all the builds, the build farm, we're testing everything — really nice. Now we need to start deploying. For those of you who might not know exactly what NixOps is: it's basically a tool to deploy a set of NixOS machines, and that can be to anything — to the cloud, or to on-premise machines. It's based on Nix, uses the Nix language to describe the deployment, and uses the NixOS module system as well. So it has an expressive configuration language, which is nice and composable. And it has — or at least allows — a separation of the logical and the physical aspects of the deployment. That means that on one side you describe what is deployed to a system: I want a running LB instance, a database, a web server. And on the other side you describe where to deploy it: to EC2 with this instance size, or to VirtualBox. That's useful for a lot of things, like development, but also if a client needs to move to a different cloud provider — you don't want too many changes then. So that's how it looks in a picture.

The typical example of NixOps looks like this: here's the NixOS home page, and I want to deploy it to EC2. Well, it turns out that when you do a lot of deployments, people start copying this part, making a copy and changing it — and of course code cloning results in all kinds of inconsistent deployments. For a lot of the client applications we deploy, we want all kinds of different versions running, some with a smaller data set, some with a bigger one, and there's also the variability between production and development environments. So our deployments nowadays typically take arguments: the deployment is a function that captures this variability. Things like which AWS account to deploy to, which region, which instance type, whether the batch jobs should be enabled — during development, say, you don't want the batches to trigger — and whether it's a production system, because we are stricter with production systems and I want to make sure all the monitoring is set up correctly. These kinds of things you can cover with arguments, and I think that's very nice, because it declares what the variability between the different deployments is, without copying files around.
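A sketch of such a parameterized deployment — deployment.targetEnv and the deployment.ec2 options are real NixOps options, while the argument names and defaults here are made up:

    # network.nix -- the deployment as a function over its variability.
    { region ? "us-east-1"
    , instanceType ? "m3.large"
    , production ? false
    }:
    {
      network.description = "example web server";

      webserver = { config, pkgs, ... }: {
        # Logical part: what runs on the machine.
        services.httpd.enable = true;
        services.httpd.adminAddr = "admin@example.org";

        # Physical part: where it runs.
        deployment.targetEnv = "ec2";
        deployment.ec2.region = region;
        deployment.ec2.instanceType = instanceType;
      };
    }

The arguments can then be set per deployment with nixops set-args, for example --argstr region eu-west-1 or --arg production true, instead of copying the file around and editing it.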
So, do we use NixOps? Oh, yeah. We have networks — clusters of up to 54 machines acting as one functional unit. You can imagine: a database wants to do replication, all kinds of clustering. Our biggest network is 54 machines. I dug into the logs to see how many times people did what. There were 100 nixops create calls and 88 deletes, which basically means 100 new deployments and 88 thrown away. People ran nixops deploy 1,700 times, which can mean anything from a configuration change to a full-scale new deployment. And 300 environments have been destroyed — that kills the machines, but it can also mean you want to redeploy from scratch. And this is done by 40 different people in the company.

To give you an idea of how many servers we run with NixOS and NixOps: we have a lot of internal infrastructure — the build farm runs at Hetzner, we also have some local on-premise servers, and on EC2 and Google Cloud we have about 50 machines. For our clients we have a whole lot more: last I checked, three days ago, it was 550 machines deployed for clients. That includes the big networks of 50 machines, so it doesn't mean we have 550 deployments, but it's quite a lot of machines. Most of our deployments are static, with a fixed number of machines. But of course, when you use the cloud, you want to do dynamic scaling. We have a system to offload work for applications, but also for developers, where they can send off work to be done, and it runs continuously: at any time of the week there are anywhere from zero to 1,500 machines running concurrently. So when Eelco showed that graph of unique IP addresses — that was probably me, or at least part of it.

Maybe I'll talk a bit about how we organize all these deployments, because there are 40 people doing them. We have different clients, and we don't want everybody to have access to all the systems, so you want to separate people and groups from certain deployments. Also, operations want full control of the production environments, so we can never touch those. Basically, we have two deployment machines — there are actually three, but two main ones — one for development and one for production, with a different user account for each client, which you can sudo into, at least if you have the rights to go to that client's systems and deploy there. That's how most people work, but there are some issues with it. I mean, go tell a salesperson to SSH into a machine and then run some kind of script — it's kind of scary for them, so they don't really like that so much. And there's another problem: NixOps stores the credentials for the cloud you're using in the home directory of the user running NixOps — for EC2, for example, that's the EC2 keys file in the home directory. That means that anybody with access to the user account can just grab these credentials, and that is of course a bad situation, so we want to improve that. But even worse: we have this nice language to describe our deployments, and you would think that we have super consistent deployments.
But what actually happens is that people are inconsistent in, for example, how they check out files and which files they use — or suddenly they share a checkout of a repository between two deployments, and the deployments start to interact, because people are not disciplined enough to use the correct processes. So you have a nice language to deploy your stuff, but you can still get incorrect and inconsistent deployments. A second thing: I showed some statistics about how many people were deploying, and it's actually kind of hard to figure that out. NixOps logs every command you run to the syslog, including who is doing it, so it is possible, but it's still not very easy.

So we — well, actually one of my colleagues, Osama, from Tunis — are going to work on the NixOps dashboard, a web UI for NixOps, which will make it easier for, say, our salespeople to spin up environments, but will also mitigate a few of the problems we had with NixOps. It will be open source. I'm sure that in the next few months it will show up, and we'll announce it once we have a usable version; I hope you'll start using it, or at least try it. Again, we want somewhat improved security, by not allowing people direct access to the AWS APIs. We want deeper operational visibility, so that we can see what's going on, because currently we have 40 people going somewhere — for example, when two users deploy to the same deployment, one user just sees that there's a lock on it, but doesn't know what the other user is doing. It would be nice if the web interface simply said: these operations are running, and this user is doing that. And, very important, we want a proper audit trail. By redesigning this, we want to make sure all of that is covered. And again the consistency issue — things like checking out the source code for the Nix expressions to deploy — we want that to be structured too. Providing an API plus web user interface will improve that as well, partly because we'll basically only allow changes that were made in source control to be used.

To summarize — I was a bit quick, maybe, but that's OK; it's almost lunch. We want reproducible, composable builds with traceability. That's very important for me: I really want to know what's going on in the system when there is a problem. We want to know what changed, and for which reason, and we want to be able to look that up efficiently — and I think Nix and Hydra give us that. We want reproducible system configuration: no more changing files on the servers like we did before. And we want to make sure we don't click all kinds of infrastructure together manually in the AWS console; we automate provisioning and create reproducible networks as well. So I want to say: Nix helps us a lot. The fact that you are here, so many people — it's awesome. And the NixOS community has given us a lot, like the Hetzner backend for NixOps, which saves us so much time. LogicBlox is thankful for that, and that's why we try to give our changes to Nix, Nixpkgs, Hydra, and NixOps back to the community.
But LogicBlox also supports the community infrastructure: the fact that we have such a reliable binary cache nowadays is fully paid for by LogicBlox. It's hosted on EC2, on S3 with CloudFront, which gives us reliable caches and makes sure that people who deploy around the world get decent speed — that was an issue before, when everything was hosted in the data center of TU Delft. I'm very happy that they're so supportive of all this. So this is my last slide. Thank you for listening, and I hope you have some questions.

OK, we have a bit of time for questions.

I'm curious if you ever experimented with continuous deployment — taking outputs from Hydra and then deploying them immediately with NixOps?

Yes, we actually do that. Let me go back to one of the slides, because I probably didn't mention it clearly enough and kind of skipped over it. This is the expression that shows how to build a client application. Usually we work with fixed releases: every month we do a release that is also double-checked manually, on top of the automated part. You can pass it a version identifier — that's the default value described there at the top. But what also happens a lot is that people build their applications against the integration build — a nightly build, say, or the continuous build. They can configure that in Hydra by changing the input from a string to a "previous Hydra build" and pointing it at the correct build. That build will then be used automatically, the application will be built, and the result is immediately deployable with NixOps. So, yes.

A follow-up question: do you see any way that NixOps could help with orchestration? Right now you define all your services and deploy them with NixOps, but there's some binding between which machines get which services. Do you see an opportunity to not have to specify that, and do some smart allocation of where the applications run?

I think that's more in the space of Disnix, which actually does that. NixOps is really about machine configuration, and I don't think it's necessarily a good idea to implement such orchestration in NixOps itself. I think it's a really good idea to look into the work Sander has been doing on Disnix, because that is specifically designed for such a scenario.

Thank you. Could you go back to the test download slide? What is that actually downloading? Is it waiting for a commit to happen, triggering a Hydra build, then immediately uploading something and trying to download it again?

Well, it doesn't really do the downloading, in the sense that the package is already available; but it does do the unpacking. The function name might be a misnomer, because it doesn't actually go to the network. It basically takes what comes out of our integration or continuous build, extracts it, and runs basic tests on it — in this case, on Fedora.

OK, have you considered having Hydra do an upload to an HTTP server, just to know that you've got the full flow working?

It's kind of tricky, because it's such an impure thing, so we try to avoid doing that.
And you typically have checks anyway, for example with Pingdom, to see if you can actually download it, so there isn't necessarily that much of a need for it.

I have a question about Hydra. Have you had any thoughts about making it more declarative — declaring the projects in Nix, for instance?

Yes, we've considered it, but we never got around to implementing it. It would actually be nice, because then you could track changes over time. Currently, Hydra doesn't track the jobset-level specification of the inputs over time, though it does track the individual revisions that have been used. So it would indeed be a nice feature to make Hydra more declarative. But there's a downside, of course, in terms of usability: it might be harder for, say, our consultants to make a quick change if they have to do it in code rather than in the user interface. So that's a trade-off. There's one there.

You mentioned dealing with your build or deploy users and everything that goes on there. We have similar issues — we run Ansible and the like — but the same problem exists: keys, SSH keys, et cetera, are just right there for the taking for users on the machine. Have you thought about how to deal with that? It sounds like the web UI helps, because it pushes you away from the machine, but I don't want to use a web UI; I want to use my command line.

So the web UI will actually be an API, so you'll be able to develop tools that call it as well. But you're right, the issue is still there. We'll have to look into solving the credentials problem not just with the UI, because that's just one layer, but maybe with some other key management system — I don't know, those people at HashiCorp have some interesting stuff.

Hey, it's not a question, more of a remark: maybe before being very ambitious with Nixifying jobsets, we could do a Hydra CLI properly, and from there be more declarative, without taking the drastic step of refactoring everything.

Yeah, I'm not sure that's a good idea. I think it would be better to focus on making it more declarative, because with a CLI you'd still have the same issue as with the web interface — it's basically just a different vehicle for imperatively changing things. But maybe; we could consider it. And if you ever want to contribute to Hydra, feel free to.