One, two. One, two. Cool. Let's start sitting down; we're gonna get started. Look at us, improving day over day: no projector problems. It's pretty exciting. Kinda. So, show of hands, who came to karaoke last night? We're gonna show pictures and create FOMO for 2025, because you're not gonna get this for a full year. Another quick show of hands: who came to SCaLE for NixCon? Holy shit, wait a second. If so... I'm sorry, Elon asked us to ask this because he wants to know. We need to take a picture of that. Thank you. Next year we're gonna be doing NixCon over there, and we'll let SCaLE co-locate with us. I'm kidding, they were lovely. All right.

So welcome to the second day of NixCon North America, for the very first time. This day is gonna be a mix of talks from a lot of awesome people and an unconference. We're gonna be testing this out, obviously, like we've been testing out everything that we've been doing up until right now. Unconference, if folks are not familiar with it: kind of like how Zach ran lightning talks yesterday, you're all gonna come up and give a topic that you want to talk about, Zach's gonna do some AI brain thinking in his mind about how to allocate different tables and rooms and areas for you guys to have these discussions, and then we're gonna split off and do that. Which I think is pretty exciting; it actually creates a lot of good conversation. So thank you all for making it on time. I'm sure there's gonna be a bunch of people that are late, but that's reasonable, because a lot of people had fun yesterday until 2 a.m. doing karaoke. And with that, I'm gonna hand it over to Zach. Yeah, I don't know about 2 a.m.
But we had fun. Yeah, so: Michael, Stonkey, legends. Camille, legend. Ron got serenaded but didn't sing anything, because he chickened out. Yeah. So I'm not sure I have anything else to really add. I'm excited for the talks today. Our first speaker is Xe. With that, I think we'll go ahead and get started.

Yeah, maybe I'm wrong. Can you go back to the Sessionize thing? Yeah, so this QR code is for Sessionize. This is where the whole schedule is, so you can see which talks are coming up, yada yada yada. You can favorite the ones that you absolutely don't want to miss, that kind of thing. So that is our portal for the whole event, and that's definitely useful to have around. Yeah, you can move on now. It helps us, first of all, get a good understanding of who came here and what went really well, and secondly, what the hell to do next year, you know, karaoke-wise. Only positive comments; I don't want to hear anything but that. Everything else, feel free to be yourself, that would be great. And with that, we're gonna kick off.

All right, one sec. Quick show of hands; I don't know if there are slightly different people coming in for today. Who showed up today but didn't come yesterday, just out of curiosity? Okay, that's really interesting. Who hasn't... who has only been using Nix, not at all or just in the past year? Whoo, I love that. Wow. Thank you. And Zach has a final note. Yeah, so somebody... we found a pair of sunglasses at karaoke last night. So if you forgot sunglasses, like, at 9 p.m. at a karaoke bar, come find me; otherwise I will donate them to a person in need. Cool. All right, have a good day two. And yeah, let's bring up our first speaker. All right, I think we're in business. So: hi, I'm Xe
Iaso, and today I get to talk to you about one of my favorite things: Nix. Nix is many things, but my super hot take is that Nix is a better Docker image builder than Docker's image builder. As many of you know, or have come to know, Nix is a tool that makes it easy to build packages based on descriptions written in its little domain-specific language, which, for reasons that are an exercise to you, the listener, is also called Nix. This Nix package can be just about anything, but usually you'll see it used to build custom software packages, your own Python tooling, an OS hard drive image, or Docker images.

If you've never used it before, Nix is gonna seem a little weird. It's gonna feel like you're doing a lot of work up front, and at some level it is: when you're using Nix, you're actually doing a lot of the work up front that you would do in the future anyways. I'll get into more detail as this talk goes on.

But who is this person, you might ask? I am Xe Iaso. I am the senior technophilosopher at fly.io, where I do developer relations. My friends and loved ones can attest that I have a slight tendency to blog a bit much, and I've been using Nix and NixOS across all my personal projects for like four years. I live in Ottawa with my husband. It's still the morning, and I know we're all waiting for that precious bean fluid to kick in; I've got my americano right there. Let's get the blood pumping with a little exercise: if you've seen my blog before, can you raise your hand?
Oh wow, that is way more than I thought. Okay. Raise your hand if you don't know what a Nix is. Okay. Raise your hand if you'd be willing to accuse yourself of being a Nix or NixOS expert. Yeah, that's just about what I expected. All right, you can lower your hands now.

So, just to set expectations: this talk is a little bit more introductory than you might think. There's a mixed audience here. There are some hardcore users, some people who have never heard of this before, and a bunch of people in the middle. So I want this talk to be a bridge between the, you know, complete beginner and the super expert, so that we can all understand what Nix is, why you should care, and where it is needed the most. If you're a super expert, just keep in mind that this is going to show you how bad the state of the world is, and where our powers are needed the most. Today I'm going to cover what Nix is, why I think it's better than Docker for making Docker images, and some of the neat second-order properties of Nix that make it so much more efficient in the long run. So, I said that Nix is a package manager, right? Well, it's kind of a bit more.
It's a package manager, a language, and an operating system. And it's kind of a weird balance, because it was named by computer scientists, so they named it all Nix. But you can use this handy diagram, inspired by the holy trinity, to split the differences. You use Nix the language to make Nix the package manager build packages, but these packages can be anything, from individual bits of software, to configuration files, to NixOS images.

Of course, this is compounded by the difficulty of adopting Nix at work, if you have anything but a brand new startup or a home lab where you're willing to tear everything down and reinvent it from scratch. Nix is really different from what most developers expect, and that makes it difficult to cram it into your existing CI/CD pipeline. I really don't think that this is sustainable, and I'm afraid that if there's not a bridge like this, Nix will wilt and die due to a lack of adoption in the industry. So I want to be that bridge today, and I want to show you how to take advantage of Nix somewhere where it's desperately needed: Docker images.

So, to say that Docker won would be one of the understatements of all time. My career started just about the time that Docker left public beta and you had to recompile your kernel in order to use it, and Docker containerization has gone from super niche to so widespread that I'd say it's the de facto universal package format for the Internet. Modern platforms like fly.io, Railway, or Render don't even bother doing anything but letting people run Docker images, because it just works out better in the long run for everyone. And this gives people a lot of infrastructure superpowers; the advantages literally make the thing sell itself. It's popular for a reason: it solves real-world problems so elegantly, compared to having to argue with your SRE or sysadmin team to get your packages updated in your local fork of Ubuntu because you want to use a library that's more than two,
er, less than two years old. However, with Docker out of the box, there's just one fatal flaw: Docker builds are not deterministic. Like, not even slightly. Sure, if you go to the internet and pull an average Dockerfile, you build it, it'll probably work 99.99% of the time. That last 0.01% is where the real issues come into play. Speaking as a former wielder of the SRE shouting pager: that last 0.01% always comes into the picture at 4 a.m. Never while you're at work; always exactly at 4 a.m. Ask me how I know.

One of the biggest problems, one that doesn't sound like a problem at first, is the fact that Docker builds have access to the public internet. This is needed to download packages from the Ubuntu repositories, but it also means that when you're trying to go back and recreate an image at a later date, you can't just ask Ubuntu to roll back the repositories to what they were on that day. It's kind of annoying. And oh, remember Ubuntu 18.04? It's going out of support this year. You're gonna have a flag day finding out what depends on that version of Ubuntu when some CI pipeline breaks. Have fun.

Even more fun: if you add packages to a Docker image the naive way, you get a lot of wasted space. If you run apt-get upgrade at the beginning of your Docker build: congratulations, you've just wasted several dozen megabytes. Those extra files of the old versions will be shadow copies and wasted space, which adds up over time, especially with AWS and whatever charging per millibit of disk space and network access. So, fetching things from the internet is kind of hacky, and all of this other stuff. What if we had the ability to just know what we needed ahead of time, so we could already pull it and have it cached worldwide? What if your builds didn't need an internet connection, because everything was already downloaded and in the path for you before your build even started? This is the real advantage of Nix
when you compare it to Docker builds: Nix lets you know exactly what you're depending on ahead of time, and then it can break it into as few Docker layers as possible. This means that when you push updates to your program, only the things that actually changed have changed. You don't need to upload the entire Python interpreter again. You don't need to wait for apt or yum or npm or dnf to copy things just so that you can change a single line of code in your service. And this is why I think one of the best ways to adopt Nix is to use it to build Docker images. This helps you bridge the gap, so you can experiment with the tools you want to use while still having the tools that you have to use, without breaking your existing workflows. Asterisk.

As an example, let's say I have a Go program that gives you quotes from, I don't know, Douglas Adams. I want to deploy that program to a platform that only takes Docker images, like fly.io, Railway, or Google Cloud Functions. In order to do this, I need to do a few things. First, I need to take the program and put it into a Nix package, and then, you know, make sure that it works or whatever. Then I need to turn that into a Docker image, upload it into my local Docker daemon, yeet it into the cloud, and then deploy it and hope it doesn't break. Here's what the package definition looks like in my project's Nix flake. Let's break this down into some parts. This project is a Go module.
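The definition about to be broken down is roughly along these lines (a sketch, not the exact code from the talk's slide: `buildGoModule` and its arguments are the real nixpkgs API, but the flake wiring and version string here are illustrative):

```nix
# flake.nix (fragment) -- hypothetical reconstruction of the package definition
{
  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      packages.x86_64-linux.bin = pkgs.buildGoModule {
        pname = "douglas-adams-quote";  # kebab-case name, as in the talk
        version = "0.1.0";              # the real flake derives this earlier in the file
        src = ./.;                      # source is the current working directory
        vendorHash = null;              # stdlib-only program: no vendored deps to hash
      };
    };
}
```

The narration that follows walks through each of these fields in turn.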
So I'm going to use the buildGoModule helper. This will set up everything you need: it'll set up your Go compiler, it'll set up a C compiler for C dependencies, and if you use anything beyond the standard library, it'll take a hash of the vendor folder of your dependencies. The name of the package is douglas-adams-quote, in kebab-case; the kebab-case isn't required, I just think it looks better. The version is automatically generated earlier in the file, out of scope of the screenshot, and the source code is in the current working directory. And because I don't need anything other than the standard library in Go, because the HTTP server and client are in the standard library, I can just pass null here. If you need external dependencies, you should probably use a tool like gomod2nix; I linked it in the description at the end of the talk.

Now that we have a package definition, you can build it with nix build .#bin. That makes Nix build the bin package in your flake and put the result in ./result. And as you see, you do nix build .#bin, you run the command with --help, let's pretend that actually tests the thing, and, you know, we look at it with file and see that it's an executable with some random garbage in the path, because that's how Nix makes things deterministic.

So now that we have the package, we need to put it into a Docker image. And there's this neat family of helpers in the standard library called dockerTools. dockerTools is a bunch of wrappers that give you different opinionated ways to put Nix packages into Docker images. There are two basic ways to use it: making a layered image and a non-layered image. A non-layered image is the simplest, easy way to do it. You basically take the program, all of its dependencies, any additional things like CA certificates, you put it in a folder in a single layer, and you push it up and pray. This does work, but it really doesn't let us take advantage of a lot of the unique
properties of Nix. Making any change to a non-layered image means that you have to push all of the things that haven't changed. Nix knows what your dependencies are, so it should be able to take advantage of that when building a container image. Why should you have to upload new copies of glibc and Python over and over? It just doesn't make any sense.

Nix also lets you make a layered image. A layered image puts every dependency into its own image layer, so that you upload only the parts that have actually changed. Made an update to the webp library to fix a trivial bounds-checking vulnerability, because nobody writes those in memory-safe languages in anno Domini 2024? Yeah, the only thing that you need to update is the webp library layer. It's great.

The reason why this works is that there's a dirty hack at the core of Docker that nothing can take advantage of. Docker is actually a content-addressed store, and the ordering of layers in an image is added on in post. However, if you do this with the normal Docker build flow, you're never going to be able to take advantage of it, because it just doesn't expose the right hooks to really use it. You know what does, though? Nix. A layered image means that every package is in its own layer, so glibc only needs to be uploaded once. Until we find yet another trivial memory safety vulnerability that's been ignored the entire time it's been on this planet, so we need to have a flag day rebuilding everything to cope.
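The layered-image build the talk walks through next can be sketched roughly like this (`dockerTools.buildLayeredImage` is the real nixpkgs helper; the package and binary names follow the talk's hypothetical example rather than the exact slide):

```nix
# flake.nix (fragment) -- sketch of the layered image described below
packages.x86_64-linux.docker = pkgs.dockerTools.buildLayeredImage {
  name = "douglas-adams-quote";
  tag = "latest";
  # "Drawing the rest of the owl" is one line: run the quote server.
  # Every runtime dependency (glibc, tzdata, ...) lands in its own layer.
  config.Cmd = [ "${self.packages.x86_64-linux.bin}/bin/douglas-adams-quote" ];
  # CA roots are not included by default; add them explicitly if needed.
  contents = [ pkgs.cacert ];
};
```

Building it with `nix build .#docker` then leaves the image at `./result`, ready for `docker load`.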
That's the best part. So here's what a layered Docker image build for that Douglas Adams quote service would look like. Again, let's break it down. You start by saying you want to build a layered image, by calling the dockerTools.buildLayeredImage function with the image name and tag, just like you would with docker build. Now comes the fun part, drawing the rest of the owl, except it's just one line, because we tell Nix that we want the resulting command to be running the Douglas Adams quote server. And everything will just be copied over for you: glibc and whatever it needs, I think it needs libgcc now, all the timezone data, all of that, copied over for you. You don't have to think about it. It's great. And in theory, if you need to add the CA certificate roots, because Nix doesn't ship them in Docker images by default, you just add them to the contents; in this case, I say contents has the CA certs package. My website uses this extensively, to copy things like the Deno types and tooling.

So when you have this, you type in nix build .#docker, whack the Enter key, and a shiny new image will show up in ./result. Load it using docker load < ./result, and it'll be ready for deployment.

So if you've never seen this before, this is a really neat tool called dive. It is an interactive Docker image explorer that lets you see the different layers of an image. In this example video, I'm going down through the layers on the left, and you can see how each individual package is its own layer. It goes backwards and, you know, deconstructs the image right here. It's so great. If you've never used dive before, I really suggest checking it out, because it'll tell you how badly you're making Docker images. And so that's it, we've done it, and all that's left is to deploy it to the cloud and find out if we just broke prod. But it should be fine, right? So the really cool part about the layered image flow is that it will
work for cases where you have one repository with one service. But that content-addressed hackery does not stop at just one service. If you have multiple services in the same repository, they'll share Docker layers between each other, for free, without any extra configuration. I don't think you can really dream about doing this with normal Docker without making a bunch of common base images, and hackery, and pain, and suffering, and goat sacrifices, and we're running out of goats.

As a practical example, I have a repo that I call x. It's my experimental monorepo, where I have like a decade's worth of side projects, tools, experiments, and other things that help me explore technology. It's also a monorepo for a bunch of my other projects. Yeah, this is a lot of stuff. I don't expect any of you to read that, so I made it small enough that it goes below the threshold for 20/15 vision, so that none of you can read it. Don't read it. Most of this is deployed across like four platforms, and I've been slowly converging on deploying everything as Docker images, just to have some shred of sanity left. But because it's all in the same repo, it's all in the same flake, everything shares stuff. I push updates to Mimi, I push updates to XeDN: it's magic. It's saved me so much time and upload bandwidth, and it's just so great. Take that, managed NAT gateway egress fees. Oh wait, do you hear that? I hear it too.
It's the pedantry alert. Yeah, in theory you really can do the same flow with Docker: you have to put, like, installing packages into different layers, and you can do it, but the problem is that it makes your build steps look like this. Like, sure, you can do it, but you can also write memory-safe C. Installing all your libraries in separate layers invokes the wrath of General Protection Fault, and that's not somebody you want to meet at 4 a.m. Not to mention you have to turn the network stack back on during builds, which means, oh wait, now your images aren't reproducible. Oops. You'd have to rejigger search paths, CGO flags, compiler flags, and goat sacrifices again. It's just a mess. But compared to that, just look at this. This is building a Docker image in, like, four lines, and one of them is a closing curly brace. That's it. It takes care of the details for you, so you don't need to think about what's going on. You don't need to copy glibc yourself; it just does it for you. I love it so much.

But that deterministic reproducibility thing I've been harping on? Yeah, we're going hard into it now. Nix gives you the ability to time travel, effectively, and build software as it was a year ago. This lets you recreate a Docker image at a future date, when facts and circumstances demand, because you had one on-prem customer that was allergic to upgrading, and they ran into a weird issue, and now you need to recreate production from a year ago. So, in theory, when you're writing package builds with Nix today, you're taking from the time that you would have spent recreating it in the future. You don't just build your software, though. When you make a package with Nix and flakes, you are crystallizing a point in time where the entire state of the software universe converges to get your result. It is beautiful. As a more practical example, I've been working on a project called XeDN for a few years. It's my content delivery network backend.
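The one-command build about to be described would look something like this (a sketch: the `github:owner/repo/rev#attribute` flake-reference syntax is real, but the commit hash and the attribute name `xedn-docker` are placeholders, not the exact ones from the talk):

```shell
# Build the Docker image target of github.com/Xe/x as it existed at an
# arbitrary past commit; <commit-hash> and xedn-docker are placeholders.
nix build "github:Xe/x/<commit-hash>#xedn-docker"

# Load the resulting image into the local Docker daemon.
docker load < ./result
```

Because the flake pins every input, re-running this at any later date should reproduce the same image bytes.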
The name is an exercise for the listener. Here's how easy it is to build a version from 14 months ago. One command. That is the entire command. I say I want to build something from the GitHub repository Xe/x at an arbitrary commit hash, and get the XeDN Docker target with a URL fragment. Then I wait for that to run, I load it into my Docker daemon, and I get the same bytes that I had in 2023, Go 1.19 and all. This party trick isn't as easy to pull off with vanilla Docker builds, unless you pay a lot for storage, have repository mirrors, and, you know, more goat sacrifices.

But the even cooler part is that when I hit enter on that build command on my local machine, I didn't actually build a thing. Because Nix has the idea of caching the outputs of previous builds, so that you don't have to do them again in the future. A Nix cache is effectively a safe place to put the output of builds, so that the builds don't need to be redone. If you're building something locally with Nix, that's because it could not find the thing in the cache, so that it could be lazy and not have to do it. There's a popular Ruby library called Nokogiri, and every time it gets bumped by like 0.1 femtoversions, everybody's developer laptops have to spin their fans up to, like, Mach 7 jet mode, building that horrible XML parser all over again. With Nix in the development environments, it would already be built for you, so your MacBook doesn't even have to, like, get warm. It's so great. This is the way to really use the power of the cloud to your advantage. In my experimental monorepo I use a service called garnix to do CI for my stuff, and on every commit, it builds everything, it pushes the built stuff to a cache, and then my laptop can just download it. It's beautiful. I just push button, receive commit status reports. It's super great, because I don't have to think about it, and it was literally zero config in my case. I
I Even have all my homelab machines rigged up so that they're built with garnix so that when I push a configure a change To my configuration repo every night at like 7 8 p.m I don't remember how it changes because it's midnight UTC and daylight savings just happened They pull a new version of the care there config they pulled the stuff from garnix They update themselves sometimes they reboot and it's just great. I don't have to think about it. Everything comes back up asterisk Not to mention I don't ever have to wait for my custom variant of your Sevka to build because building fonts is Apparently RAM expensive on the order of 20 gigabytes. Why? I don't know In conclusion though Nick's is a better docker image builder than docker's image builder Nick's makes you specify the results not the process that you use to get there Building docker images with Nick's is easy to makes it easy to adopt Nick's if you already use docker Nick's makes docker images that share layers between parts of your monorepo And Nick's lets you avoid building code that was built in the fast thanks to binary caches and Then the end you just get normal ordinary container images that you can deploy anywhere Even big platforms like AWS Google Cloud or fly.io Before I get all this wrapped up. I want to thank everyone on this list for their input feedback and more to help Make this talk shine. Thank you all so much and And thank you for watching I've been Z Yasso and I'm going to linger around out in the foyer or whatever the word is in French and If you have any questions, I'll be there, but if I miss you I Email docker image at zserve.us That pings my phone and gives me a notification that I can't ignore until I reply to the email If you want to help work with me at fly.io. My team is hiring ask me for details and catch up with me if you want stickers I Have some extra info linked at the queue what oh did I forget to push it? 
Guess, guess who forgot to run git push before this thing. I was busy setting up my own camera and stuff. I'll just do it live. Let's do it live. Okay, I can't actually see this. Okay, so, when I say I can't see this, I mean I literally can't see this. Oh, git comm... git commit -sm YOLO. Oh. git pull --rebase, or git stash, git... git pull --rebase. Okay, it should be up in like a minute. Yeah, I can take questions.

The question was: is there a practical limit to the number of layers in a Docker image? Yes, and that limit is 128; it's dictated by the filesystem drivers that Docker uses. However, if you get over that limit, Nix will just take the least popular dependencies and make them buddy-buddy together; it'll squish multiple packages into a single layer. Otherwise, it will have the most popular things be in their own layers, so you don't have to upload glibc 50 billion times.

Um, probably 10, but I wasn't trying to optimize it. If I really tried to optimize it, I would have it use an embedded tzdata, and, you know, build with CGO_ENABLED=0, so you don't have glibc. But, like, realistically, glibc is going to be shared between all your projects, so it's effectively zero cost at that point. And having a libc is more convenient than not having a libc, even though nobody can write C safely.

Does the build process of the image require root permissions? Um, I don't know what Nix requires.
I think in theory you can do it with Nix rootlessly. You don't get the turning-off-the-network-stack side effect, but it would be able to mostly reproduce the thing. Yeah, I think it would work. Yes, if the Nix daemon is installed as root, I believe, and you're in the allowed users, it'll happen. If you're not in the allowed users, I don't know, because I haven't actually used Nix in an environment where I'm not in the allowed users.

Yeah, the question was: are you using Docker's image layer caching system, and the ordering? The secret of Docker's image layer caching system is that the ordering is a lie that is just convenient for, like, doing things. It doesn't actually need to exist that way; the layering is just a side effect of the implementation. Nix works around it by creating the layers individually, and the ordering is really just arbitrary, at some level. Yes.

What is the minimum cache storage you need to build from a hundred to a thousand containers? I am not sure; I haven't built with that many zeros. I build 10 to 15 images at most, and I don't have to pay for the storage, so I don't really think about it. You, back there.

The question was: what is the link between the cache and the determinism? Let me scroll back to one of my screenshots of my terminal here. I have a lot of slides. Okay. So you see that Nix store path next to "interpreter" there, where it's like /nix/store and then a bunch of scary letters and numbers? Those scary letters and numbers are a hash of all of the inputs to the build, and that forms the path for the output. It's not quite a content-addressed store, because it doesn't actually check what the bytes are, because you can't really tell what the output is before you build it.
Yeah, it's the hash of all of the inputs. It assumes that things are deterministic, which they mostly are; there's some nondeterminism that sneaks in, because people embed the date that something was compiled, and as we all know, time is the ultimate in nondeterministic automata. But you can't link the wrong dynamic library, essentially. It's difficult to impossible, and this scheme does actually make your dynamic libraries effectively statically linked. Asterisk. Any more questions? That's it? Perfect. Thank you so much.

All right, we have a five-minute break until the next talk. Real quick: if you lost a ring, I have it. If you'd like it back, you can bribe me.

Is this on now? Okay. All right, everybody, let's go ahead and get started with the next talk. So if you're standing, let's go ahead and take a seat. Do I have to be the teacher and make everybody be quiet? All right, we're gonna go ahead and get started, everybody. Please take a seat. If there are people around you standing, try to make room for them; I think there's a bunch of room at the back. Will, go ahead and take it away.

Hi, my name is Will Fancher, and today I'm going to talk about Nix, what NixOS calls stage 1, and how we're using systemd to improve it. Now, a lot of you might know me better as ElvishJerricco, which is how I go on GitHub and every other online platform. It's a little bit of a silly name; you know, it's not like initials or anything like that. Bear with me, I promise
this is relevant. So, I got this name because, when I was in middle school, I was trying to sign up for some online gaming service, and I tried this name and that name, and after a million tries, eventually the gaming service just gave me a suggestion and told me to use it. And I was like, okay, it's spelled a little weird, but whatever. So for a long time I tried to keep it anonymous, where it was a name that only my friends knew; only my friends knew that ElvishJerricco was Will. But eventually I started using it to contribute to open source, and that quickly dissolved. So nowadays this name is effectively synonymous with me, and it highlights something that I've learned, which is that these very small decisions that we make can end up having very influential effects on our lives. And so that's what I want to talk about with systemd stage 1, and what we've done to bring that into NixOS.

So, first of all, what is stage 1? It's terminology that is pretty unique to NixOS. Most distributions and other products will call it something like initrd or initramfs, but we tend to prefer calling it stage 1, because it refers better to the time period during boot that we want to talk about; initrd or initramfs might better describe the particular file format that you're using to represent this concept.

But with that out of the way: stage 1 is the first user-space code that the kernel calls once the kernel has started up. So when your bootloader gets going, it loads up your kernel, and it loads up an initrd archive, and then the kernel unpacks that into a memory file system and runs its init process as PID 1. Its job is to find and prepare your operating system. We do this in a separate stage, rather than in the bootloader, because by harnessing the power of Linux we can do drastically more complicated things with it, and that can include things like disk encryption, or network systems,
or many other things. But the main job is to mount the root file system and other, what I'll call, core file systems. These refer to things like the Nix store, or the /etc directory that we'll populate with /etc files, things like that, that are necessary to start the operating system. Once all of this is done, the operating system can start. That means we find the init process for stage 2, we switch_root to the new root file system, and we exec that as PID 1. Now the actual operating system gets to start, and we leave the realm of stage 1. Oh, also, you do have to do NixOS activation; when that happens depends on whether you're using the old initrd versus the new initrd, and we can talk about that later.

So, what we have right now is called scripted stage 1, and it is effectively just this shell script and these options. This shell script is just this; I think it's a couple of hundred lines long, or something like that. Are we back? Hey, okay, cool. So, yeah, to future speakers: don't touch that particular switch.

So this shell script is the sequence of steps that need to be taken by just about any installation of NixOS, and it is influenced by these other options. The first one, the fileSystems option, gets passed into this stage 1 script, and the stage 1 script will iterate over all of your file systems, particularly just the core ones, you know, the Nix store, the root file system, and all those, and it'll mount them. This script has knowledge of how to do that best: it knows which ones it needs to run a file system check on, and which ones need to be skipped; it knows which ones it needs to modify some of the parameters for. But the main job is to mount those file systems.

NixOS also has a lot of abstractions that get added into the initrd: things like LUKS encryption, networking, Clevis, and a variety of others. These are always implemented in terms of these other four options, things like preDeviceCommands, postDeviceCommands, and so on. These are little shell script fragments: you assign a fragment of shell scripting to one of these options, and it gets interpolated directly into a certain spot in this script file. So the preDeviceCommands option, for instance, gets substituted into the script just before we start udev. The postMountCommands option gets interpolated right after we've finished all the mounting, right? So these things all happen at specific points in time.

So, what is the problem? Well, it's one of those little things. You see, I found this one option in particular that bothered me quite a bit when I first saw it: it's this boot.initrd.luks.devices.&lt;name&gt;.preLVM option. And this one's really weird, because what's happening is, when you set up a NixOS system on an encrypted LUKS device, you have to tell NixOS: does that LUKS device get unlocked before we find LVM devices, or after we find LVM devices?
And that's just a little weird, because it's a dependency for which there's only one way to configure it. In theory this could be arbitrarily complicated, right? You could have a LUKS device on an LVM device on a LUKS device on an LVM device, and so on, as much as you wanted, and insert whatever complicated thing you want in between — and this option will not allow you to represent that, so you would have to do something custom. The other thing is that this whole thing is serial and imperative. It's just one big shell script that does one thing at a time, and it's all written in this very shell-scripty kind of way. And probably the worst part is that it's all custom — there's a lot of custom code. We have hundreds of lines of shell scripting that implement a lot of logic, all done serially, all a little codependent, and it's really awkward to maintain and frustrating to write. So what about systemd? Well, first of all, systemd is PID 1 on a NixOS system. Its main job is to bring up your applications and your services — or rather, I should say, it manages the processes, the mount points, and the devices that constitute a functioning system. Which is to say: when you have NixOS running, it is keeping track of what processes are running, where your file systems are, what devices they depend on, and how all of these things interact with each other. And it does so with a concept called units, which is a declarative concept. There are a lot of different types of units, but the three we're going to focus on are services, devices, and mounts. A unit is a logical construct that exists in a dependency graph that says this unit depends on this unit, and this unit depends on that unit. It allows you to specify things like arbitrary ordering, where you can say this one needs to be after this one, but I don't
care how it relates to this other one over here. So this allows you to make very complicated logical graphs of how you want the system to start up. And what's important is that these things are declarative and parallel, so units can start whenever they're needed, as soon as they're needed, and you don't have to write code that just says do this, then that, then this, then that. You get to put a unit exactly in the graph where it belongs, and systemd takes care of the execution. So if we put systemd in stage one, it comes with a lot of configuration and tools included that we can use to drastically improve the way that stage one operates. Again, as I said, it is declarative, which means we no longer have these large fragments of shell scripting that we have to concatenate into another shell script; we just get to use declarative unit files that declare their dependencies, again with arbitrary ordering and complexity. It is parallel, which means that as the system is booting up, the things that need to start don't have to wait on each other for no reason. This can in some cases — though in my experience not too much — improve the runtime that it takes to get through stage one.
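In NixOS terms, the ordering primitives just described might be sketched like this (the service itself is hypothetical; the unit names it orders against are standard systemd initrd units):

```nix
{
  boot.initrd.systemd.services.my-unlock-helper = {
    # Pure ordering: run after udev devices have settled, and before
    # the root file system is mounted at /sysroot — while saying
    # nothing about how this relates to anything else in the graph.
    after = [ "systemd-udev-settle.service" ];
    before = [ "sysroot.mount" ];
    wantedBy = [ "initrd.target" ];
    serviceConfig.Type = "oneshot";
    script = ''
      echo "this unit sits exactly where it belongs in the graph"
    '';
  };
}
```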
It can make it a little faster. And importantly, it comes with a lot of tools that we can use out of the box. We have so much custom shell scripting in the scripted stage one that we can replace with tools that come straight out of the systemd project, that we don't have to develop and maintain ourselves. So I'll talk about a few of them now. The first is the ability to use rescue and debug shells. When the system goes into a rescue or emergency mode during stage one, you get dropped into a shell, and you get really useful tools like systemctl and journalctl, which let you take a look at what's going on — what unit caused the system to fail to boot — and you can look at specific logs and really use the power of journalctl to get whatever information you need out of the journal. This journal also survives to stage two, which means that once you're in stage two and the system's booted up, you can look through what happened while it was booting in a really fine-grained, detailed way. And then finally, if you are messing around in the shell and trying to fix what went wrong — say something was wrong with the way a file system was mounted — you can mount it yourself and then just run systemctl default, which will cause it to retry the default transaction, so it just tries to go to stage two again, and it'll go straight back into it. We also have systemd-networkd. In scripted stage one we have a scripted networking implementation, which is just the simplest possible network implementation, and it's extremely unreliable. With systemd, we are able to use the systemd-networkd stack instead, which is very reliable.
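For the remote-unlock use case, stage-one networking plus an SSH server is typically wired up with options along these lines (the port, host key path, and authorized key here are placeholders, not recommendations):

```nix
{
  boot.initrd.systemd.enable = true;
  # Bring up networking inside the initrd.
  boot.initrd.network.enable = true;
  # An SSH server in stage one, so the passphrase for an encrypted
  # root can be entered remotely. Key path and public key are
  # hypothetical.
  boot.initrd.network.ssh = {
    enable = true;
    port = 2222;
    hostKeys = [ "/etc/secrets/initrd/ssh_host_ed25519_key" ];
    authorizedKeys = [ "ssh-ed25519 AAAA... user@laptop" ];
  };
}
```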
It also comes with a declarative method, a lot like units, that allows for arbitrarily complex network configurations. And so far this has seemed to be significantly more reliable and easier to work with, while also enabling significantly more sophisticated network configurations. The thing that networking is useful for in stage one is that you might use it so you can SSH into stage one and unlock, say, your root disk remotely — so you don't have to be at the keyboard to enter the password to unlock your root drive if it's encrypted. Or you might have a network file system, or you might do remote attestation to acquire keys, anything like that. Next we have systemd's ask-password mechanism, which is a common protocol for allowing services to ask for a password that can be provided by some kind of response. The scripted initrd uses a number of unpleasant mechanisms: most of the time it's just asking for a password directly on the console, and then if you SSH in or something, you can enter the password by some other means and then kill that process, or use this weird bespoke command that someone developed in order to enter the password remotely. But with systemd's ask-password, the service asks for it using this systemd protocol, and then you can answer it in your SSH session, or you can let Plymouth — the graphical boot UI — implement its own ask-password response, and you just get graphical password prompts for free. All of this goes through the same interface, so no one has to do any kind of special work. Now, another thing that you can do with systemd that just comes out of the box is TPM2 or FIDO2 or YubiKey integration. systemd-cryptsetup, which is how you do LUKS encryption, supports these things out of the box, and all you have to do is use the systemd-cryptenroll tool to set up the LUKS slot for it. The
scripted stage one has some really complicated shell code for dealing with YubiKeys, and it's very unpleasant to work with, so it's very nice to have these things as an alternative that comes straight out of systemd. These also complement UEFI Secure Boot quite nicely. NixOS has a project called Lanzaboote, which can do self-signed Secure Boot for NixOS, and combining that with things like the TPM can be very powerful. Now I have an example here, and I'm going to explain how the setup works. You don't need to understand this — it's going to get complicated and you don't need to worry about it; the point is just to show the kinds of things you can do with this. So on one of my servers, I have a root file system that is encrypted with ZFS native encryption. There's an asterisk there because I don't necessarily recommend ZFS encryption, but I'm using it because I can — knock on wood. This ZFS dataset is unlocked with a key file that is stored on a LUKS volume. That LUKS volume is actually stored on a zvol on the same pool — so you've got a virtual block device which exists in your ZFS pool, which contains a LUKS volume that gets decrypted, and that is then used to decrypt the ZFS dataset. The LUKS volume is unlocked with a combination of a TPM2 and a passphrase, which systemd-cryptsetup supports out of the box. That passphrase is entered over SSH, so I can unlock it remotely over a Tailscale VPN. The SSH host keys and the Tailscale state are stored on another LUKS volume, which is also on a zvol on the same pool, and which is unlocked automatically by the TPM with no password required. This is so that we can share the stage-two Tailscale state and SSH host keys fairly securely, while still requiring a passphrase for the root dataset. So ultimately what we're accomplishing here is that we've got the TPM securing both the SSH and Tailscale credentials, while still requiring me to enter a password to
proceed the boot with the root file system, all protected by the TPM. And I've got a visualization here if that helps at all. You can see the ZFS pool stores everything; the SSH keys are decrypted by the TPM, so that happens automatically; you enter a passphrase over SSH; that combines with the TPM to decrypt the root key to unlock the root dataset. Again, I don't expect you to care about this — this is a really silly situation to find oneself in — but the point is that you can do these really complicated things: systemd both has the tools to do interesting things like TPMs, while also having the ability to specify these orderings and constraints against each other in a really elegant way. So now I want to talk a little bit about the roadmap that we have for using systemd in stage one, because it is an alternate implementation of stage one. First of all, in 22.05 it was added with an experimental note in the documentation, where all you have to do is enable this boot.initrd.systemd.enable option and that'll turn it on. So for a lot of systems,
that's all you've got to do, and that'll make it work. In 23.11 we removed that experimental note and called it stable; we think this is going to work for the vast majority of people. So in 24.05 we would like to make it the default. There are going to be some minor incompatibilities, and in those cases we will try to detect it and automatically fall back to the scripted initrd, but we would really like to try and move this to being the default in NixOS. In 24.11 — like I said, the scripted networking in particular is pretty unreliable, so we're planning on removing that particularly early if we can. And then finally, 25.05, maybe — we'll see how the feedback goes — we would like to remove the scripted stage one altogether. "We" being myself and a variety of other NixOS maintainers, most of whom are in the NixOS systemd Matrix channel. So this is the plan we have for it, the timeline we've been working on — you can tell this has been in the works since early 2022. And — hello, that's weird, sorry. So, to conclude: really, this is all about the small things, right? Stage one is realistically only a few seconds during boot. It's not relevant to the vast majority of your system's functioning, and really just one little option is what caught my interest and got me to start working on this. But in the end it resulted in a massive collaboration between myself and so many other NixOS maintainers who I respect the crap out of. This little thing really led to a huge aspect of my history in open source. So if you take anything away from this talk, it should be: if there's a little thing that you're interested in with NixOS — whether it's a package that you want to contribute, or a suggestion to documentation, or one option that you particularly dislike — then you should go ahead and contribute, because it can have a really big effect. Thank you. My name is ElvishJerricco. I saw you first. The vast majority of options — oh, sorry. Thank you.
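The one-line switch referred to throughout the talk:

```nix
{
  # Opt in to the systemd-based stage one; for most configurations
  # this is the only change needed.
  boot.initrd.systemd.enable = true;
}
```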
So the question is about to what degree we have compatibility: will your existing NixOS config work if you just flip that switch, boot.initrd.systemd.enable? And the vast majority of options relating to stage one will work out of the box. There are a couple that won't; we know which ones those are, and they will automatically fall back to the scripted initrd if you need it. The main ones that will do that are those command fragments I mentioned at the beginning, like postDeviceCommands and preLVMCommands — those we have no intention of supporting in the systemd stage one. It's not exactly that we couldn't. We could create a little synchronization unit — something that just runs that script fragment as a unit, ordered after all of these and before all of those — but really that acts as a choke point that eliminates a lot of the benefits we're after here, right? It puts a choke point in the graph, it makes it less parallel, and it also preserves this imperative, sequential style that we're trying to get rid of in the first place. So we think it's better to instead, when we see you're using that, throw a warning saying: hey, maybe try and figure out how to make this a systemd unit — and then fall back to scripted stage one. Yes, it'll fall back, yeah. Yeah, that'll happen at eval time. So the question is: do we have some way to make smaller unit files? And I'm not sure I quite understand the problem. I see — so the question is: if we wanted to take what we're using to create units for stage one and use that in some kind of other context — is that about what you're asking?
Yeah. That I don't think really exists at the moment. I've heard of people trying to make things like that — I don't think it would really be anything to do with stage one in particular, but I've heard of people trying to make it so that you can use the systemd APIs that NixOS has to manage systemd units on, like, Ubuntu or whatever. So that exists; it's definitely not upstream in nixpkgs, though. Okay, so the question is about bcachefs support, in particular with multi-device and — did you say encrypted? Yeah, multi-device and encrypted bcachefs file systems. That is a tricky subject. The big problem there is that there isn't really a good implementation within systemd for this sort of thing. The problem is that the best way to mount your bcachefs file system is with UUID=..., and that works great if you specifically use the mount.bcachefs helper command. But if you're trying to do anything abstract, it immediately gets torn apart, because util-linux, or even systemd — all of these things will see that UUID=... and tear it down into an actual path, /dev/disk/by-uuid and all that stuff, and bcachefs doesn't like that. So there's a bit of a mismatch between how bcachefs is trying to do this and how everything else in the Linux ecosystem is trying to do this, and it's really hard to figure out how to fix that. The alternative is that you can instead specify things yourself as devices separated by colons, but even that gets a little tricky, because now systemd sees, oh, this begins with /dev, and it tries to wait on that specific device — but that device is one with a colon in its name, followed by other stuff that it doesn't realize is just more devices. So systemd just doesn't have a good way to do this. The reason it works in scripted stage one right now at all is basically luck. Yeah, it's really kind of a tough situation, and
there's a lot of theorizing going into how to fix that, but there isn't an answer at the moment. It would be really good to have some way to detect that; we don't have that yet, but that's something I would like to look into. However, I will note bcachefs is an experimental file system, so I'm not going to feel too bad if we don't have the best support for it. So he's asking about the step at the very end of the scripted stage one where we do a switch-root. The way we do this in scripted stage one is we literally just run switch_root, which automatically changes the root file system of PID 1 to the new one — the real operating system's root — and then, when you're using scripted stage one, it runs your NixOS activation immediately as PID 1. We had to change that up a little bit with systemd, because there's one thing we really want to be able to do here: systemd can serialize all of its state — all of its knowledge about units and stuff — and pass that along in a file descriptor to the stage-two systemd. We really want to keep that, but it doesn't do that if PID 1 in stage two isn't literally the systemd binary. So what we have to do is run activation in a chroot, as a unit within the systemd stage one. That's how we're doing it right now, and the benefit is that we get much better insight, through tools like systemd-analyze, into what was going on during stage one. But yeah, we did have to do that a little differently. Okay, thank you very much. All right, thank you — we'll take a couple-minute break and then we'll have our next talk. All right, folks, let's go ahead and get the next talk started, so please take a seat if you're milling around. All right, let's go ahead and start the next talk. Take it away. So — everyone hears me? I think so. Perfect. All right, louder? Okay, I can try. So hello everyone.
Thank you for having me today. My name is Pierre Penninckx, and I'll be talking about how we can make Nix easier for self-hosting by using module contracts. So first, a little bit about me. My handle is ibizaman — thank you, Will, by the way, for giving me a reason to not need to explain where it comes from. I consider myself a self-hosting and data sovereignty advocate. At Fastly I work as a staff engineer on the next-generation web application firewall product. By the way, there's a cool Fast Forward program for open source at Fastly, and fun fact — I learned while working there that we're the CDN for the nixpkgs binary cache. Thank you. So before starting, we need to go back a little and follow my adventures using Linux. It all started in 2008, when I switched from Windows to Ubuntu. Windows was on the desktop computer at home, and I saw Compiz on the internet — I saw cool things you could do on the desktop, and I absolutely wanted that, so I forced everyone to switch to Ubuntu. And then I really liked aptitude — I don't know if you know that program, but it's essentially a catalog of all the packages you can install, and I literally spent hours browsing through all the descriptions, reading everything, trying things out, and for me that was really fun. So my big takeaways from this part of my journey: I like when things look good, and a catalog is really something I like. But then at some point I went deeper — I wanted to learn more about Linux, and the thing is, sometimes I had issues understanding exactly what the issue was when I encountered a problem in Ubuntu. The documentation is really good, but it still has this layer of magic on top, where sometimes it just hides too much. So I switched to Arch Linux, mainly because I was reading the wiki, and the documentation is pretty good, and I could do everything essentially
without needing to talk to another human, which is good sometimes. My big takeaway from that is: it allows onboarding and self-service, if you have good documentation. Then, some while later, I had this Arch Linux box at home, and I installed a lot of things on it — I was trying everything I could, but it was just too much. I was doing everything imperatively, you know, SSHing in and trying to tweak everything, and when I upgraded the box, I forgot what I did, and it was just a mess. So at some point I figured: hey, there's this Emacs thing I'm using to edit things, and everybody knows Emacs is not just an editor, it's also a full operating system. So shouldn't we use an operating system to configure an operating system? Sure, let's do that. So I used all the features I could find in Emacs — well, look at this. It's essentially an interactive checklist of things I needed to do to install a service on my box. You see at the top there are variables, some come from a secret store, and then you have multiple steps of things I needed to do. So, for example — okay, perfect, you can read it. There's an install step here, there's a step to create a Postgres user, there's a step to generate a file that gets copied over to the server to configure the reverse proxy. And then I wanted to add Keycloak in there to have single sign-on at home — I mean, I'm crazy, I guess, but that was pretty cool. I tried to do that, but I needed to shell out to bash scripts because it just became too complicated. And this thing — I know what you're thinking, but it worked; I was very proud of how it worked. But I came to this — this is like the top level, you know, everything I installed — and at some point I was just copy-pasting everything all over the place, and it became, again, unmanageable. I was doing software work, you know, on the side — I mean, I guess to pay the bills.
So it's not really on the side — this is what's on the side. But there I was using a real programming language, and you have functions in there, and that's essentially what I was missing. The other thing I figured out later — going through all this, and then preparing this talk — is that what I was also missing is one source of truth, which we have in Nix. Because on my machine I was doing stuff, and then I had this file, and they drifted apart — literally two days after I started, all these things drifted apart, and it was very hard to maintain. And I kind of knew about this Nix stuff, so I wanted to try it out. So — I have these beautiful lines. Remember, I like things that look good, and this services.nextcloud.enable = true installs Nextcloud on my machine. Remember all the things I had to do before — and now I can do just this. This enables nginx for you, it sets up PHP and php-fpm, it sets up Postgres — it can even let you choose between multiple databases you want to set up, and it also allows you to choose between different caches. So I was just hooked: from the moment I saw that, I thought, okay, I need to use Nix for everything now. And if you remember all my takeaways, it checks all the boxes. It has good looks, obviously. It has a catalog — and by the way, I learned yesterday about the nix search thing, so I will definitely use that. It has, honestly, I think, good documentation — we all joke about it, and it's not perfect, but most of the things I wanted to do could be done without talking to another human, which again is a good thing for onboarding and self-service. It's obviously a programming language. And it handles the one source of truth pretty strictly,
I think we can say — but I guess it's for the best. But then, that's when the honeymoon stopped a bit. If you remember from the slides I just went through, I was using HAProxy as the reverse proxy for Nextcloud on my box at the time, and I wanted to switch over gradually to NixOS, but then I hit the wall where, inside the Nextcloud module, everything is hard-coded for nginx, and you cannot really choose another reverse proxy. Actually, if you go to the wiki, it tells you: yeah, you can definitely use something else, but just mkForce-disable nginx, and then you're on your own. So, coming new to Nix, that was not a good experience. And so, yes, for the reverse proxy I was using HAProxy, but what about Caddy, Traefik, etc.? Another random thing I noticed that we hard-code pretty much everywhere is the path to SSL certificates. I don't exactly know the reason why, but we always kind of hard-code /var/lib/something — I forget exactly where those live — but coming from the software world, I feel like this should just be a variable. And then, I guess, the thing that really bothers me the most is that, like I was saying before, you can choose in the Nextcloud module which database you want to pick, which cache you want to use, but that work is all done inside the Nextcloud module, and there's no way to reuse it. And I thought to myself: well, that's kind of wasted work, in a way — it would be cool to extract that. And by the way, I'm talking about the Nextcloud module just because it's the one I know the most and I know it does a lot of things — I'm definitely not picking on it; to the contrary, I would want the work that's being done in there to be helpful for other people. So, again, this is how I picture the situation in my brain.
There's this core module in the middle — the Nextcloud service — and then you have auxiliary stuff, right? The thing to realize is that, to me, there are a lot of implicit contracts in there. If I just take one example: say you select MySQL. Well, the thing is, how the Nextcloud maintainers set up MySQL is how they want the core to be able to use MySQL. They set it up in a very specific way so that the core can use it in the way that the core expects, and that's completely implicit. Maybe there's a comment somewhere in the file, but essentially that was in the brains of the maintainers — how they set it up to pass the tests and everything — and it's, in a way, all gone as soon as it's pushed to GitHub; you don't have that context anymore. So what I would love the community, in general, to move towards is to get all these contracts out of there — into something like the documentation, or maybe something better than that. As a first step, I feel like we should have this Nextcloud module be mostly the core thing, and then it would branch out to those what I call contracts. Let's take the example of the reverse proxy: the contract would expose a few options that you can pick and choose for the core to use, and then, on the other side, you would have the nginx implementers knowing: hey, when people set these options in the contract, I know that my reverse proxy must work this way. So think of it as splitting the concerns here. And the natural extension of that is that other people can then add more implementations for these contracts, completely independently of what the maintainers of Nextcloud are doing. And you could even do that on your own machine —
I mean, I should not encourage that — I mean not upstreaming it to nixpkgs — but if you have things in your company that cannot be upstreamed, well, you can still do that, and plug and play inside that Nextcloud module, for example. And how I picture this separation of concerns: now you have multiple people in the loop. You have the maintainers of Nextcloud, who use this contract, this abstraction on top of modules that exist. You have all the different maintainers of all these implementations, who did the work to respect this contract. And then you have — very important for self-hosting, but I feel like for pretty much everything we do — this end user, who now has the choice to pick whatever implementation they want for this reverse proxy thing that Nextcloud depends on. And that's, to me, very powerful, and something we don't really have now — or at least it's hard-coded, like I was saying, in various packages here and there. And then, for this reverse proxy, at some point there need to be talks, right, on exactly how this interface must behave and everything — but I'm getting a little bit ahead of myself here. The good news is that these contracts already kind of exist in nixpkgs, if you take, for example, the .enable option.
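The `.enable` convention referred to here is the familiar pattern (Nextcloud's other required options are omitted here for brevity):

```nix
{
  # The implicit contract: true means the service starts on the next
  # activation; false means it won't (and may not even reach the box).
  services.nextcloud.enable = true;
}
```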
It's pretty much everywhere. When you have a service, everybody expects this option to exist already — but also, everybody knows how this option should behave without even reading the documentation. You know that if a service has enable set to true, then the service will start the next time you do a NixOS activation; if you set it to false, then it won't — or it probably won't even be pushed to the box. And then, on the other side, everyone who implements a service knows that this option is expected, and that if it's set to true, they should do whatever is needed to start the service, and if it's false, they don't need to do anything, essentially. That, to me, is a contract. It's nearly an explicit contract, but it's more of a convention, I guess, than something explicitly stated in the documentation — that a service should have a .enable option. And that's where I would like to introduce my project. It's called Self Host Blocks, and the whole reason for this project is to be a proving ground for contracts. It kind of didn't start that way, but it became that. One thing I found cool is the NixOS VM tests, and I use those to enforce the contracts. It implements a bunch of things — a lot of them are work in progress, to be honest — but I actually use it on my own server, and I know of one other person who uses it, so we're two. And to finish up, I'd like to show you an example of what a contract looks like, how it's used inside that project, and why I think that's super valuable. So let's take the SSL certificate generator contract. This is the contract: you have seven options; they have types, documentation, and everything, but we'll go through it with examples.
To see exactly the point I want to make, we need two examples: first, what happens when you generate a self-signed certificate and want to use that; and second, just after, I'll show you how it works with Let's Encrypt. Okay, so first, the self-signed one. You have this block here — these are the options you need to set. You need a certificate authority; I won't go into detail on how you get that, but let's say you get it, and then you set the domain and extra domains options to generate the certificate for those domains. Now let's switch to generating for Let's Encrypt — kind of similar: you set specific options for Let's Encrypt, and then you again have the domain and extra domains you want. But now let's see how it's used. Okay — and, spoiler, this is what I've been building up to since the beginning of the talk. This is where it happens. Remember how there were those multiple people interacting together, and there was that end user selecting an implementation? Well, that is represented here in that let block: the user here chose the self-signed certificate, okay, and it's inside that cert variable. And what's going to appear inside the in block is what the Nextcloud maintainers, say, will be using, and they will receive that as an option through the module system. So here it's just setting up an nginx virtual host, and you see it uses the cert path and key path options, which they know exist because of the contract, and they would set it up as an SSL certificate. They would set the group that the keys should have, so that they know nginx will be able to read those keys. They would say which services need to be reloaded whenever the certificates get regenerated — say, every three months or something. And then — oh yeah, right. So the question was: is this work already happening inside nixpkgs, or is it essentially conventions that emerged? And would it be solidifying what already exists, or is it kind of new work?
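A contract of this shape can be pictured as a shared set of option declarations that both sides agree on. The following is a hypothetical sketch, not the actual Self Host Blocks code — all names are illustrative:

```nix
# Hypothetical SSL-certificate contract: an implementer (self-signed,
# Let's Encrypt, ...) promises to populate these values; a consumer
# (e.g. the Nextcloud module) only ever reads them.
{ lib }:
{
  domain = lib.mkOption {
    type = lib.types.str;
    description = "Main domain the certificate covers.";
  };
  extraDomains = lib.mkOption {
    type = lib.types.listOf lib.types.str;
    default = [ ];
    description = "Additional domains on the same certificate.";
  };
  # Paths the consumer may rely on once the certificate exists.
  paths.cert = lib.mkOption { type = lib.types.path; };
  paths.key = lib.mkOption { type = lib.types.path; };
  # Services the implementer reloads when the certificate is renewed.
  reloadServices = lib.mkOption {
    type = lib.types.listOf lib.types.str;
    default = [ ];
  };
}
```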
So I don't actually know, to be honest, whether something like that already exists; the community is so vast, and things happen in so many spaces. I didn't find anything that does this, but these are the conventions I found, and I think they emerged by convention: people copy-pasting, or copying what works in other modules, until it effectively became a convention. But yes, the goal of this would be to think more about other domains — again, I was thinking of home servers and self-hosting, but maybe other aspects I'm not thinking of — and to codify those domains and how we can make them reusable. And you had a question too? Yes — so the question was, or I guess more of a statement: the Kubernetes folks did a lot of work that resembles this; for example, they have an interface that multiple reverse proxies implement. To be honest, I'm not that familiar with Kubernetes, but that really does sound like the same thing, so yes, we could probably learn a trick or two from there. The next question was roughly what happens when you scale this out, for the SSL certificate generator example. Well, to be honest, this is at a really small scale for now: it's on my own box and, like I said, on one other person's box elsewhere on the internet. So I haven't hit that yet, nor any kind of use of Nix at scale, so I'm sorry, I wouldn't be able to really say what the best way to do that is. Yes — that's actually a good question.
I did it one way in that project, but I'm not sure I really like how I did it. The way is to have a common library of options: for this certificate generator contract, for example, you essentially have an attribute set inside that library with all the options, and when, say, the Nextcloud module says "hey, I want you to give me something that implements that contract," it just reuses that whole block of options and puts it there as an option inside that module. It works; I'm not sure it's the best way to do it, though. All right, thank you very much everyone. All right, we now have a short break. We'll meet back here at 11:30, and my co-worker Tom will tell us about remote builders and substituters, so I'll see you back here in about half an hour. Folks, we're gonna go ahead and get started with the next couple of talks, and after that we'll break for lunch. First up is Tom, and he's gonna talk about remote builders and substituters, so give it up for Tom, and take it away. All right, test, test. Hi everyone, thank you for coming. I'm Tom — you might know me as tomberek. I work over at Flox, doing some wild, crazy things, hopefully. I've been a Nix user for about ten years now, I think — at least; approaching ten or eleven since I first looked at Nixpkgs. I've been doing various efforts in different parts of the ecosystem, either in Nixpkgs or just maintaining things, trying to market it, trying to spread the good word. At some point I'd say I'm a fanatic, but in general, if it's related to Nix, I'm happy to talk about it at length. So yeah, let's get started. One thing I want to note is that today and yesterday we've been doing a lot of introductions, a lot of brand-new concepts, trying to teach people here today.
I probably can't get everyone all the way to the point of using all these advanced topics right away, but I want to explain a little of why these are some of the things that are really interesting and cool — some of the superpowers people might want to leverage — and just knowing they exist is something I want you to get out of this. Obviously these can be complex topics with complex details, and you might need to do more work. All right, so yesterday we talked about the cliff of learning: there's all this stuff about how you've got to learn the language, you've got to learn how to build environments, what are flakes, what's the ecosystem, what is a package. And usually, once you get past that cliff, all of a sudden there are these huge benefits, and two that I really love are the fact that we have substituters and that we have remote builders. So I wanted to talk about them and introduce them. First we have to start with some concepts — I apologize, we have to do some definitional things just to make sure we share a core vocabulary. I copied most of these out of the manual, which has actually improved greatly these days; if you take a look at the glossary, the definitions are in there. So, what is the store? We often think about this as, oh, this is just /nix/store on my machine — that's where the files go, that's where they are. And that is one instantiation of it, probably the most common one, the one you're familiar with. But there's a notion of a store that's a bit more abstract: it's a place where you have data.
That data is immutable, and it can have references to other pieces of data, and the system will keep track of those references, so that if you need one piece of this immutable data — let's say you move it around, or you copy it somewhere, or you need to leverage it in some way — those references are tracked and remembered, and therefore you can move that entire thing from one place to another. There's a sense of integrity that this store has; it's more than just a key-value store, right? These cross-references are kept track of. And there are multiple types of stores. As you start working more with this, you'll see a few more. You might have encountered something like an HTTP store so far — that's what you hit when you look at a cache. You'll encounter other types; we'll go through some of them, but just understand that these other types come with different capabilities. Sometimes it's similar to what you have locally, sometimes it's different. Now, a substituter: usually the way this works is that you're working locally, you want to either build something or copy something or use something, and as an optimization we say, hey, let's go find it somewhere else, so that I don't have to build it and do something that takes a long time. And any of these stores can actually serve as substituters. That's a pretty powerful concept, because now this one object can be used in several different ways. And then lastly, what is remote building? Well, say I have a small laptop, or I have different types of machines in different places.
I want to go build on them; I want to distribute work. Distributing work is a really powerful idea — the more you can do that, the better — and Nix, because it saves all this information about how to build something and what it takes, all that really good bookkeeping, means we can distribute work in a really reliable, productive way. So how can we leverage this stuff? To learn more about this, I'm just going to put some references up here so you can get started figuring out what these are. You can go look at the help for the stores; you can see what operations the store layer allows you to do. There's a reference manual for this that lists the various kinds of store types and shows how you can use them. They're described in this URI format. It's a little bit clunky — ideally you'd have a more structured way to talk about all the different parameters and options available to you — but for the moment it's this query-syntax URI thing, and it does what it does. So there's documentation for that to look at, and you can go look at the various kinds of stores. The first one to talk about is the automatic one, the thing you get when you're using Nix and not really worried about any of this. `auto` will basically do the right thing: if you have a multi-user install, you have a daemon, and it'll connect through the socket.
If not, it'll talk directly to your local file system, or it'll handle cases where something is bind-mounted somewhere. The point being, `auto` is usually what you're always using, and it mostly does the right thing. But we have other stores. The `local` one is very specific: it says, no, use the local thing, don't be too smart. And there's a `dummy` one that basically doesn't do anything and supports a very limited set of operations. So if you want to, say, just run some Nix evaluations of some expression, but you're not interacting with the store in any way — you're not putting new files in, you're not building anything, you're just evaluating — then the dummy store is a fast way to not even require there to be a store at all. Another common one is a store on a remote machine, via SSH — we like to do remote things with that. Another common one is S3; that's where we often put things as well. You can see some examples here of how you'll have other parameters and how you include those options. Again, not the most ergonomic; that needs a little bit of work. Now, some of these stores have different capabilities. Your local store obviously can build stuff. With some of these you can't: you can't really say, hey, I want to go build something in cache.nixos.org. That doesn't work — it would be mayhem if we allowed something like that. And binary caches have a different goal; they're optimized to do something different. All right, let's go a little more into remote builders. This will work with any store that can do building — there's a list of those — and usually it's going to be something over SSH.
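The store types just listed are selected with `--store` URIs. Some illustrative invocations — the hostnames and bucket are placeholders, and on older Nix versions `nix store info` was spelled `nix store ping`:

```
nix store info --store auto                      # the default: daemon socket or local
nix store info --store local                     # force the local filesystem store
nix eval --store dummy:// --expr '1 + 1'         # pure evaluation, no store required
nix store info --store ssh://mac                 # a store on a remote machine
nix copy --to 's3://my-cache?region=us-east-1' ./result   # push to an S3 binary cache
```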
That's where this goes. There's a manual section for this if you want to take a look — here's what that manual looks like. It introduces the idea, talks about how to test it out and configure it, goes through some of the formats, and gives some examples; take that and see if it's helpful to you. It's located at that URL. So, say you have some remote machine. Say I'm on my Linux machine and there's something on my Mac that I want to either build on or push builds to. You can start to test this out: there are command-line arguments and commands that let you test it, and you can also configure it that way when you want to do something dynamically. So you can see here we're talking to a store — ssh://mac, right? Assuming I've got all my hostname resolution figured out, and assuming authentication and SSH connections are all working, this thing will say: hey, thanks, the store exists, here's a little bit of information about it, proceed with life — or it will give you some errors. This is a common place where I think people get stuck trying to use this feature set. So yes, I'll just acknowledge this starts to get toward the advanced side. You're going to have to figure out why it didn't work: is it that SSH isn't working? Is it because the host key is not trusted? Is it because your daemon doesn't have access but your user does? There are all sorts of weird little quirks here. Yes, it's hard — I'm simply going to acknowledge that. So where does the configuration for this go?
One common place you can put this configuration is in /etc/nix, with all your other system-level configuration, and you can use this format in there to specify a bunch of builders. What you basically say is: here's how you get to this store — that's the URI at the front of the format I'm putting up there — and here's the system or systems that machine supports. So it's a way to advertise: I'm going to distribute this particular kind of system to that particular machine. You usually specify some sort of credential — the SSH key file — the number of jobs, and also a speed factor for how fast that machine is and how to weigh between different machines when you're distributing. There are supported features and required features, and you can put in a public key, so that the whole problem of the host key not being known is resolved. A lot of these are actually optional — pretty much everything other than the first parameter, the URI, is optional. Again, if you go look through the documentation, it clearly describes what they are. It's easy to mess up here as well; you've got to read the documentation very carefully. A common one I ran into: your public key has to be the base64 form of your SSH host key. Another common mistake: your daemon user is the one that mediates this, so if you're on a NixOS machine with a multi-user install, you've got to make sure your daemon can actually reach these machines. One pro tip: `builders-use-substitutes` is a configuration option that allows all your remote builders to substitute directly by themselves, without you having to push to them. It cuts down on some of the network transfer in some cases, so it's usually highly recommended. Why do we do this? We like to use more machines and distribute stuff, because sometimes one machine just
isn't enough — it doesn't cut it — and we like speed. Other reasons we might want to use this: we have, say, different systems. I have one machine with one architecture, but I have to develop for and support more architectures, so I want to be able to build for them and test that those builds work. Yes, I could push to CI, but that's painful and annoying: you push, then you try again because you made a mistake, and you've got to wait ten minutes, and you push again and wait for the jobs to get picked up by your runners. What if I could just say: I want to run a bunch of builds on all my remote machines — ready, set, go? Instant feedback, faster development speed, and then, when you know what you're doing, those things are all cached nicely and you can start pushing things with a bit more confidence. Remote builders are good because they make things faster. If I have a large number of builds and they can be split into smaller chunks, I can distribute them — hey, that's awesome. Big build graphs, something brand-new: say I change something very core in some dependency and I'll need to do, I don't know, a hundred builds. Well, let's do them in parallel — why not? Another example: you have a slow local machine, or your local machine doesn't have really good network access, but you have SSH access to something that's way faster, better located, closer to your caches or closer to the internet backbone. So here's how I sometimes use it: I literally just say, hey, I'm working on something, I want to build my default package, but I want to build it with some combination of things — usually the two-by-two combination of systems I'll build just to make sure everything works. It's nice.
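The builder configuration described a moment ago lives in a machines file; a hedged example of the format, with placeholder hostnames and keys (fields after the URI are optional, and `-` skips one):

```
# /etc/nix/machines — one builder per line:
# URI                        systems                      SSH key file         jobs speed features        mandatory pubkey
ssh-ng://builder@linux-box   x86_64-linux,aarch64-linux   /etc/nix/builder_key 8    2    kvm,big-parallel -         <base64 host key>
ssh-ng://builder@mac         aarch64-darwin               /etc/nix/builder_key 4    1    -                -         -
```

The `builders-use-substitutes = true` pro tip mentioned above goes in `nix.conf`, not in this file.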
Um, it's simple When it works, that's the problem All right, so I'll try to do a demo of this And let's see how far we can get and that is way small And we'll see how it works So basically what we're doing here is I just kind of made up a random derivation and said It's gonna be of those four different system types, right? so we got x86 we got Darwin we got Linux we got art64 and My machine is not necessarily configured to do those builds locally I don't have anything special set up to kind of handle them And so instead what's really nice is I say hey, I have these three other machines set up They're connected with a tail scale network, and I got all the keys and configuration has been done and those four builds got distributed out to those places and Whatever they're really simple builds that actually happened really fast They got done and they got transferred back to my machine all that transferring for example of like the builds the builds instructions the Results back to me right this is all depending on these invariants of the store right the store keeps tracks of references It keeps track of what that data is it keeps track of the signatures that might be on these things So all of that is kind of handled for you, and then you just kind of get to reap the benefits So that's kind of nice Actually, this might be a good spot to stop and just say with what we've seen so far. Are there any questions? Yes When so the question is what is it necessary to stipulate auto? That's a good point. I think it's just so that let's say you're trying to like template some command And there's a default that just works and does what the default is right if there was no name for the default You would never be able to specify it Basically Otherwise you'd have to admit it which is like now a conditional of some kind I think next was a user Yeah, so SSH NG is going to be a stand for next generation right? 
It's a different protocol than you had with plain ssh, where you're speaking what's called the daemon protocol. It's basically turning into the default because it's better in pretty much all respects. There are a few slight differences, but generally, if you're going to be doing this, you should just be using ssh-ng by default. We could go into details, but I think it might be too much. Oh — the question was the difference between ssh and ssh-ng. Yeah, so this is for remote building and — oh, sorry — the question was: I have a hundred engineers and I want to be able to do load balancing; can we do this via HTTP? So remote builders are basically just one subset of store types. We can't do builds on an HTTP store because HTTP doesn't really support that mechanism of sending things back and forth — or we haven't implemented, you know, "you do a PUT request and here's how it gets interpreted" and all that. But what you can do is use something like SSH to run the builds, and then the product of the builds, the outcome, comes back to you via HTTP. I think what you might be thinking of is a substituter, and we're going to cover that in a little more detail in a second. I don't know if that answered your question, though. I see — okay. So, yes, to have a remote build farm: one of the abstractions that would be really nice for the remote system is to be able to load balance and say, here's one proxy which proxies for the other hundred builders. We don't have that yet; right now each of your engineers has to list all of them. Say there are a hundred machines.
You can maybe update that file automatically — I'll show you how you could do that — but that one feature you're talking about, I actually want it too: here's your one proxy, and it keeps track of all the machines and has access to everything, and that way the end users only have to do a one-line config, not a hundred lines of machines to keep track of and maintain. But: work in progress. Not that I'm aware of — so the question was, is that supported today? No. I think there are ways you can hack it if you really dig into it, but basically, no. Yes — the comment is that nixbuild.net works that way. Right, so there's a way, with the build-hook configuration, to hook into this system and do custom things. You have to know what you're doing; nixbuild.net has hooked into that mechanism, and a few other people have tried various kinds of scheduling efforts to make that effective. So you can do it, but again, you have to know what you're doing and have explicit support for it, and we've just got to document this better — a lot of this is not as well described as it should be. So the dummy store would be for when you're only trying to do some large evaluation of something, but you're not trying to build anything. Say you need to evaluate some expression, or, for testing purposes, you want to quickly check: do all these expressions evaluate to the same thing? That's basically why we implemented it. You don't need a store for that — you're testing the language and evaluation layer, not the store layer. Most people probably don't need to know about it, but it's interesting that there's one there — the dummy — that has no store capabilities.
It's just kind of blank. Yeah, so the question is: the Nix store has a database component that keeps track of a bunch of metadata — is that required for the store? The answer is yes, at least for the local file-system store; that database is what keeps track of a lot of these things you need. You could remove that database and all your software would still work, but any Nix operations would not work so well. Okay, I'm going to try to proceed a little then — and luckily the demo worked; I'm super happy about that. Okay, so we've covered some of these things. One thing to note: authentication is an issue. I'm just going to gloss over it, but I acknowledge it's a problem you will encounter — I guarantee it. The next issue you'll encounter is signing. Once you start pushing binary blobs around, you want to sign things, and you want a way to accomplish that. There's a mechanism for this; again, it's in there. You sign things with keys: you can create the keys, you can push them around. There's a standard one that cache.nixos.org uses, and during the default install you're basically all saying you trust the NixOS Foundation to run this and not send you malicious code. That's the core trust mechanism. If you're implementing this in a company or in some organization, then you probably need to get more interested in how the signing process works. For binary caches, the easiest way to get started, honestly, is the S3 one: if your user has the credentials for it, you can copy things into it. You just say, hey, I want to push things to some bucket — ready, set, go. If you want to get involved in signing, here are some of the ways to get started: you say, hey, I want to generate a secret key.
Then I want to convert it to a public key, so you have both sides of that key pair, and you sign things. One big issue that looks really small is that little `-r` on the signing command, which says you want to sign all the things in your closure, not just that one store path — a common mistake. You can also verify things; there are operations for all of this. But we want to be able to save our work. All right, substituters under the hood: here's how they're actually laid out, and here there's no database. The metadata of what depends on what is in these files called narinfos; your actual data is put into those compressed NAR files. So let's take a look at one of these narinfos. The way this works is you've got some data about what your store path is, where the actual binary is located, how it's stored, some hashes for the thing, what references it has, what the deriver is so someone can reproduce it, and some signatures. This is just the format — it's not the best format, and there's some work to make it better, but it is what it is. One cool thing to note, though, is that the URL doesn't actually have to be relative, so it could point somewhere else: you can have your narinfo — your metadata — in a completely different place than all of your binary data. I think that's kind of cool. So I came up with a crazy idea: what if we actually put some of these things in different places? If you want to go take a look, I decided: what if I put them into GitHub releases? GitHub lets you put files into your releases, and if you go look at this, I've uploaded basically exactly the same files you saw earlier, and now you've created an HTTP Nix-compatible binary store that just works out of the box. It's kind of fun.
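For reference, a narinfo file — the metadata format just described, and exactly what a cache like the GitHub-releases one serves — looks roughly like this; the hashes, sizes, and signature below are illustrative placeholders, not real values:

```
StorePath: /nix/store/<hash>-hello-2.12.1
URL: nar/<filehash>.nar.xz
Compression: xz
FileHash: sha256:<filehash>
FileSize: 50088
NarHash: sha256:<narhash>
NarSize: 226560
References: <hash>-glibc-2.38 <hash>-hello-2.12.1
Deriver: <hash>-hello-2.12.1.drv
Sig: cache.nixos.org-1:<base64 signature>
```

The `URL` field is the one that may be absolute, which is what makes hosting the NARs somewhere else entirely possible.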
It's kind of quirky and interesting, and it might actually be useful for some people: say you don't want to have an S3 bucket, you don't want to run some server, and here's a company giving us storage for free — I think it's up to two gigabytes or something like that — so, hey, who knows, it might be fun. The point is there's flexibility in the substituter system, and this is just one way to leverage that capability. If you want to play with it, I have it under my GitHub (tomberek); there's a little app that shows you how: you tag your release, then you upload the files, and we have to do a little bit of sed-replacing to make the URLs work the way Nix usually expects them. You can see what it's doing — it's quirky, it's fun, it's interesting, who knows. All right, let's get back to it. Okay, so as a recap: Nix has superpowers. We have these core concepts, we can do a bunch of really nice things, and we want to be able to leverage them and expose these superpowers not just to experts. How do we make it so that those hundred engineers mentioned a second ago all have access to this without having to learn all of it? That means we've got to make the configuration easier, we've got to make the error messages better, we've got to document this better, discuss it better, and expose it to people. If it's just us using it, what's the point? I want to make it so that as many people as possible benefit from this — so that when your coworker builds something, and then you build basically the same thing because you're testing their PR, why are you rebuilding it?
Just use the same bits they just used. There are entire organizations in large companies where what they do is say: hey, we're going to ship today's development cache around to all of our engineers — and they do this every morning; they push it around, it's a 4 a.m. job. This sort of thing is something Nix just solves. We should leverage it and expose it, make it easier and nicer to use. It needs a lot more work — hands down, I agree. This is something that just needs more attention: in the actual Nix code base itself, in our documentation, and in the ecosystem. So if you're interested in these things, get involved — that's my call to action here. A lot of what I talked about today has nothing to do with the Nix language; I don't think I actually showed a Nix expression at all — maybe; I hope not. These are all store-layer things. This is the store layer, it's reusable, and it has nothing to do with Nix the language. In fact, this store layer is usable by other systems, and that's kind of the intent: famously, GNU Guix uses that underlying store system, and we want to make this underlying store system more accessible to other people too. All right — we just covered substituters and remote builds, and we did a fun little trick you can do with substituters to put them in an unexpected place, which I think was fun. Questions? No — so now, if you try to build something, and Nix figures out that it needs to build it and contacts that machine to build it, that machine will go: I've already got it, here you go. So you do get some of that, but it's not, technically speaking, a substituter in the config — it just behaves in a similar way; the outcome is very similar. The question was: if you have a remote builder,
is it automatically a substituter? And the answer is: strictly speaking, no, but it behaves such that you get the benefits as if it was. One more question in the back: do I have experience with Attic, and how does that play into this? So there are smarter stores, we'll call them, and store implementations out there — Attic is one, Tvix has them — and there are tons of additional features you can provide, which I think is awesome. It's just that oftentimes the people implementing them have to struggle with: how easy is it to keep that thing working over a long period of time? What happens when upstream changes something and now we break some part of the protocol? So the protocol has got to be better, it's got to be nicer — again, documentation needs work. My goal is to make those things easier to build, easier to maintain, and then easier to provide those features to the broader world. That's the goal. That's it. All right, thank you, Tom. Let's go ahead and get the next speaker up. Hello — okay, there we go. All right, this is our last talk before lunch, and I just want to remind everybody that immediately after lunch we're doing a Nix State of the Union, which is a keynote with Ron and Eelco and I don't know who else — those are the ones I remember offhand. So make sure that you're back here at — I think it's 1:30; let me double-check the schedule so I don't tell you the wrong thing. Yep, it is 1:30. Be back here on time for that, because it's sure to be interesting. But without further ado, let's let the next speaker take it away. Thank you. Sound check — can you hear me? Yeah, okay, cool. My name is Baddie. I work at Intuitive Surgical, and we make robots. So these are not the sort of autonomous robots.
These are medical devices where there's a person operating the system. This is showing an example of it: on the right-hand side over here are the instruments that actually go into the body cavity. The surgical team makes small incisions, and then the instruments go in. Everything you see here is set up so that the surgeon could potentially be in another room doing this. The goal is to have the best possible patient outcome with as minimally invasive a procedure as possible, and the complexities here — in terms of what needs to be built to make this work properly — are the motivating factors for how we ended up with the stack we're using. There's going to be an example of the scale of it over here, but one of the interesting things is that these tools give you more degrees of freedom than our human wrists actually have. So in this case — is this better? Higher? Yeah, maybe, okay. I'm not sure if that helped give you a sense of what the actual thing we build is, but there are some complexities that come from this: we're a regulated entity.
So the development cycle is sometimes on the order of months to years. In this particular case, there are many embedded targets that need to have their software built, tested, vetted, documentation provided, and software bills of materials generated and checked. These systems are in use for a long time — long after the bits on the internet tend to go away.

In terms of the development cycle, we use a monorepo which, on a single checkout, is roughly 200 gigabytes. There are a lot of artifacts that go in there. We use worktrees; the last one I had was about 30 to 40 gigabytes just as a worktree. A lot of the different components see a lot of churn, some of them less, and there are some external things — these giant software blobs — that have to somehow be passed along through the whole system, which causes a lot of issues. We have some very specific toolchain requirements.

We're at a point where the company has been in existence for 25-plus years. There have been several iterations of the products and several different build systems. The latest one is a combination of Nix and Bazel; both of these have some very specific properties that we're interested in. The big one Bazel has — which I'll talk about a bit more — is granularity. Overall we want to be able to build fast, because then you can iterate fast, you can test fast, and you have overall a better developer experience and ultimately a better patient outcome.

So some of the things we need: we need to be able to reach really deep into the dependency graph of the software we're building and shipping. We need to be able to cache different things, because with big systems, in order to go fast, you have to have different layers of caching.
We have to be able to do this without the existence of anything on the internet: if Google went down, or GitHub, or whatever, we should still be able to build our software.

So this talk is less of a "here is something you can do right now" and more of a "let's take all these different ideas we've seen in different talks at NixCon and show how we're actually composing them into a stack that uses them." As with all software, we have it working, everything's awesome — asterisk: there are a lot of issues.

A quick calibration for me: these are the general Nix concepts that will come up in this talk. Does anyone not know what they are, or is it helpful if I go into them as they come up? Yes? Okay.

So, why Nix and Bazel? With Nix we have Nixpkgs, which has a vast amount of knowledge about how to build pretty much anything, and even if something isn't in Nixpkgs, a lot of the core ideas of how to build it in a reproducible manner are there. You can build things with autotools, you can build things that depend on CMake, some things are just bare shell scripts, you can build things that are just files — it doesn't necessarily have to be an actual build system. There's pretty good support for cross-compilation, which is critical for what we're doing, and — as the last talk covered — there are substituters and caching, which are critical for how we're using this, though I won't really talk about that much here.

Bazel also has really good cross-compilation support across different platforms.
There's a good caching story, but critically for us, it has a very fine-grained build system where each of the different components is more or less content-addressed. Nix has some support for that, but it's a bit more experimental and not quite as mature as we're looking for. So the idea is: take the best of Nix, take the best of Bazel, smash them together, and it works. That's the sort of cake we're trying to bake.

So Nix provides our interfaces to the layers of impurity: anything we need to pull in from the outside world — toolchains, libraries, that sort of thing — we shove into Nix, and then Bazel has an interface onto that which allows us to build, for a particular platform, all the packages it needs and expose them to the Bazel library or application for that particular target. More concretely, these are the different pieces. This is the theme of the next bit of the talk, and I'm going to go into each individually.

Some audience participation: anyone know what this structure is called? Shout it out. A package recipe? Anyone else? A derivation? A function? In a sense, these are all right. We call this particular form of something — a function from a standard environment to a derivation — a recipe, and the reason we call it that is that it has been a good notion for explaining things to our users, who are developers at the company and don't really want to know, or care, or need to care about Nix. It lets them think about how we're composing the different layers together. What's happening here is a function that takes something called a stdenv and produces a derivation. We're passing in the parameters to the mkDerivation function: the pname is hello, the version, etc.
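As a rough sketch — the actual recipe from the talk's slide isn't reproduced in the transcript, so the names here are illustrative — such a recipe is just a function from a stdenv to a derivation:

```nix
# Hypothetical "recipe": a function from a stdenv to a derivation.
{ stdenv }:

stdenv.mkDerivation {
  pname = "hello";
  version = "1.0";

  # No source tarball to unpack: the build is inlined, and ./hello.c is
  # a path in the repository that gets copied into the Nix store and
  # interpolated to its store location at evaluation time.
  dontUnpack = true;

  buildPhase = ''
    $CC ${./hello.c} -o hello
  '';

  installPhase = ''
    mkdir -p $out/bin
    cp hello $out/bin/
  '';
}
```

Calling this function with a concrete stdenv (native or cross) yields a concrete derivation, which is what makes the same recipe reusable across the per-target overlays described next.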
We have some string interpolation where this hello.c is a path in the repository that gets interpolated at build time to its location in the Nix store; it builds this hello application, and then there's an install phase that takes the output and shoves it into the target output. So yeah, that's what we call a recipe.

This is something slightly different. What's going on here? Any ideas? I heard "overlay." Yes, this is implemented as an overlay, but what we're trying to capture here is more or less a database of packages that are available to our systems, which will later be used in different overlays. In this particular case we may have different versions of some library or application — OpenSSL, just to pick on it: you have multiple different versions of OpenSSL. You may want to experiment with a different version at a particular point in time, for a particular version of your hardware that you're experimenting with — say you want to go to market with one thing but you don't know if it actually works, and you don't want to negatively affect everything else you're building that's in production. So this gives you a way of exposing different things, and these are lazy: this is not actually the overlay for aarch64-linux or x86 or whatnot. The way we think about this particular component is as a package database.

Now, I also heard "overlay" for this next thing. Yes — this is what we explicitly call an overlay, or a package set, because for a given platform — and this could be bare metal, embedded Linux, x86, or a MinGW-based toolchain — you want to have a set of packages that are accessible for that target. Overlays are a way of doing dependency injection into the set of packages you have, so when you build the target binary, this injects the different dependencies. In particular, here we have a recipe for something that has different dependencies — systemd, for example. Some targets do not support systemd; some do. Some things — I don't know why you'd want to turn off OpenSSL, but sure, this gives you the chance. The overlay is where you compose these for that specific configuration, and this is how that comes in: when you import Nixpkgs, there's a parameter to that import where you specify the overlay, and you can have multiple layers of overlays that then get sequentially evaluated. And in our case we have cross-compilation support — target A, target B, whatever — which gives us the ability to specify what we're building for.
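A minimal sketch of that layering, with hypothetical package and file names: the "package database" overlay exposes several versions lazily, and a per-target overlay picks one and injects target-specific options before Nixpkgs is imported with both layers:

```nix
let
  # Hypothetical "package database": an overlay exposing multiple versions.
  packageDb = final: prev: {
    openssl_3_0 = final.callPackage ./recipes/openssl-3.0.nix { };
    openssl_3_2 = final.callPackage ./recipes/openssl-3.2.nix { };
  };

  # Hypothetical per-target overlay: dependency injection for one platform.
  embeddedLinuxOverlay = final: prev: {
    openssl = final.openssl_3_0;                   # pin the shipping version
    myDaemon = final.callPackage ./recipes/my-daemon.nix {
      withSystemd = false;                         # this target lacks systemd
    };
  };

  # Importing Nixpkgs composes the layers in order, optionally cross-compiling.
  pkgs = import <nixpkgs> {
    crossSystem = { config = "aarch64-unknown-linux-gnu"; };
    overlays = [ packageDb embeddedLinuxOverlay ];
  };
in
pkgs.myDaemon
```

Swapping the `crossSystem` or the second overlay is what lets the same recipes serve bare metal, embedded Linux, x86, or a MinGW toolchain.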
We are additionally using overlays as an allowlist. One of the issues we came across with the mingling of Bazel and Nix is that, with overlays, you expose your set of attributes in the overlay, but they're mixed in with everything else. So when you're trying to build a particular version of, say, OpenSSL, you want the build to fail if you haven't exposed the right one, or if it's not explicitly set in the overlay. So we do some filtering — as someone was talking about yesterday — to make that work.

At this point, this is our stack: at the bottom we have these recipes, then a package database that exposes all the different versions of a particular package, then a particular overlay for each of the targets we're interested in — and now we need to expose them to Bazel.

This is where one of the interesting pieces comes in. When we expose something to Bazel, we need to tell Bazel which files are accessible in each output. Nix has a great feature for splitting outputs into different store paths, and when you build a derivation you get the first one in your outputs by default — but we often actually want the library files, or potentially documentation, that aren't packaged in the default output. So we need to essentially symlink-join them: this takes all of the outputs and generates a new derivation, a new package, that exposes them. At this point we are able to operate entirely in Bazel.

There are some steps here I'm not really going to cover. On the left there's register_toolchains and the toolchain definitions — that's where we tell Bazel, for a particular overlay for a target, how to actually invoke it and how to expose those toolchains to Bazel; there's some work there. The infrastructure driving some of this is a lot of the work done by Tweag, where you can expose Nixpkgs attributes to Bazel using rules_nixpkgs. At the very top you have your nice, beautiful Bazel library that depends on something in Nix, and it "just works," quote-unquote.

So this is the type of command I ran yesterday, more or less: you have some package, you do a bazel build, you specify the platform, and under the hood that essentially calls down to Nix to build something that generates the output paths. It then goes through the overlay, wraps those in a form Bazel understands, and Bazel has access to the files, which are then linked into the package we're building. By changing the platform we can change the versions and so forth.

This is kind of a convoluted, complicated stack, and you might ask how we ended up here. We didn't start with this. We lacked a lot of the layers between Nix and Bazel: we just had the recipes, with Bazel interacting with those directly. That worked for a while, until we wanted a nicer interface for developers to have with the system. Without the overlay, you would have to specify, for a particular library in Bazel, the exact version of the library you're looking for, and that doesn't scale across the host of different targets we're going for. So — it's nice and beautiful? Not really.

Some things worked really well. If you've never used breakpointHook in your Nix derivations for debugging, I highly recommend it — it's changed my life. What it allows you to do is add breakpointHook to your nativeBuildInputs (or buildInputs), and as the build progresses, if it fails, it pauses and you can directly enter the environment of the build and tinker around in a very mutable and impure way. Having access to the different layers, both for Nix and for Bazel, has made debugging much simpler.

One of the issues: we have many developers, and we have a binary cache that we push all our CI artifacts to — everything that runs on CI builds and pushes to the cache. It's not public, so instead of having developers install directly from Nix, we wrap that up in an RPM or whatever package, and that sets up all the configuration; they just install that.

As we go through onboarding and talk to developers about the development they need to do — sometimes there's a need to add a third-party package or something, and they're not versed in Nix — our job is to make it easier for them to think about both what they're trying to do and how to do it. Some of the terminology we've been coming up with — we stumbled onto the fact that when we talked about "recipes" instead of "derivations," things started to click for them much faster.

Caching is great. We recently had an issue where things had previously been working without Bazel and Nix by luck, essentially, due to impurities on the systems people were developing on, and they started failing in the Bazel-and-Nix world — actually because we were catching a latent issue no one had noticed.

All the rest is meant with love — let's put it out there. I talked about the multiple outputs.
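The multiple-outputs wrapping mentioned here boils down to Nixpkgs' symlinkJoin; a hypothetical sketch of merging a package's split outputs back into one tree that can be handed to Bazel as a single path:

```nix
{ pkgs }:

# openssl in Nixpkgs has split outputs (bin, out, dev, man, ...).
# symlinkJoin produces one derivation whose tree symlinks all of them,
# so Bazel can be pointed at a single path containing every file.
pkgs.symlinkJoin {
  name = "openssl-joined";
  paths = pkgs.openssl.all;   # the list of all outputs of the derivation
}
```

The choice of symlinks over copies keeps the wrapper derivation cheap: it rebuilds only when the underlying outputs change.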
This was a problem until we were able to wrap it up: fundamentally, "wrap" for Bazel is a symlinkJoin.

There's a lot of cross-compilation here. We inject custom toolchains into the overlays; sometimes we want a different toolchain to build the entire set of packages — rebuilding the world. For people who aren't very tuned in to how Nixpkgs works, or very familiar with the ecosystem, that can be a hard thing to wrap their head around. Talking to users about what this means, and why we're doing it this way — having a better ontology to describe the different roles things play, the patterns they fit into, and how they compose to provide the build — helps us tell them what we do.

Let's get to this one. We have an interesting situation where the target outputs are actually not running on a system we have access to, and we can't really copy the entire closure over, so we need to do some interesting things to make it work. Conceptually, Nix requires the interpreter to be present in the Nix store during the build, but that "infects" the output: when you copy it over to the target, it can't be run. So there need to be some phases that split that out or change what that interpreter is — modifications with patchelf.

There's one case in particular that is 20 gigabytes of something that depends on a lot of other things, in such a way that any modification to what it depends on changes this blob. So every time anything changes: new 20 gigabytes, new 20 gigabytes. It's easy to burn through terabytes in a development cycle. We don't have a solution for this yet.

Let's see, how do I say this... Python is great. I highly recommend poetry2nix, from what we've experimented with. There are a lot of different options; poetry2nix is one that has gotten a lot of things right, as far as I can tell. And — yeah, overlays and infinite recursion go hand in hand.

So, I'm actually just going to jump to the end; I want to leave room for questions. We're also hiring — if you love Nix or reproducible builds, talk to me, talk to my boss. I'll be around here. Please talk to me. Questions?

We take an allowlist of the attributes that are in the overlay specifically and remove everything else, and then there's another layer that inspects that. (The question was about how we use overlays as an allowlist of packages.)

I think I saw a hand in the back. What are the expectations for our developers — reading Nix files, or having to deal with Bazel? Most of them just deal with Bazel. A subset, maybe 25 percent-ish, do enter the Nix world.

No, I think they're in the orange sweater, yes. The question was: what does the CI environment look like? Patches to Bazel — we try to push things to the cache as much as possible; there's a lot of remote caching, local caching, and disk cleanup on the fly.

There's a question here. Yes — correct. The question: does Bazel call into Nix, or does Nix call into Bazel? Bazel calls Nix. Why we did that: this started out because we wanted the fine-grained builds that Bazel provides — at the level of a few lines of code — with more of the content over there in Nix. So everything is intended to be pinned by the repository.
You should be able to check out the repository and enter the environment — essentially a shell that provides everything. So we can upgrade Bazel, you can revert Bazel; the developers only need to run our installer for Nix itself, and it works.

So the question is: how do we integrate with these blobs on the Bazel side? Because we want to be able to check out the version of a binary blob by checking out a specific version of our code base, we have to provide it via Nix, and that's where the friction comes in — it's really the iteration on the Nix side. People on unrelated teams can impact this because of how it depends on, say, Python: if you change something Python depends on (which is a lot), or even one of the Python libraries, the way it's structured kind of blows up. As for how Bazel calls it: it's targets, or rules, that specifically say "run the binary this blob provides." It's less of an issue on the Bazel side; the challenge is that it needs to be provided the way we've structured things with Nix.

Alrighty, let's wrap things up there. If you have more questions, meet the speaker outside, or at lunch, or wherever you feel like, but we're going to break for lunch now, and like I said, in about an hour we're going to meet back here for our State of the Union with Ron and Eelco. So come back here for that.

All right, can you guys hear me? I'm testing this new mic — one, two. Cool, we'll get started here. So again: welcome to the second day of NixCon.
We're almost done with it, which is a mix of excitement and being a bit bummed, because this turned out to be way bigger than what we initially expected. To give you a bit of backstory: we started thinking about this in September of '23, about six months ago, and then we said we'd do it, and then we did it — and then you all turned up. So to everyone that was part of the organization team making this happen, the Foundation, the leads: we're just super excited to see this. And the biggest excitement is actually the number of hands that went up from folks who are new to Nix in the last year. Those are the folks we're excited to see here today, excited to come and integrate into the community — and hopefully you can give us that feedback, that fresh set of eyes: tell us how welcoming we are; hopefully we're welcoming enough.

With that said, we're doing a Nix State of the Union. It's kind of a "let's talk about what's happening within Nix" — the different teams and an overall overview. The way we run it is that Eelco and I will talk about a bunch of stuff, and folks from the community will join in to talk about their areas. This is an attempt to give a good overview of everything, as well as show some of the faces that are leading different areas inside the community.

Really, the biggest ask is contributing. Whether you're excited about marketing, or data, or financials, or design, or obviously Nixpkgs contributions or architecture — that's what we want you to know: it's an open door, it's fully transparent, and we want you to talk. So with that said, we're going to start the Nix State of the Union. I'm Ron.
You might have seen me before — I'm one of the board members of the NixOS Foundation and also the founder of Flox. And we have Eelco here: "I started the Nix project a long time ago, I'm a co-founder of Determinate Systems, and I'm also the president of the NixOS Foundation. Next slide."

All right. We're going to have to do this, because this is pretty big: I wanted to say a giant thank-you to everyone who took part in making this happen — the volunteers, the organizers. If you can just lift your hand, I'd love to look at you again and say thank you. Organizers, volunteers — I see y'all, come on, come on. All right, thank you. And huge thanks to the SCALE team. I have no clue how many folks are watching us live — we'll have video recordings after this, so you can come back to it or not; I'm sure it's probably a few million. And again, we have two Nixes in practice, so Eelco and I do this thing — just so you know we're not AI: that's me, that's Eelco, so you don't get confused.

Cool. Let's have a bit of a look at some metrics about community growth. This picture shows the size of NixOS releases — or rather, the number of contributors and contributions. I'm a bit incompetent, so I wasn't able to update this slide for the latest release, but 23.11 actually had 2,100 contributors, so it's actually all the way up here. The Nixpkgs and NixOS community keeps getting bigger, and that's really awesome to see. Next slide.

Also, for people who care about stuff like GitHub stars — I don't — that grew by (I think I screwed up here; if you click one more button — ah, exactly) 15,000 in the last two years. The number of merged pull requests went from 3,800 in May 2022 to 5,700 last month. So yeah, an amazing number of contributions, and they're getting merged.
I mean, we also have a lot of open pull requests, but yeah.

This is a picture that shows the number of repositories created every day on GitHub that contain either a default.nix, a flake.nix, or a shell.nix. One interesting trend here is that you see a lot of uptake of flakes: we have something like 50 repositories — projects — a day starting out with a flake.nix in them. This next one, I think, is from the marketing team; it shows the Google Trends data for Nixpkgs. I have no idea why it's skyrocketed here, but that's great to see — maybe Ron can explain that later.

Okay, next one — and now the scary part: the binary cache. That's sort of the most critical thing our infrastructure team provides. Everybody relies on the binary cache, and it's been around since, I think, 2012 — and since then we've never deleted anything from it. So it keeps getting bigger and bigger. For instance, in September last year it was 346 terabytes in infrequent access — that's the stuff older than a year — and that has since grown to 399. We also have 140 terabytes in standard access, which is the recent stuff. And in that same period — something like seven months — it went from 770 million objects to 293 million objects.

So that costs a lot of money. It used to be completely sponsored, but the company that was sponsoring it stopped doing that. Since then we get some sponsoring of our S3 bill from Amazon — we're very grateful for that — but it's not complete.
Yeah, we can applaud that. But we still have a pretty big storage cost every month — something like 6,000 euros. So we're currently looking into garbage-collecting part of the binary cache. If you think, "hey, I really rely on the binary cache never deleting anything," then talk to us; maybe we can figure something out.

Also extremely useful to us is the Fastly CDN — we would be completely dead without it. It's an extremely efficient cache, with a cache hit rate of something like 98 percent. Next slide: this shows the weekly traffic to cache.nixos.org, which has also been going up quite a bit — we're getting something like 750 terabytes of traffic per week. Next slide: that's something like 1.5 billion requests per week. So we should really thank Fastly for their huge contribution — applause, please.

Okay, cool. So, going on from what Eelco was saying: I think the community has been growing, the usage has been growing, and we all love Nix; we all want to continue using Nix. As you can see, we're trying to make things as transparent as possible — how much data we pass through, and in a second you're going to see some financials. The idea is that, as a community, this technology is not yet fully self-sustaining, so helping in that regard — whether by contributions, or financially, or whatever it is — is hugely recommended and appreciated.

With that, I'll tie into the NixOS Foundation — a little teaser, since I know there are a lot of folks new to the community. What is the Foundation? We only meet once a week, in cloaks, in Eelco's basement.
So other than that, we're a pretty cool group. But in reality the Foundation has two charters: one is keeping the lights on for today and tomorrow, and the other is helping create a basis that makes the lights shine brighter in the coming years and the coming weeks. If you move one more slide, I'll give an example of the financials — oops, the slides are mixed up, one more. One more. Thank you.

Some examples of the Foundation's actions, and the way we operate: we currently have a mechanism where the Foundation does not make technical decisions. The Foundation is there to handle the legal, the bureaucratics, the financials — pretty much the inbound funding, the fiscal hosting, and everything around that. But the Foundation is also trying to build up a muscle and a mechanism to help unblock and facilitate community-level escalations. So the Foundation is, again, not a group of folks hiding in some basement — we really encourage you to reach out to us. There are probably more than ten ways to reach someone from the Foundation: one is obviously in person at NixCon, but there's also Matrix, Discourse, email, everything — and we really appreciate that.

The other thing I would say is that for foundational conversations, there are no barriers.
There's nothing the Foundation talks about that you can't take part in, if you care about it. A lot of times there's an assumption that some things aren't really talked about — internal financials, or partnerships, or decisions being made behind closed doors. None of that is the case with the way we like to operate. If you care about it, reach out; we'll involve you, and you'll be part of it. And also: give us that feedback.

So, some examples from 2023, leading into this year. We'll talk a little more about the S3 grant: we did have a situation where a long-time contributor had to drop out financially. The community made an amazing effort, and we were able to secure funding again from AWS and partners and contributors, plus a long-term effort around that. There's a lot of relationship-building that we do, and there's a maintenance cost to it — we need to keep talking with Fastly so they don't wake up one morning going, "what is this bill going out for, 50k a month?"
And we're like: yeah, no, it's the cool folks at Nix.

Next: the grant and event funding program. We launched that for the first time last year. Nix has a grant program: if you want to do something smaller — because we're not yet a multi-billion-dollar community — like a meetup, or a get-together, or some Nix hack, or buying a sign for a conference where you want to go talk about Nix, we have a form you can just fill in. We try to approve them almost on a weekly basis, and we're pretty happy to do so.

For those not familiar: a lot of the funding coming into Nix comes from a European project called NGI. I don't know if we'll have a chance to touch on that in a bit, but it is a huge program that allows a lot of Nix-related efforts to be funded. Now we're in the US — North America, with lots of North Americans here in the crowd — so if there are projects or programs or grants tied to governments on this side that could be tied into the community and help support it, we'd love to do that. We don't have visibility into everything, and the more we know, the better. So that's really big.

Another thing: last year we started a formal, funded effort for the documentation team — making Nix easier to adopt, bringing in folks who know how to write docs. I have no clue how to do that; there are folks like Zach Mitchell who do, and he's helping lead that effort. And then much more. One click.

Now, what do we care about moving into 2024?
So we care about quite a few things, but there are a few big ones. One is: how do we grow as a community? Growing as a community means we need to be able to bring in more contribution and open things up more to newcomers, but it also means we need to be able to fund efforts. There's only so much scale we can achieve by hoping that everyone here has two hours a week to go and contribute. There are going to be some of us who are crazy and work into the nights — I see some folks laughing; it's you guys — doing this as an unpaid full-time job, and thank you. But there are a lot of efforts we need funding for, and therefore a big focus this year is how we define funding inside the Nix community.

Up until now it's been donations, and you'll see in the financials, which I'll jump to in a second, that we've had a range of roughly 40,000 to 60,000 euros in donations a year. That is not enough to make Nix sustainable, which I'll also talk about. So this is a big topic, because some of that funding may need to come with a transactional relationship. A donation is kind of like leaving cash at our door and saying thank you; but if we want to go bigger, and we want to be able to bring in folks like, for instance, Fastly — or put them on the side as a logo, things like that — what do we feel comfortable with? Those are discussions we should have as a community: decide what we're comfortable with and move forward on those regards.

So that's one. The second one, which we'll be talking about, is empowering teams and individuals. We've been small, we've been hacky — that's let us progress. How do we move from that phase to a phase where folks coming in, or folks who've been in the community for a while, feel empowered to make decisions? I don't want folks to feel like, "oh, should I make that decision?"
"What if there's pushback?" That kind of hesitation raises the bar and keeps people from coming in and making an impact. So that's happening; it also ties into funding, and into teams and individuals, and all of that.

One of the discussions — I don't know if folks have been online recently — is also tied to funding: sponsorship. How does the community feel about sponsorship, and how can we create a policy that is — guess what — reproducible? Because that's what Nix is. As we grow, and as we become more professional and more official, and as we want to grow into a household name for every developer — maybe even be taught in college, whatever it is — we need the external perspective looking at us to be able to say, "I probably know how this decision will look in six months, because there's a policy in place, there's a sentiment, there's an expectation." When we're small and hacky, we don't need that — we're 50 people, 200 people in a room, and we kind of get each other. When we start expanding to communities of 100,000 or 500,000, we want folks to feel welcomed and safe, whoever the person on the other end is. A lot of that has to do with defining things publicly and making them transparent, so people know how those decisions are made and what they can do within our community.

That's a little bit about the Foundation. I'll pause in a minute — we'll do questions at the end. But before that: Tom, can you go back to the financials? Cool. So, we're going to be releasing another, deeper snapshot of this; we try to do it at least once a year. It gives you a look at everything coming into the Foundation and going out of the Foundation. These are the things that are on our books, from AWS down to miscellaneous — this even covers Eelco's protein-bar
So this is basically Eelco's protein-bar costs. But the serious costs that are not on the balance sheet, the ones Eelco mentioned, are things like Fastly and Equinix Metal, and those are really, really big costs. That means that if tomorrow morning, for some reason, one of these sponsors or donors decides "I don't like what's happening in Nix" or "I don't want to do Nix anymore" and drops us, it's concerning. We had that situation kind of come up with the S3 cache. We were able to come together as a community and solve it, but we want to create a baseline — a treasury that lets us become self-sustainable. Thank you.

That's the goal. The goal of the foundation today is to make Nix self-sustainable, meaning we have an income that allows us to operate with a twelve-month buffer: all of you can sleep soundly knowing that your systems are running Nix, NixOS, and Nixpkgs, and that no matter what, everything will be fine for at least twelve months, even if all the funding suddenly drops at once. Do I want 24 months? Yes, but twelve is usually enough. I think we're a strong enough community that we can figure things out in that timeline. So that's what I wanted to say there. Let's skip forward one more. One more.

So, the quick S3 story for folks who weren't here. The sponsor couldn't fund it anymore — they'd been funding it for 13 years, which is insane. Sorry, this is a really great drawing, I'm really good at this. So the sponsor decided they were off, we were notified, and we kicked off an S3 community thread — "hey folks, making this transparent, we're looking for a solution." It took us 14 days. In those 14 days, hundreds of community members came together to think about short-term and long-term resolutions, and dozens offered services from the companies they work for, things like that. That's what it means to be an insanely strong community. Fourteen days is what it took until AWS came in and gave us the funding and sponsorship for 12 months, which allowed us to do what was important: start thinking about the long term. How do we make the S3 situation more sustainable? Okay — do you want to say a few words about the long-term story?

So yeah, the long-term plan right now is that we'll do a periodic garbage collection of cache.nixos.org. That kind of hurts, because we don't want to harm the reproducibility of Nixpkgs and NixOS. In theory, of course, you can always rebuild ancient versions of Nixpkgs from source — except, of course, that those sources might have disappeared. That's why a lot of people still rely on things like source tarballs being in cache.nixos.org. So we want to go about this in an intelligent way; we don't want to just nuke every store path older than two years, for instance.

The current plan is that we'll first delete old NixOS and Nixpkgs releases — probably releases older than two years, except for the most recent version of every release branch. So we would keep, say, the most recent version of the 17.09 NixOS release, so you can at least still use that. The releases we keep become the garbage-collection roots: every store path reachable from the packages in those releases, we keep. But we will also keep all the fixed-output derivations, so we will not get rid of things like tarballs.
So you should still be able to take a version of Nixpkgs from 2013 and rebuild it from source, because the sources will still be there — it will just take a lot longer. And when we delete the old releases, we'll actually first move them to Glacier. So if it turns out this was a really bad idea, or somebody says "hey, I'm willing to contribute $10,000 a month to make sure you don't have to garbage collect," we can still recover that stuff. A little later we'll start doing the same for the actual store paths in the binary cache. That's the plan right now; we'd probably run this garbage collection every six months or every year — we still have to figure out those details.

So, to reiterate: right now we're paying $6,000 a month for the binary cache, and that's on top of the $9,000-a-month donation from Amazon. We have to reduce our storage costs fairly substantially — something like 50% — and make it sustainable long-term, because the cache cannot keep growing forever. So yeah, that's the current plan.

Cool. So — because of the show of hands earlier, I made this slide about two hours ago. If you're new to Nix, first of all, I really do hope you had a chance to build relationships and meet faces today that you'll be able to connect with as you take your next steps. In general, and I know this has been said, the community operates across Discourse and Matrix. Join one of those; you can find most of the folks in the community on there. Message people — don't think twice about it. If someone can help, they'll try; if they can't, they'll find someone who can. There are also teams and leads, which you can find on the nixos.org website; if you're not finding a solution on Matrix or Discourse, feel free to reach out to the teams and leads directly. There's obviously all of us at the foundation, and there's nix.dev for documentation and onboarding and everything like that. Next.

Cool, so I'll take a quick moment there. I've decided we'll do questions at the end, to let everyone flow through so we don't run out of time. With that, I'm going to hand the mic over to Ryan. Thank you, folks.

Hello there, thank you. So I'm going to present a few things about the community. I'm a Nixpkgs developer in the community and also a board observer for the foundation.

First, we have the moderation team, which has seen some growth. We had some departures, and we also have some new members. The moderation team has probably one of the hardest jobs you can have in the community: it is very hard to balance everyone's opinions and ensure that people can be productive in this environment while still feeling safe and heard. I really think it's a team we should support and be very thankful for — and they are very hidden and silent in how they operate. So thanks to them. They're also Nixpkgs contributors on the side; I'm sure you can recognize some of the nicknames, for example from the Python updates or from Mic92 — various contributions across the whole ecosystem. I think we can go to the next slide.

Another big team is the infrastructure team, which is composed of two sub-teams. The first is the build infrastructure team, which handles the machines that build all the derivations — all the packages of Nixpkgs. It's a bit sensitive, obviously: you have a signing key, you have private key material, and you don't want your cached packages to be poisoned. So it's a critical mission for the build infrastructure team to handle those machines — and not all the members listed here are build infrastructure members; for example, Jonas is build infrastructure, Vladimir is infrastructure, and Martin is build infrastructure, sorry. Thanks to them, they've been working on a cleanup of the infrastructure repository — carefully looking at all our machines and saying "we're paying too much for this machine, we should consolidate it into this other one" — and they've achieved something like $700 per month of savings, which is a really nice start in a very short amount of time. It's very exciting to see them at work, and thanks to them.

Then we have the non-critical infrastructure team. The community has many needs: for example, the release managers need a calendar so they can inform the community when the next release is (and you can subscribe to that calendar), and the marketing team might need infrastructure to send newsletters or surveys. Those fall under the name of non-critical infrastructure. It is a much more approachable team that you can join as a system administrator, working with people who use NixOS for NixOS itself. You can see we have plenty of new members there, and plenty of projects available in the GitHub repository you can find here.

And yeah, we have long-standing issues with our CI stack — Hydra, for example; we had the workshop on CI. The scale of Nixpkgs is so big that unique challenges appear, and I would say there's technical debt to clean up, so we have a lot to do on performance improvements. That's probably what we're going to focus on for the next years. I think we can move to the next slide.
I'll pass the mic to Rok Garbas.

Hey. Lots of you have been using Nix for quite a long time, and quite a few of you are new. What you're going to face is that you're going to talk about Nix. That's what I did, and they told me "this is marketing" — so, marketing team. We try to spread the word of Nix around the world. What we do: we cover all the social postings, and we try to do that at a regular cadence. There's the newsletter, which was revived and will be put on the website very shortly. And the website itself — we finally reached the point where we can work on the website a bit faster, and we need a lot of help with it. If you know some CSS, you don't have to do a lot — please help us; that's where we need it. We're also collecting certain metrics.

One metric I promised to explain was this one: why this jump? You barely see it — I barely see it myself. The graph starts at 2012; this here is where the growth started, in 2020, and then the big jump actually happened around April-May last year. I have my theories, but they're not to be publicly shared, because apparently I have strong opinions. No — it's just that Nix got picked up by a lot of social media and YouTube content creators, and that's the spike. Most of you have also heard about this. Now, what do we do with it? I think we're on a good path. We need to provide good documentation — that's why all the work that's gone into it is very much appreciated. And I mean, come on, this graph just looks good, right? We're going up. So thank you.

So, I'm going to do this pretty quickly because we're starting to run out of time — I think we're already over time. This is actually a slide from Silvan, Infinisil, who wasn't able to be here. Silvan actually sunsetted the architecture team a while ago, for a multitude of reasons — you can click the link in the deck and read about it. But Infinisil is continuing this work on architecture, and the general message is: you know that it exists, you know that there's a need, and if there are folks interested in this area, please reach out, come on in, volunteer, and help make it happen. I'm sorry I'm running through it, but there are multiple channels where you can do that. Silvan also holds weekly office hours, which I can only highly recommend.

Okay, thanks — I guess I have to be quick too. Very briefly: we've seen a bunch of teams and team efforts. One issue we have — if you remember the graph, that great huge exponential growth — is that the growth is great, but it means we need some kind of structure around it, because otherwise new joiners arrive, see this huge thing, and wonder: what the hell do I do if I want to help?
So that's one axis we wanted to put effort into, and we've started. We're still working to give things more structure, so that people can know how they can contribute — and not just know how, but have the ability to do it when they want — and also know what's going on, regardless of whether they want to contribute. That's the whole team-empowerment effort: giving some structure to the main groups acting around the different parts of the ecosystem, both the technical parts — Nix, Nixpkgs, NixOS — and the non-technical parts like moderation or marketing, which we just saw.

That also ties into the fundraising aspect, because companies or institutions anywhere might be glad to give money to the Nix ecosystem to get things done, but then we need to be able to spend that money efficiently. And that means we must have some kind of structure for how we give grants to people, or groups of people, to do things efficiently. It all ties into that subject.

That's me again. Okay, so this is probably the seventh exponential graph you've seen — it's always a bit the same thing. This one is the number of Nix contributors per year, from 2015 to now. It's a nice exponential curve; we're all happy Nix is getting more popular. But now we have scaling issues. The original Nix maintenance model was Eelco doing a great job developing it as a PhD student. Obviously that doesn't scale when you have a couple of thousand contributors, so we need to change the way we develop things. Can you — yeah.

That's why, a few years back, we created the Nix maintenance team, whose goal is to improve the experience for contributors: to make sure they want to keep contributing, to bring in good contributions, and eventually to help maintain the thing. Because that's always the trade-off: contributors submit new features, but those features then need to be maintained and reviewed, and you need a good ramp so that people can eventually help maintain things themselves.

The other problem — Ryan mentioned technical debt for Nixpkgs, and Nix is pretty "good" in that aspect too — is that there has been a tendency to just add new features to Nix and not always completely finish them. It's cool, we have something that works: "I have something that works for me, it's in Nix, I'm glad." Maybe it's not completely documented, maybe there are a few edge cases people aren't happy with, but then the motivation to finish is gone. So that's our second big goal: to really finish up all these things — sometimes small things, sometimes big things. I won't say the flake word, because that can be a dangerous thing to say, but we really want to try and finish all this work-in-progress. That's also somewhere we can get a lot of help: there are a lot of small-ish tasks that can be accomplished even if you don't yet know the code. Even if you don't know C++ — I'll be honest, the first time I worked on Nix it was one of the biggest refactorings of Nix that ever happened, and I had never written a single line of C++ before. It took me a while, but eventually we got there. So please do come and help.

All right, I'm going to keep doing this pretty quickly.
So, once a year — we started this in 2020... shoot, it's 2024; we started this in 2022 — we do a survey of the entire Nix community, and for those who are interested, it's the biggest survey we've ever run. The last survey, in 2023, had about 2,500 responses across 47 questions. It's online, it's on Discourse — go and read it. 92% of respondents are Nix users, 8% are not; not so surprising. There's some data here that I'm going to skip past — lots of interesting and lots of very surprising things.

Bottom line, what we're asking for — give me a second — is two things. Very soon, the next Nix survey is going to kick off. One: if you have survey experience, or are just curious, join us — there's a channel on Matrix. If there's a question about Nix you want to see in the survey, advocate for it and we can figure it out. Two: when the survey comes out, it is insanely important that you help us get it to as many people as possible, so the results can feed back into the community and we have data to base things on. Gium kind of leads this — he's awesome, and he also wrote this slide a few hours ago, so thank you to him.

I don't know if we had anything to say about release management, really quickly? It's coming up. Oh yeah — so, about release management: we have 24.05, with the release due by the end of May. During the release cycle we call for a release manager and a release editor. The release manager role is mostly technical; the release editor mostly edits the release notes — making them cute, making them cool, so that people who read them get excited about the release. And you can help: you can join and help the release community ship NixOS.

The last important ceremony of the release is Zero Hydra Failures. This is a ceremony where we call on everyone, during the beta phase of NixOS 24.05, to watch for new PRs that fix packages and fix NixOS tests. Everyone tries to merge them and get the number of failures down, and this is how we ensure that the stable releases of NixOS are indeed stable.

Cool. So you've already found out about this — usually we do it on the first slide of the first day of NixCon — but there are the Nix pogs. We still have a bunch left, so as you go through the day, stop by some of the booths, collect them, and at some point you may be able to buy something with them. This is actually a picture from the first NixCon post-COVID, which Ryan helped organize as well. It was even more hacky than this one, if you can believe it. Crazy.

Bottom line: again, we want to thank everyone for coming out here. I'm sorry we didn't have time for questions, but there are a bunch of other great speakers we're eager to hear, and we don't want to take up all the time on stage. We'll be walking around, we'll be outside in the unconference zone — find us and hassle us about the things you care about. Really, lots of love for joining this community and being part of it, and if there's anything we can do, let us know. So thank you, thank you everyone for speaking, and have a great rest of the day.
Thank you.

All right, folks, we're going to start the next talk at 2:15, and the rest of the schedule will just be shifted back by 15 minutes. So the unconference will start at 3:15 rather than 3:00, and for whatever else is on the schedule, just add 15 minutes. You'll figure it out.

Hello, hello. Okay — hello everyone, welcome, and thank you for coming. A quick disclaimer before we start: this talk is mostly aimed at beginners. If you're a NixOS expert, you might already know a good amount of this, but we still hope you'll take away something new and be motivated to set up your own home server by the end of it.

With that said, let's get started. Imagine you just bought a brand-new mini PC and you want to set up a personal home server. First, let's install some software on it while it's still pristine and new. Say we want to set up a media server to share a personal media collection with friends and family. We can do that with Docker: write a Docker Compose file, put it somewhere on the system, figure out which port the service even uses, open that port in the firewall, and much, much more. Okay, that's fine. Now say we want a photo backup server: okay, write another Compose file, put it somewhere else on the system, and open another port in the firewall. Okay, sure. But the more services you add, the more you repeat this dance, and by the end it gets out of hand. If you're a solo server maintainer, this can get very overwhelming very quickly. We think this is madness.

Today we're proposing a solution to this madness. My name is Anthony Tarbin, and my name is Samir Rashid, and we're presenting "Nix Magic: the case for Nix on the home server." We're both UC San Diego students — let's get into it. What are the benefits of a Nix home server today?
We'll be discussing how Nix lets you fearlessly modify your home server, easily keep track of system state, and make sure that managing your home server doesn't feel like a nine-to-five job.

Quick background: what is a home server? A home server lets you host your own services on hardware that you own. You can host your own photo backup instead of relying on services like iCloud and Google Photos, and you can host other things too: a media library, your own website, your own backups. You can find self-hosted services for pretty much anything you can imagine.

Is this going to be expensive? No. A lot of people think you need a beefy PC or a rack-mounted case, but you really don't: you can use a NUC, a Raspberry Pi (pictured on the left), or just an Intel mini PC.

There are many good reasons to build a home server. You can take back your freedom: you don't need to trust big tech with your data, worry about Google axing some service you use, or be at the mercy of VCs who will put ads in your paid product. Also, it's a lot of fun to play with software in an environment where you can do anything.

So what's next? The term "Nix" covers the language, the package manager, and the Linux distribution. The naming is kind of complicated, but basically it's an ecosystem built around the Nix package manager. We'll be talking about the Nix operating system, NixOS, a Linux distribution that includes the Nix package manager and lets you configure things about your OS. But you don't need NixOS to take advantage of Nix: you can install just the Nix package manager on other OSes like Ubuntu or macOS.

You use the Nix language to write a configuration, and then you build that file to declare everything on your system. Building your configuration means that Nix will install all the packages you specify, generate any config files, and start any services you want. Unlike other package managers, Nix will do the same thing every single time: you should end up with the exact same system no matter where you build it, when you build it, or who builds it. This means you can write your configuration once and build your system anywhere. So, Anthony, can you show us what writing a Nix config looks like in the wild?

Yes — let's look at some real Nix configuration. Let's start by saying that Nix configs are just text files. The Nix language itself is a happy medium between a configuration format and a programming language, which is advantageous because it gives you access to things like functions, data structures, and variables. So let's look at this sample snippet of Nix configuration. You can set up your system so that all your configuration lives in this configuration.nix file, and here we have a few lines. The important thing to note is that we're modifying the system-packages option, which stores all the packages you want Nix to install on your system. Just by looking at this, I can tell that my system will have vim and git installed. Okay, that's fine — let's say I want to add a new package.
Well, it's as simple as adding a single line to this configuration. If I add this line and rebuild my system, I'll have access to something like tmux. Okay, this is cool — what else can I do? Well, you can configure the settings of common services in your Nix config. Here we're setting an option for the SSH daemon; for context, this is just another part of the same configuration file we saw earlier. This means you have full transparency into what's going on in your system: you can see all your services, how they're configured, and other things too. Here I can see that I'm enabling the SSH service and setting an option to block login attempts for the root user. It's nice to have this all in here, because I don't need to go digging for the SSH config somewhere in my system — it's all right here in my centralized Nix config. Okay, Samir, can you tell us how to get started installing some services for our home server?

All right, let's look concretely at what it's like to set up a home server with and without Nix. First, let's try setting up Jellyfin, a popular open-source media management tool. If you want to install Jellyfin on a traditional Linux system, the Jellyfin docs will suggest: oh, you should just curl our install script and pipe it into sudo bash. But, I mean, this is very suspicious.
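Pulling together the configuration.nix fragments Anthony just described — the package list plus the SSH daemon option — a minimal file might look like the following. This is a hedged sketch, not the speakers' exact slide; note that on older NixOS releases the root-login option was spelled `services.openssh.permitRootLogin` rather than going through the `settings` attribute.

```nix
# /etc/nixos/configuration.nix -- a minimal sketch of the file described above
{ config, pkgs, ... }:
{
  # Packages installed system-wide; adding a name here and rebuilding
  # makes that program available everywhere on the system.
  environment.systemPackages = with pkgs; [
    vim
    git
    tmux  # the single extra line from the example
  ];

  # The SSH daemon, configured in the same central file.
  services.openssh = {
    enable = true;
    # Refuse login attempts for the root user, as in the talk.
    settings.PermitRootLogin = "no";
  };
}
```

Rebuilding with `sudo nixos-rebuild switch` then installs tmux and applies the SSH settings in one step.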
This script could be doing anything. We checked, and it's just adding their apt repository and installing their package, but in general the problem with running bash scripts is that it can be difficult to revert their effects. You don't really know what a script is doing; you'd have to figure out what packages it installed and what files it modified, and try to undo all of that manually.

On Nix, if you want to install Jellyfin, it's just one line: write services.jellyfin.enable = true in your config. And uninstalling it — removing everything Jellyfin put on your system — is as easy as removing that line. But what is this line actually doing? When you build your system with this line in your config, Nix takes care of downloading and installing Jellyfin, building it if it needs to be built, using a recipe from Nixpkgs, Nix's package collection. As soon as we add this line and rebuild our system, we should have Jellyfin available in a matter of seconds.

Let's say we want to add another service, like Samba, for sharing files over the network. Normally, when you set up a service that needs to be reachable by other computers over the network, you have to open a firewall port. Without Nix, we'd first have to look up which ports Samba uses, and then figure out what command to run to open them. With Nix, all the state is in one place: Nix handles the complexity of opening the port via the openFirewall option. You just set it to true, and Nix opens the port and does whatever else it has to do. And let's say that one day later we want to uninstall Samba. If we uninstall Samba without Nix, that firewall port stays open, and we've left stray state across our system. But setting enable to false in our Nix config undoes any state change that was made.

Okay, now we have a few services running. Let's try to access them with some nice local domains rather than memorizing IP addresses and port numbers. We can use any reverse proxy; we'll use nginx, so we can route everything through the normal HTTP port. Say we've installed nginx without Nix, on a non-Nix system. The first thing we have to find is nginx's configuration file. Okay, we'll search for it on the internet. But maybe in a few weeks I want to edit this file again — I'll have forgotten where it lives and have to look it up once more. So the first problem we face is that there are too many config locations: every program has its own configuration file, and as you saw in the intro, this bubbles out of control, with programs littering your computer with configurations and dotfiles.

Okay, now that we've found the config file location, we want to change some setting. Well, nginx uses a bespoke configuration language, which they invented for themselves and only they use. So we've hit another issue: when setting up a normal home server, you're faced with tons of different configuration formats. Everything uses its own language or format, with its own set of options and quirks you need to memorize.

Nix solves this for us. To set up nginx, we can just put this snippet in our configuration. You can see it routes all incoming traffic for jellyfin.local to whatever Jellyfin's internal port number is. We don't need to worry about where the config files get generated or put, or learn nginx's configuration language. You might be wondering where we're getting these options from — we didn't have to write any of this. It's all offered by Nix, and you can find all of nginx's options in the NixOS option search: if you search for services.nginx, there are about 160 results listing everything you can configure for nginx.

In summary, Nix gives you less cognitive overload: you define all your open firewall ports in one place, all your services in one place, and there's only one config format for everything. So, Anthony, what else can Nix do for us?

Okay, let's look at another really powerful benefit we get from using Nix on our home server: reusability. Nix configs are modular by default, which makes it really easy to take part of someone else's config, copy-paste it into ours, and have things just work — as simple as copying a snippet of someone's Docker Compose into yours and running it. Setting up new software can be challenging: you have to use an installer, you might have to modify some settings on your system, mess with some networking settings, etc., etc.
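The per-service one-liners and options Samir walked through earlier — Jellyfin, Samba with openFirewall, and the nginx virtual host — could be sketched together in one config like this. A hedged sketch: the jellyfin.local name is illustrative, and the proxy target assumes Jellyfin's default HTTP port of 8096.

```nix
{ config, pkgs, ... }:
{
  # One line installs, enables, and starts the Jellyfin media server.
  services.jellyfin.enable = true;

  # Samba file sharing; openFirewall asks NixOS to open the needed ports.
  services.samba = {
    enable = true;
    openFirewall = true;
  };

  # nginx as a reverse proxy: route http://jellyfin.local/ to Jellyfin's
  # internal port without writing any native nginx configuration.
  services.nginx = {
    enable = true;
    virtualHosts."jellyfin.local" = {
      locations."/".proxyPass = "http://127.0.0.1:8096";
    };
  };
  networking.firewall.allowedTCPPorts = [ 80 ];  # plain HTTP for the proxy
}
```

Removing any of these blocks and rebuilding undoes the corresponding service and its firewall state, as described above.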
Also, if instructions online are out of date, you can spend hours and hours debugging why things are going wrong. Let's look at a notoriously hard self-hosting problem: setting up your own mail server, which some consider the final boss of self-hosting. Mail servers are complex: you need one daemon to send mail, another to host your mail client, and you might want virus-scanning software or a spam filter. Well, thankfully, the Nix community has already built this amazing NixOS mailserver project, which we can just use in our config. The entire setup is the sample Nix config you see on the right. There are no manual steps to run: if we move this into our config, we end up with the exact same mail server as the person who made it. So it's not 20 pages of commands you need to run — it's a dozen lines of configuration added to your own personal config, and, if you can read it at the bottom, the only command you need to run is nixos-rebuild switch. This is so much easier than following a project's potentially out-of-date instructions. The great thing is that no instructions can be out of date, because there are no instructions. Once you experience the power of declarative configuration, you'll never want to go back.

Let's look at another example for our home server: setting up a VPN. In this example we'll use WireGuard. The use case here is: say we want to access our home-server services from anywhere in the world, but we don't want to expose each service to the public internet. Instead, we can just connect to the VPN and access everything that way. So, similar to the previous example, there's a WireGuard example online.
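That example is roughly along these lines — a hedged sketch of a single WireGuard server interface in NixOS; the subnet, listen port, key path, and peer entry are placeholders you would replace with your own values:

```nix
{
  # WireGuard listens on a UDP port, so open it in the firewall.
  networking.firewall.allowedUDPPorts = [ 51820 ];

  networking.wireguard.interfaces.wg0 = {
    # The server's address inside the VPN, and the port it listens on.
    ips = [ "10.100.0.1/24" ];
    listenPort = 51820;
    # Generated once with `wg genkey`; kept out of the config file itself.
    privateKeyFile = "/root/wireguard-keys/private";

    # One peer entry per device that should be able to dial in.
    peers = [
      {
        publicKey = "<client public key>";
        allowedIPs = [ "10.100.0.2/32" ];
      }
    ];
  };
}
```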
That example is linked at the bottom; it's on the NixOS wiki page. Someone else has posted it, and you can really easily just take it and put it in your config; you only have to modify a few fields with the private and public keys. And again, uninstalling is trivial: you just remove these few lines from your configuration and WireGuard disappears. The great thing for something like this is that Nix takes care of undoing any complicated networking setup we might have had to do to get WireGuard working. On the topic of reusability, Nix also makes it really easy to reuse your config across multiple machines. So imagine you want to SSH into your home server. Well, you don't want to feel like you're SSH-ing back into the 70s and be greeted by an antiquated diff utility like the one on the right. Maybe you want what's on the left, which is Delta, a modern diff utility. With Nix you can just bring along your modern terminal utilities and use them wherever you go, even on your home server, which is really nice. So let's look at another huge advantage of using Nix on your home server, which is how it makes upgrading painless and trivial. Let's talk about why you might want to upgrade. It might seem obvious, but you might want new features of your services: say Jellyfin releases a new version with a feature you want. Security is also a huge concern. Every day new security bugs are found, and those same bugs are being patched all the time, so you want to stay up to date so you don't end up the victim of the next Log4j. On a system like Debian or Ubuntu, which your home server might be running, you might run a command like sudo apt-get upgrade and move on with your day. And you might be thinking, okay, this is fine, but what could go wrong?
Well, it turns out a lot. There are many things that could go awry. Maybe the latest version of Jellyfin has a dependency conflict, or maybe there's a bug in a new release and you want to move back to a previous state. Does this really happen? Are there really issues with dependency conflicts? Yes, trust me, I know. You can also see that many users online have faced the same issue, especially the one at the top, which has been viewed 1.5 million times over the 11 years since it was posted. There's also the notorious example of Linus Tech Tips breaking his system with apt when he tried to install, I think it was Steam, on Pop!_OS. So with something like apt it's basically impossible to roll back, whereas with Nix it's very easy, with the nixos-rebuild switch --rollback command. So yeah, you can always move back to a previous working state. Some would argue you could do the same thing with apt; there are commands like apt-get update --fix-missing, but those don't really provide a complete solution. They're more of a band-aid, a best effort to try to fix things. Whereas with Nix, you know for sure that if you had a working state before, you'll be able to move back to that working state. This is also really advantageous in a home server scenario, because you want your home server to be up all the time, and if something goes wrong during an upgrade you can really easily move back to keep your services up and running. Let's talk about how rollbacks actually work, or what they are. Every time you build your system with that nixos-rebuild command, it creates a new version, or what we call a generation. Here on this boot screen you can see the past generations I've built with Nix, and when I boot I can switch to whichever one I want. It's like having Git for your whole system: you can move back to working versions seamlessly. Technically, some might argue you could achieve the same thing with
things like file system snapshots, if you've heard of file systems like Btrfs. The problem with those solutions is that they manage every file on your computer, not just system state like configuration files and application binaries. Nix won't touch any of your user data, which is really handy. So how does this work under the hood? Well, it's just symlinks all the way down. When you upgrade, Nix won't overwrite existing packages. Let's look at this example. Say my current system is the thing at the top, the system-1 link, and I currently have the stable version of Jellyfin. Now say I want to upgrade to the unstable one. But oh wait, there's a bug, or I don't like how the UI looks now. With Nix I can really easily revert to the old version, and it's just a matter of changing the symlink back to the previous version of Jellyfin. So this is a really powerful model, and it also helps in the case of version pinning. Say you want the latest version of some package, but you don't want your entire system to have to move to unstable.
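The symlink mechanism just described can be imitated with plain shell commands — a toy demo of how a generation switch works (all paths here are invented for the demo, not real Nix store paths):

```shell
# Two immutable "generations", each with its own jellyfin "build"
mkdir -p store/jellyfin-stable store/jellyfin-unstable
echo "stable"   > store/jellyfin-stable/version
echo "unstable" > store/jellyfin-unstable/version

# Activate the stable generation: the system just points at it
ln -sfn store/jellyfin-stable current
cat current/version                     # -> stable

# "Upgrade": nothing is overwritten, only the pointer moves
ln -sfn store/jellyfin-unstable current
cat current/version                     # -> unstable

# "Rollback": flip the pointer back; the old files were never touched
ln -sfn store/jellyfin-stable current
cat current/version                     # -> stable
```

Because the switch is a single symlink replacement, it is effectively atomic: at no point is the system half-upgraded.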
Well, with Nix you can keep the rest of the system stable and pin just that one package. You don't have to go all-in on a bleeding-edge distro like Arch, where it's either all or nothing; you don't have to bleed to access the bleeding edge. And if you want the most stable packages, you don't have to move to something like Debian, where it's also all or nothing and everything is super ancient but stable. Nix gets you the best of both worlds. And this is impossible to do with other package managers. If you look on the right, you'll see this graph: we have a package that depends on two dependencies, two libraries, which have their own dependencies. Normally it's impossible to have multiple versions of the same dependency that two packages rely on; with Nix you can, and you can have as many different combinations of dependencies and packages as you want. You can also see the little snippet at the bottom, which is a little indicator of the kinds of things you can do: here we only want Jellyfin to use the unstable version. And this is great because it doesn't mess up anything else on our system. The rest of our system is still stable, and we can experiment freely without having to worry. So we've been talking about how great Nix is; let's compare it to a popular tool for managing home servers, which is Docker. Docker is great in the sense that most home server applications are geared towards being delivered and run in Docker. The vast majority of projects already have a Docker image available.
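The pinning snippet mentioned a moment ago is roughly this shape — a sketch, assuming the unstable channel was added as `nixos-unstable` and that the slide used `services.jellyfin.package`:

```nix
{ config, pkgs, ... }:
let
  # A second copy of Nixpkgs tracking the unstable channel,
  # e.g. added with: nix-channel --add https://nixos.org/channels/nixos-unstable nixos-unstable
  unstable = import <nixos-unstable> { config = config.nixpkgs.config; };
in
{
  services.jellyfin = {
    enable = true;
    # Only Jellyfin comes from unstable; everything else stays on stable.
    package = unstable.jellyfin;
  };
}
```

Because each package carries its own dependency closure in the store, the unstable Jellyfin and the stable rest of the system coexist without conflicts.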
With Docker, you don't have to do anything else: you just pull the image and run it. With Nix, the Nixpkgs repo is great and there are a lot of packages, but you might run into things that aren't there and have to manually package some things yourself. We'll get to some ways around this later. Also, Docker ends up encouraging you to leave docker-compose files all over your system. You might set up one project, then you want to set up another service, and that's going to live in some other folder. With Nix you get this benefit for free: all your configuration is in one place. Also, with Docker, versioning is opt-in; you're relying on whoever maintains your image to tag it correctly with the right version numbers and everything. With Nix you get this for free as well, and it's super easy to roll back, as we just mentioned. Also, Docker isn't really tied into the rest of your system, whereas Nix is: you can control almost anything on your system with Nix. And on the topic of Docker, you don't have to choose; you can have the best of both worlds. There are Nix options that let you start up Docker containers on boot, and you can pin Docker images to a specific hash. I ran into a use case where I was trying to get this photo backup service called Immich set up. Unfortunately, Immich didn't have options in the NixOS options, but it has a working docker-compose. So I took my docker-compose and used this really cool utility called compose2nix — you can find the GitHub over there — and I ran it on the docker-compose, and it generated a fully working Nix file that spawns up that same Docker container. This was great because I could just integrate this Nix file into the rest of my system and not have to worry about random docker-composes all over the place.
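What compose2nix emits is configuration for the `virtualisation.oci-containers` NixOS module. A hand-written sketch of the same idea, with the container pinned by digest rather than a mutable tag (the name, port, volume path, and digest are all placeholders):

```nix
{
  virtualisation.oci-containers = {
    backend = "docker";
    containers.immich = {
      # Pinning by digest keeps the deployment reproducible even if the tag moves.
      image = "ghcr.io/immich-app/immich-server:release@sha256:<digest>";
      ports = [ "2283:2283" ];
      volumes = [ "/srv/immich/upload:/usr/src/app/upload" ];
      autoStart = true;  # systemd starts the container on boot
    };
  };
}
```

So the container still runs under Docker, but its definition lives in your one configuration alongside everything else, with the same rollback story.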
So I highly recommend you check this one out. Okay, so, Samir, can you tell us how Nix stacks up against Ansible? Sure. A lot of sysadmins commonly use Ansible to try to achieve the same things that Nix can guarantee, but there are a lot of downsides with Ansible. The first is that if Ansible does something wrong, or something breaks while you're using it, it may not always be possible to go back. For example, you can't really undo an OS update with Ansible, whereas Nix has rollbacks built in, and they come completely free. Also, Ansible will do its best to take you between states, but sometimes it's impossible to go from one state to another, and you're relying on other people to have built the mechanisms to do that. Ansible is also notoriously slow: every time you run it, it has to check a bunch of things about your system. Nix is lazy, kind of like me; it only does the bare minimum to rebuild your system and take you to whatever system you want to build. If running Ansible stops in the middle for any reason, it can break your system, since it's modifying things directly on your system. Nix builds are atomic: Nix writes to the Nix store and does whatever work it needs to do, but when you switch systems, the symlink switch is atomic. It happens instantly and it can't leave you broken halfway. Nix is not going to update programs while you're using them.
It can't break whatever is running. And it's easy to drift away from Ansible's state, because Ansible allows you to just run arbitrary commands on your computer as usual, whereas Nix limits what you're able to do and controls things using symlinks into the Nix store. So Ansible is trying to bolt determinism and declarative environments onto an OS that really isn't that reproducible, while Nix locks things down and makes sure things behave as you expect. We've been talking about the good parts of Nix, but there are some rough edges. For example, Nixpkgs is made by everyone; anyone can contribute, so there may not be options for everything you expect. For example, the openFirewall option isn't on every single package, which can be kind of annoying. Also, the Nix user base is not that big compared to the number of people who use containers, so there's less documentation, which can be hard if you hit an edge case, and this is definitely not helped by Nix's confusing error messages. Also, when you do find documentation, Nix's community is currently split on whether to adopt the standard called flakes, so finding documentation across the different standards can be confusing. And Nix's line of thinking works a little differently; there's a bit of friction to do everything. But if you do build your system the Nix way, you eliminate a whole bunch of problems. So in summary, is Nix better? Yeah, I would say so. Nix is freeing; you don't want managing any server to feel like your full-time job. And configuration.nix is completely self-documenting: you get an overview of all the system state in one place, and you don't have to worry about breaking your system when you make changes. So let's take a look at what Nix offers.
You get fearless modifications with rollbacks: you don't have to worry about your system breaking, because you can always go back, and rollbacks are pretty much instant. Configuration.nix is just a text file anyone can read; I can read anyone's system config and pretty much figure out what's on it. And with reusable components, you can reuse the work other people have done and not worry about the quirks of setting up every single system package you want. So you should definitely try this at home. If you follow this link, you'll come to our GitHub page, where you can follow some instructions for setting up your own server along the lines of what we did in this presentation. You absolutely should try this, and if you want to get your friends to use Nix, it's a great place for them to start without, I don't know, killing their own laptop by installing NixOS. But yeah, using Nix is a vote for the future of declarative software. And with that, thank you for attending our talk. If you have questions, you can ask them now or later in the hall.

Right, so the question was: how do rollbacks work when a program has some state on your file system, like databases? A Nix rollback cannot handle that case, because a Nix rollback doesn't affect the database files on your system. So if you upgrade a program and it does a schema update to version 2, when you roll back, the old program will probably check the schema version, see it's above what it supports, and just crash or not work. So it's on you to make backups of your own user data. Any other questions? Yeah, the question was: one benefit of Ansible is that you can easily work with remote systems; how does Nix compare in that sense? Me personally — well, there were a lot of great talks at NixCon. Was it today or yesterday? I think it was today.
There was a talk on how to build for a remote system and deploy there. So there are ways of doing it. I'm not too familiar; my own method of remote deployment is SSHing into my home server and running nixos-rebuild. But I'm sure the community has come up with great ways of deploying to remote servers. Yeah, so there's pretty good support. Yeah, over there. So the question was: how does the Nix style of snapshots stack up against block-level snapshots like ZFS or Btrfs? I'm not too familiar with ZFS, but I've worked with Btrfs in the past. As I alluded to earlier, the main difference is that Nix doesn't manage or snapshot any user data. It only really touches your application and service binaries and the config files it manages; if you have any user data, it won't touch it. Whereas with something like Btrfs, all your snapshots include all your user data by default. There are ways of excluding that, but you have to set that up manually. By default, Nix won't touch any of that user data. So yeah, thank you so much.

All right, thank you. We're going to take just a minute break while the next speakers get set up, and then we'll be on to the unconference. The unconference: we will meet in here, and for those of you who have no idea what that means, basically here's how it's going to work. If you want to talk about some topic with anybody else here, you'll come to the front and we'll make a list of topics. Then we'll have some voting mechanism to see which topics are most interesting to the most people, and then we'll break out into groups, either in different corners of the room, or out there, or wherever there's room, and then you just talk about whatever the hell you want to talk about.

Okay, we both have one. Test, test, test, test. Okay. Hello everyone. And I'm testing this one too. Okay, good enough. Good enough. All right.
I want to thank everyone for coming to our talk. The talk is called "Highs and Lows of Adopting Nix at Looker," and the rest of the talk will make clear what this is about. Cool. So our goal really is not to wow you at all with the setup we're about to talk about. It's not exemplary in any way; there's nothing fancy about it. Really, it's to highlight what I would expect to be a very run-of-the-mill use of Nix, and maybe to encourage you to adopt Nix at your company. That's the goal. You're not going to see anything mind-blowing here, nothing shocking; in fact, probably a lot of the same stuff you might have just heard in previous talks. But we're doing it in an enterprise setting, and we found that doing the most boring, flavorless thing proved to be pretty useful. So that should be words of encouragement for you, because some of Nix can be pretty daunting, and even without all that fancy stuff it was a boon. Yeah, an interesting product. So yeah, we work at Google; we'll mention that in a sec. They have a huge, sizable team that works to build amazing tools to improve developer productivity. We came in through an acquisition, which we'll also describe, and we didn't have access to any of that, so we had to kind of fend for ourselves and build a CI system and a developer experience that we wanted to use, one that was actually friendly. That's what this talk is geared towards. Obligatory disclaimer: our opinions are our own and not those of our current overlords, Google. Finally, this talk is going to have highlights and lowlights. Our hope is to be fair.
It's overall a positive talk, but we're going to share some of the sharp points we faced adopting Nix. Yeah, let me just add one thing to that. Farid already mentioned that the team we're working on, Looker, is a standalone product that is hosted in Google Cloud, but it doesn't build with the same technology stack as the rest of Google, and we don't have a dedicated build tools team. The same engineers who work on the JavaScript front end and the JVM back end are the same people who might work on the build. So we needed some sort of situation where we could control our own destiny and solve problems that couldn't get solved in the central places, because of the exceptions we had to be using. Anyway, this is good, because a lot of you are probably also at companies where you don't control the entire build chain from beginning to end, especially if you're split up into different sub-teams, or an ops team and a build tools team. Anyway, that'll be a central theme. Cool. So why do we feel privileged to give you this talk? First of all, my name is Farid Zakaria. I'm an engineering manager at Google working on Looker. I've been using Nix for, I don't know, six or seven years, honestly on and off, because it works really well: I'll set something up and not touch it for a year. So I've been using it for a while, but not actively all the time. Yeah, and my name is Micah Katelyn. I'm a software engineer, and I'd been working at Looker for a really long time before it was acquired by Google, so I'm partly responsible for some of the terrible hacks that we're now committed to. Okay, so I mentioned we work at Looker. It's a SaaS business.
It's a SaaS business intelligence product that works very, very well with BigQuery, along with some other big databases, but primarily BigQuery, which is not a shock. Honestly, Looker itself is not really important to this talk. We're really going to talk about the technologies we've used and how we used Nix to improve our developer productivity and CI. But just some background: it was a startup founded in 2010. It was originally written in Ruby, and it has grown to become a very large monolithic code base, mostly JRuby, with quite a sizable portion in TypeScript and a growing number of files in Kotlin as well. Originally the developers were mostly working on macOS, and the CI system was Jenkins on Linux. Since the acquisition, we've all had to migrate onto a Linux distro managed by Google; it's Debian-based, but not really. So that's just on the slide: you'll see the mishmash of different technologies we had to support. This is important because we needed a tool that didn't just work really well for Ruby, or just for Kotlin; we had to solve everything at once. And at the time we adopted Nix, we had some influx of developers working on macOS as well. Cool. So this is a slide I actually developed when I tried to sell Nix internally at Looker, so it's a very, very old slide. The text isn't really important; you can't even read it. Oh god, on the right there. It's just from the README of our internal repo, so it's probably even better that you can't see it. That's page one of 20. Yeah, page one of 20. It had actual code snippets you had to copy and run, so it was a very complex README with a bunch of steps.
It was choose-your-own-adventure style. It wasn't just "do these steps one through 20"; it had branches, because, you know, startups, people bringing their own opinions to things. Some people wanted to use RVM or rbenv, which are different Ruby version managers. I don't know why we wanted to support both, but we did; same thing for Node. So it was pretty complex. You needed sudo at a lot of points in the script, which made it a bit crazy. And really, we didn't have any metrics for this, because why would we, but anecdotally it took an average developer about a full day to get through it, with a lot of hand-holding. You know startups: we had a Slack, and there was a lot of engagement. People would run into whatever error of the day, and someone would help them, and people were really engaged. To be honest, that sounds like a bad system, but it worked for like 10 years, because it was a small company and everyone was really into it.
So yeah, it kind of worked well enough. I don't know, the engagement was almost a feature, not a bug, of this shitty development process. So yeah, I just wanted to highlight that this kind of worked: it was really bad, but it worked, and it had enough collective fixes on all the scripts to account for all the weird edge cases people ran into. Right on. Okay, so Farid did a lot of work and groundwork to research Nix. He kind of liked it for its intellectual purity; he liked it for all of its nice properties. And he socialized within the team that we could use this thing, that it's mature enough for what we want to use it for. At some point we got the go-ahead to staff a small project for a bounded amount of time to accomplish something, and I was a member of that effort. We had a couple of months where we were going to do everything we could to bring our build system forward and meet some of these goals. So we had a goal to make sure that our CI system and the developer workstations were consistent; they were very much not before we started. We made a goal to speed up the developer workflow, starting from when you have a new developer and getting them through that README faster. But we stopped short of setting the goal of making the product itself be built by a Nix derivation. So we really invested in setting up a nix-shell environment that could replace the developer workflow, in which we could build our product, and we left open the door to continue this work in the future if we want to go ahead and make a derivation that builds our final artifacts. We went two-thirds of the way through this set of goals and declared success, because we had actually delivered a lot of value there. So the next slide will show what we did on each of these goals. I'll verbalize it, because you won't have in mind what we did before we go through all of these highlights and
lowlights. So, step 1a was that we set up a shell.nix that pulls in all the dependencies a developer needs to run the build, as well as direnv, which manages setting the PATH whenever you switch in and out of that directory. It pulls in all of your Nix store paths and then throws them away when you leave, so it's really fast in and out. The second bullet there is that we put this block of code in our CI jobs, so that they also pull in the same nix-shell environment before running any of our legacy scripts. This is a big deal, because we didn't want our users to even necessarily know that Nix was happening. There are, you know, like a hundred developers; we didn't want to teach everybody the workflow of Nix at once, so we tried to make this as invisible as possible for those users. All right, then the next big category: speeding up the developer workflow. That was a huge success, because it used to be that everybody's developer environment started by downloading a JVM, some version of JRuby, some version of Node, some version of npm, installing all the dependencies driven by npm, all the dependencies set by your Gemfile, all the dependencies for Gradle. There was just this long list of work that happened on entry into a developer environment. Now all of that gets done by a CI job that populates a binary cache. So when an engineer boots up a new cloudtop and changes into the directory, it just pulls down a bunch of binaries from a local bucket, and they have a shell within seconds, or maybe a minute the first time. So it really did speed that up: one day down to a couple of minutes. Pretty good. Yeah, as a percentage speedup it was kind of a nonsense number, but a thing that used to take all day is now something you can do routinely. Yep. Okay, so things become a lot more interesting when you consider CI
jobs. You know, I mentioned we work at Google; there are always some crazy bazillion-FLOP numbers attached to that company's name. I wanted to give a sense of scale: we were a startup that got acquired, so what was the scale of CI jobs we wanted the system to support? So this is a graph. I don't know if you can see that number; I'll just tell you anyway. It's about 1,500 concurrent pre-submit jobs, which are the CI jobs you want to run on code review. We have a bunch of other jobs as well that aren't included in this graphic. So that's the median of concurrent CI jobs, and it's a pretty noisy median; it's across the whole month, or whatever that time range is. Obviously it dips on the weekends, because we have lives and we don't like to work on weekends. But that's the scale we were working with, and to be honest, at that scale we did hit some roadblocks. I don't know if any of you were here yesterday trying to do the Nix install fest; there were quite a lot of issues just downloading from GitHub and hitting rate limits. We hit a lot of similar issues at this scale: whatever you hit with a hundred people, at a thousand you hit a little bit more. Yeah, so we leaned heavily on binary substituters. Nix already had support for using an S3 bucket as a cache, and Google Cloud Storage has a mode where it emulates S3. So we set up a bucket that holds all of our binaries, and that thing can scale to any amount of traffic. It doesn't matter if a thousand CI jobs all boot up at once and all pull down the Nixpkgs archive, and they all pull down a JVM that's hundreds of megs.
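A sketch of that substituter setup as NixOS configuration — the bucket name and signing key are placeholders, and the S3-emulation endpoint is the documented GCS interoperability endpoint:

```nix
{
  nix.settings = {
    # Serve binaries from a GCS bucket through its S3-compatible endpoint.
    substituters = [
      "s3://our-nix-cache?endpoint=storage.googleapis.com&scheme=https"
    ];
    trusted-public-keys = [ "our-cache-1:<base64 public key>" ];
    # Note: cache.nixos.org is deliberately absent here, since the talk
    # mentions the public cache is disabled for security-posture reasons.
  };
}
```

A CI job on the other side signs and copies its build outputs into the same bucket, so workstations and CI all pull identical binaries.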
And that's fine: GCS, like S3, can really scale to essentially unlimited reads. One last point: our security posture means we cannot use the public Nix cache, so we have that disabled. Which is maybe also a good thing, considering — I don't know if you were at the State of the Union talk — the cost of the Nix S3 cache. Yeah, the cost of supporting the S3 cache. This slide we can probably do pretty quickly. It's just a timeline, to show what adopting Nix took at our smallish-to-medium-sized company. It was a two-plus-year effort. It starts in July 2020; that's kind of the research phase, looking at whether we could even adopt this. Then a lot of hand-holding, a lot of white-glove work, mocking things out tool by tool, slowly bringing people along. Honestly, the implementation part was pretty quick; the big problem was socializing it: getting buy-in, getting people to know what the heck is changing, whether we should adopt this thing that's relatively niche. And a final notable mention: a big pain point was that I was trying to support macOS and Linux at the same time, and that was really hard. I'll talk about why in some of the lowlights; some packages just became really difficult to support on macOS, notably Chromium, which we had to pull into our graph. Okay, now we're going to get into some of the highlights and lowlights. Yeah, the first highlight is that we did some pretty boring Nix. In fact, Mike and I were talking at the hotel, after having listened to some prior speakers, and thinking we could have done a lot more fancy stuff. We didn't use any modules.
It's not very reusable. What am I showing here? A shell. I mean, we went a bit fancy; I don't know if you can read it, but we use mkShellNoCC, so it's a little smaller. It's a shell with some buildInputs and a couple of custom derivations, but, you know, that's really it. In terms of fanciness, it's pretty low. Yeah, and the thing about it, which you probably can't read, is that the return value of shell.nix needs to be an attribute set. There are a few attributes that name binaries that get pulled in and set in the PATH, and then anything else you set as a value there gets converted to an environment variable. So this let us control the environment variables and the set of binaries in the PATH for the developer workstations. It meant that every engineer, whether they installed their environment today or six months ago, or on a Mac when we were trying to support that, would have the same version of the JVM, the same version of Node, et cetera. Just being able to control the environment variables and the binaries on the PATH from the Nix store was enough. We don't need containers; we don't need virtualization. Yeah, a simple, primitive highlight. Okay, in 2020 we chose not to use the experimental flakes feature, and we didn't find that an impediment in any way. That's about all I'll say on that topic; there's a lot of back and forth on it, and we just stayed out of it. And it was a highlight. So that was a quick one. Okay, so here's another good-and-bad one; we're calling it a lowlight. It was so easy for people to add the name of a package to this one source file, a text file, and then every engineer would have that binary. It made it really easy for the democracy of our chaotic team to pile in everything. So pretty soon people were like, wow, I would love to have Chromium in my build.
Oh, I'd love to have an IDE in my build. Whatever. So this thing grew, and the closure over all the things that get installed got to be pretty big. So it was too easy to add things: a bit of a Pandora's box that we opened. Okay, this one: I would give tech talks and explain what Nix does and how it gives you near-perfect repeatability, or reproducibility, whichever one you want to say. Some of the engineers' takeaway was pretty good, and they'd come back to me and be like: great, now I want Node at some very specific version, and I want another one at this other version, and I want it with this patch, and I want it to link against this other version. And they're explaining stuff to me, and they're not wrong. That's the problem. I spent a lot of time explaining to them that Nix can do anything, so they come back and they want the world, and I'm kind of like, fuck, you can have it, technically, but it's just me and Micah and I don't want to do this. The goal was really to have one Nixpkgs, for our sanity, that we pulled from, and if we wanted to do all this other stuff, it meant really blowing up our transitive closure. So I'm calling that a lowlight because, I don't know, I'm still grappling with how to tell people to adopt this thing without them coming back and saying they want a bazillion things at very specific versions. I guess the good part is you could give it to them, but I found it an impediment. Sure. Yeah, okay.
This is good and bad. We repurposed a chart you've probably all seen before, but the issue is: if you declare a dependency on a package that is defined in one particular nixpkgs, and then later add another package where another version is introduced, that thing may depend on a different libc and different other things. So if you go ahead and try to pin an old version of Ruby, a slightly old version of Node, and a pretty modern version of Chromium, you end up with a really big binary Nix store. It got really big.

Another lowlight: again, I mentioned we tried to keep it to one nixpkgs, but we had to deviate a lot of times. What was also frustrating is that many of the recipes aren't really reusable, so unfortunately we needed to tweak a few things. We tried to contribute changes upstream so that those recipes were more reusable and we could adopt them, but, as I'll mention later, moving ahead on nixpkgs was kind of a challenge, so we often had to just copy out a derivation. So there was a bit of repetitiveness between our overlay and nixpkgs, which was also a bit of a lowlight. Nixpkgs was huge and sometimes unwieldy.

Okay, another issue: we don't control the whole shipping environment of our product. Our team was asserting control over the build tools and the developer experience, but another team manages the Kubernetes deployments and the Docker container images and the way they get tagged, so we didn't have end-to-end control. We basically used Nix to set up the build environment; we build our binary artifacts within a Nix shell that's under our control, but then another team has their own processes and their own culture around using Docker container images and GCR and whatnot. It's funny: we wanted to replace Docker in our development environment, but the operations team had absolutely no interest in
that at that time, and no appetite for it. So you could either use Docker to build Nix, or use Nix to build Docker, and we ended up with two teams each using it in the opposite direction. That's a cultural problem, not really a technical one. That was an interesting learning lesson for me: shipping software is not always technical. We've seen dockerTools in a couple of previous slides here, and you can build layered images with it, but they wanted to build their own Docker image, "they" being this other group. How do you get past that organizational problem? I don't know; we're still pushing on it.

Okay, this one's a quick highlight, and I couldn't get a benchmark for the second half. The top one is a benchmark of what our nix-shell took: six or seven seconds. That sounds good, but it's a really long time, and normal developers are accustomed to changing into a directory and it being effectively instant. At seven seconds they felt like something was going wrong, and maybe worse, for whatever reason it might start building the derivation locally, because it's not in the cache or they're the first ones to ever hit it. So direnv was effectively a necessity; I can't see anyone adopting nix-shell at their enterprise and getting people to use it without that as a complement. So yeah, I don't have the benchmark for it, but is it as fast as a race car, near instant?
Yeah, let me briefly explain what that does. When you run nix-shell, it parses shell.nix, builds the closure, checks for things in the store; all of that work took six seconds, and it does it every time. So every time you start a new terminal and enter this shell, it takes six seconds, and that's no good for a workflow. direnv is another tool that sets up a shell hook so that when you enter a directory by typing cd, a side effect of evaluating the prompt is that direnv runs a shell script. That script takes a snapshot of the environment variables from the last time you were in that directory and restores them. So if you have ever entered the Nix shell, the first time takes six seconds, but any other time, when you exit the directory and come back in, it's just changing some environment variables, and that happens near instantly. You lose a little bit of fidelity of your closure, because you just have the environment variables, but it's totally enough. So that made the workflow great.

Okay, cool, here's another thing. Nix actually stores some state in your home directory and in a database in the store, and not everybody knows which environment variables might affect the final calculation of the hashes that become a cache key. So we wrote a script that installs Nix on your machine, and another script that blows everything away, including all the subdirectories where we know Nix keeps state. That way, whenever an engineer says, "there's a problem with my build, it's not hitting the binary cache," we'd say: no problem, just run this script, and it would solve the problem. It's a big hammer, but it's so easy to go from zero to a particular Nix state that it's okay to just throw the state away and make it again.
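The "big hammer" reset script described above might look roughly like the following sketch. The exact paths depend on how Nix was installed (single-user versus multi-user), and none of this is taken from the talk itself; treat every path here as an assumption to adapt.

```shell
#!/usr/bin/env bash
# Hypothetical reset script: wipe every place Nix is known to keep state,
# then reinstall. DESTRUCTIVE -- adjust paths for your installation type.
set -euo pipefail

# Per-user state: profiles, channels, eval caches, user config.
rm -rf ~/.nix-profile ~/.nix-defexpr ~/.nix-channels ~/.cache/nix ~/.config/nix

# The store itself and its database under /nix/var/nix
# (multi-user installs need root for this).
sudo rm -rf /nix

# Reinstall from scratch; a binary cache makes repopulating the store cheap.
sh <(curl -L https://nixos.org/nix/install)
```

Because everything important is declared in the repo and cached in the binary cache, throwing all of this away and rebuilding is cheap, which is what makes the hammer safe to swing.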
So I'm calling it a highlight.

Okay, this seems like a simple one; again, nothing shocking in any of these talks, or our talk. But having the hash in the name, that squiggly cryptographic part of the store path, is incredibly powerful. It made triaging issues with other engineers very quick. I could just ask them to tell me their path, or run `which` on whatever, in this case it'd be Ruby or JRuby, and see whether what they see differs from what I see. If it's different, I can take one branch; if it's the same, I've already skipped a dozen steps in my mental model of whether or not they have the same software as I do. That visual cue was huge. It also gave us a lot of confidence that the CI system was running the exact same software and was in fact hermetic: we would emit the paths where we were running Ruby or Node and know that yes, we're running the final closure, everything's the same, and it didn't pick up some random nix.conf setting, or something we maybe forgot to set. There are a bunch of idiosyncratic ways you can leak state.

Okay, let me take this one. Because Looker is enterprise software, we ship a binary to some customers that they keep in the field for years, and other customers that we give updates every month or every week. The fact that our toolchain configuration is checked into the same source repo as our source means that if you need to make a small change to a year-old release branch, you just change that branch and make a patch, and all the binaries that go with that build come with it. You don't have to remember that, oh yeah, that old version from a year ago only runs on an old JVM with a patch.
Oh, yeah that old version from a year ago only runs on an old JVM with a patch Right because when you switch your dev tree Shell.nix has that pinned nix packages and that's the JVM you get so it made it like actually possible for us to Keep building old branches long after like even maybe if the source code that we needed from an upstream thing was gone Not hosted anymore. We can still build because our mixed binary cache is durable So this is a huge lifesaver for supporting old release branches Yeah, this slide's made around a couple of times or at least twice. I think in Nixcon Today, but few people Understood Nix. Okay, I gave like I don't know a dozen plus talks at the company And at broader Google and I don't know what the acceptance rate was of like what I was saying to what they were receiving be like under 10% for sure and You know, I think this graphic which was shown a couple times just a bunch of stuff about Nix doesn't let itself to Become easily understood. I'd say the word Nix and right now. It's like, what am I really talking about? You know, I guess I'm talking about really all of it And then if I wanted to talk about just the language I'd say next so yeah, it continues to be confusing and You know since adopting Nix really I we haven't seen much increase in contributions from a few key people So I think that's something to keep in mind and I guess that was a low light Some things we called out was like lack of good ID tooling I know the language gets a lot of flak and it's not that bad after I've used it for quite a while But having someone jump in and it's difficult to navigate and understand the type system and Find where things are. 
Yeah Okay, this was something I hit because I said we kind of hit like some edge cases here So it's a bit of a snide comment about a real problem But basically missing source files for particular old Nix package versions such as patch files Could go missing and large packages such as chromium, you know are repeatable So there you could build them, but I couldn't build them on my machine You like sometimes you'd need a massive machine to build it So it's like great. I have this recipe to reproducibly build software but on whose machine, you know like on the the chromium guys machine he could build it I can't build it and I know Nix cache does cache some of the source files itself But I found a few cases where some patch file go missing and you get a 404 and it's like, okay That's like well, how do I rebuild this now? You know, we were kind of in a sticky spot because again, we were using a very old Nix packages, you know, I don't know over a year so Yeah, and we couldn't use the public cache So we were building things from source relatively old and missing some source files I have certain environment variables will also found to have an effect on the computed store hashes so We found like sometimes some developers would also miss the private binary cache that rely on Okay, I mentioned this one a bunch, but basically updating Nix packages like I hated doing it and I tried not to do it at all I would never I thank the release manager so much But like that's an incredibly tough job getting all Nix packages to pass I don't know what it we're at now like I look at that apology graph if any of you have seen it It's like 80,000 packages and Nix packages or something like that And like I said when you have access to that you just want to use more and more and more of it So updating it and then if you deviate a little bit becomes hard. 
So it was hard. I resorted to using a few; I think we have three or four nixpkgs at different commits in there. That turned out to be the easier problem, but it pained me so much every time I did it.

Yeah, let me take this one. The picture shows basically three nesting dolls, because we had these nested environments and we only controlled the middle one. On the outside we have this gLinux distribution that another team controls. We control the Nix shell of our build environment, but then inside our Nix shell we run other build tools like npm that go out and get things that developers have declared dependencies on. So we were able to grab intellectual control of that middle layer and deliver a huge amount of value for it, but the rest of the nesting is still somebody else's domain. It's a highlight because we were able to actually add some certainty to that middle layer even without controlling the outer layer. We also got to do this in our CI jobs; we don't have root on the CI machines, so being able to install user-space binaries in a controlled way was a super, super big highlight. I think it's also a highlight that developers didn't have to change their workflow: if they used Bundler or gem or npm, that stuff pretty much worked, asterisk; there were some finer points, but for the most part they didn't know they were using Nix, and that was a highlight.

Okay, this is maybe the last slide. I'm calling it a lowlight: my experience with nixpkgs and the ecosystem was mixed depending on which path you take. Not all languages and frameworks in nixpkgs are created equal. It is an open source project,
so it's subject to contributions, to what's there, and to the passion of the developers contributing to it. I found C++ and Haskell phenomenal: if you want to build a Makefile project, you're not going to have a bad time, and while I don't really do Haskell, it looked phenomenal there too, because there's a lot of overlap with Nix. We were doing Java; JRuby is built on Java, and the JDK was there, but it had a lot of bugs. Packaging a Java app, which I wanted to look at, wasn't really supported well and had a bunch of weird kookiness to it. I'm calling it a lowlight because I didn't know that going in; it kind of felt like nixpkgs had everything, some panacea. But nothing was ultimately a blocker, and I upstreamed a bunch of changes and fixes, so that part was a highlight. It just means: be mindful about which path you're taking and what language you're doing, see what's out there, and know what you're getting into.

Right, this is the bunch we have left to do. I don't even know if you can read what's on the screen, but like we mentioned, we have a bunch left, and we got a ton of value even with all of this laundry list remaining. What's in there? Probably the big thing is getting Nix and Bazel integrated, like the talk we saw this morning, well, really earlier today.
It was pretty great, because we also use Bazel in our build, but we use Nix to install Bazel and control its environment. I know it's possible to invert that relationship, and that's a different thing, but we would like to continue that and get all the way down to the end of those leaves, and maybe overcome that organizational problem and get the other team to use a Docker image we build. So there's a bunch left, finally getting nixpkgs onto the latest release, things like that, but still a huge, huge benefit. Yeah, that's the point: even though we only got halfway, it's still valuable, the team values it, it works every day, and it doesn't crash.

Cool, that concludes our slides; happy to take whatever time we have left for questions. Otherwise, please find me, or us. I'll take a couple, or we'll take a couple. We've got a couple of questions. Okay, we'll start.

Yes: did the developers get fluent in Nix? I would answer: basically a few. There were maybe two or three people who needed to upgrade a package and then had to get deep into the recipe to make it work, but most engineers just got trained to type this one command and then your shell is good. Two out of a hundred, so take that as your percentage.

Yeah, that's a good question. The thing we did have to articulate, and it's a pretty simple solution because it's all environment and PATH based: you had to start your IDE in your current working directory, inside the Nix shell. So, thank you: how did we get, how did we solve IDEs?
Integrated development environments, or text editors, using all this fancy Nix stuff. Basically, as long as you start the IDE within the shell, in the directory you're in, it picks up the environment as a subprocess, so it finds all the tooling from the PATH that Nix sets up. The one hiccup is if you change to a really old branch where your PATH changes: you'd have to know to shut the IDE down and reopen it. That was a bit of a pain point, but otherwise it worked pretty well.

Yeah, I guess the short answer I can come up with: he asked what happens if the few experts on the team who know anything about Nix are all gone, how would the company continue. The short answer is that all that code is checked in, it's in our repo, and the CI system will not allow you to check in changes that break the tests. So somebody would have to figure it out, but I can guarantee it'll keep working; that's a selling feature. It won't degrade. It may not get better. But to your point, seriously, we haven't seen much more adoption in terms of people using it; but we also haven't had to change it much. Once it hit stability, the big thing has been keeping nixpkgs up to date.

Okay, the guy in the brown pants; I saw you had your hand up right away. Yeah. So the question is: how do we avoid accidentally depending on something in our build environment that's provided by that gLinux layer, and breaking hermeticity? The short answer is that our CI system doesn't run on gLinux. So if you ever introduce a shell script that makes a call out to, say, some awk binary that is absent from our Nix closure, that will fail the CI jobs that try to build from the Nix closure. And just to be clear, we're not in Google's monorepo thing or any of that; it's a separate Git repository.
So that also helps.

Okay, we'll do two more. You with the white shirt, the NixCon shirt. Yeah. Okay, he's asked about the Nix–Bazel integration. The short version of it is: our Nix environment installs Bazel, and it also downloads all of the HTTP archives that we know our Bazel workspace file references and puts those into a directory, so that the first time you launch Bazel it has a JVM, it has all the Starlark rules it needs, stuff like that. But then Bazel just drives our regular Java build and produces all of our JVM artifacts, which then get wrapped up by shell scripts. I'll just add the part where it really shines for me: Bazel and toolchains. If you've used Bazel, their toolchain solution is pretty terrible, in my, that's a personal opinion. So Nix is a great way to bring in Clang, the JVM, whatever; it helps tremendously.

Last question. I'll do a simple example; it should be pretty quick. Restate the question? Oh, restate the question, okay. Yeah: what are some of the issues we ran into with JRuby and Java? You would expect those maybe with building derivations, but not with a Nix shell. The one that really comes to mind: the nixpkgs contributors have done a really good job cutting away system state. The loader is patched so it won't read /usr/lib shared libraries, things like that; there are a lot of patches to make things hermetic. The JVM was missing a few, and they would come back to bite you: if you had to use JNI, Java's native interface, and link to shared libraries, I found it linking to a different libc, and then suddenly you're hit with a segfault. Nothing we couldn't get over, but again, it's those two paths
I showed in the slide: oh god, why am I hitting this error, and you're trying to triage it. So it was mostly shared-object calls and, yeah, PATH leakage.

Cool. If anyone else has questions, please find us at the unconference or outside. Freed, and that's Micah. Thank you so much.

Hello, hello. Cool. Ah, so, time for the unconference. As we get started, I need two volunteers. Raise your hand. You and you, great. There's a whiteboard out there over by room 106; go grab that and wheel it in here. We're going to need that for voting and stuff.

So, like I mentioned before, the way this is going to work is that I'll open it up for a few minutes for you to come write up topics that you think people would be interested in talking about, or, selfishly, that you care about. You'll come up here and write those on the board; I'll let that run for a couple of minutes. When I close that down, I'll let everybody come up and put a little tally next to two topics that they care about, so you get two votes. All right, and then we'll just take the top X number of those topics, break out into different corners of the room, and you can just go ham on whatever you're talking about. All right, so that's how it's going to work. At four I need to go somewhere, so somebody else will have to be the adult in the room while that happens. But yeah, that's generally how it's going to work. This is intentionally very unstructured, so that you can just kind of talk about whatever you want to talk about. That's kind of the whole point.
So I'm just going to keep riffing until a whiteboard comes in that door. And, yeah, here we go. Sounds like the whiteboard is off-roading.

All right, so I'm going to open this up for, let's say, three minutes for you to come write whatever topics you want up here on the board, and then, like I said, I'll close that down. Then we'll let everybody vote twice on the topics they care about, and then we'll just take it away from there.

All right, something I forgot: you may not know exactly what somebody means by what they've written on the board, so once these are all in, I'll let people give like a ten-second explanation of what their topic is about, and then we'll do the voting.

Hello, hello. All right. Let's get our last topic on the board. All right, so now all the topics are on the board. If you'd like to come up and explain ten seconds about what your topic is, if it's not self-evident, the time starts now. Speak now or forever hold your peace.

Greetings, I'm Micah. I'll be taking a new-user questions-and-answers session, if you feel like you didn't get enough or missed the session yesterday. Yeah, if you're new to Nix and you want the basics, come ask questions and maybe you'll get answers.

Anybody else need to explain their topic? It looks like, yeah. Yeah, so: the module system. Testing, testing. Okay. This is always hard. So, the module system: I like it a lot, it's really nice to use, it has a whole lot of applications, and I think there were two presentations on it. I still think there's so much more you can do with it, but it's hard to debug sometimes, and I'd like to talk with others about that.

So, I need help, because I run Linux on my calculator, and since I used to use Arch Linux, and since that went end of life, it is no longer available.
So now I'm looking at Nix as a viable option for building Linux on a calculator.

Hello, okay. Immutable Nix, or, as the systemd people like to call it, appliances, image-based Linux. Contrary to traditional laptop Linux installations, where you'd have an install medium and then bring in all the software yourself, in this case we actually prepare a whole operating-system image in advance and deploy that to some server. But generally, I would like to talk to everyone who uses NixOS in server-based settings and hear about your problems, because I, or we as a company too, are trying to work on a lot of pain points there currently.

Hello, I just switched jobs, as you can tell by the hoodie. I'm trying to use Nix at work, so I put up two topics, actually. One: if anybody's using it in production and has the war stories, the 4 a.m. wake-ups, getting paged, all the issues, especially if you're on Kubernetes, I want to hear all the issues. And then, if anybody wants to talk about using Python well with Nix, I'd love to hear that.

All right, before we open up voting, Ron has a couple of words to say.

All right, so I don't know if we're going to be able to congregate back for a closing note, but we'll leave the survey up, and we'll try to email the survey out as well. What's the Google Form? Yeah, so I don't know if the Google Form is that one. Oh, okay, so that's a Determinate Systems survey. But good call-out: if there are other surveys, the first one is the official survey for the community. It's going to be published publicly and all of that, so please fill it out; it helps us for next year. Other than that, there will be an option for folks that want to get a quick picture together for the conference, only those that want to, up in front. That's something that we do every conference, so we'll do it before we break for the unconference.
I'd love to do it. And a last note: a lot of folks might not know, there's a guy named Armijn out in the Netherlands who was with Eelco back when they were doing their studies, and wrote a paper on NixOS back then. He actually had an accident that he's been recovering from for a long time. So I thought, in the spirit of there being a community out here that cares a lot about Nix, maybe give him a shout-out, a little "feel better." So I was thinking, on three, we can shout out, "Armijn, feel better." That cool with folks? He might be watching, and if not, we can always pull it out of the recording. Oh yeah, record it, we'll send it to him. Eelco is carrying protein bars back for him; that was the joke earlier. All right, on three: Armijn! Armijn! Armijn, feel better! All right, thank you guys, that's awesome. I'm sure he'll appreciate it.

All right, so before the quick vote, if folks want to come up front, we'll do a quick pic together, and then we'll do the vote. That'd be awesome; only those that want to take a picture together.

Oh, hello, hello. All right, while everybody's up here: you get two votes, just put a tally next to a topic. Just put a tally.

Hello, hello. All right, the votes are in, the votes have been tallied. Here are the topics we're going to discuss. Number one: debugging Nix, especially modules. Good luck with that. That's the front-left corner of the room. All right, the next one is new-user Q&A; that's going to be the back-left corner of the room, whichever direction that is. The next one is Nix and Python; that'll be up by the podium. And the next one is immutable NixOS appliances; that'll be back over by that door. All right, we have two whiteboards, first come first served if you want one. Good luck; disperse!

All right, this will make life easier: you can take some of these chairs and rearrange them. You have to unhook them.
Sorry, you have to unhook them, but absolutely put them back where they were, so that we don't make more work for the cleaning crew.

All right folks, it seems like we're all kind of assembled into our groups now, so I'm going to let this just kind of run. You have about an hour before we have to kick people out. I have to go; I will be at SCaLE tomorrow. Thank you all for coming out. Hope you had a good time. Enjoy!