So, just to start with the problem space Nix is trying to solve: when we package things in a Linux distribution, or in any packaging system really, there is this loop of state where a package's outputs, which are files, land on the file system, and then the next package picks up those files along with new packaging metadata and uses them to build another package, and this loop goes on and on and on. At the end it's hard to see what was compiled or built with what, and even worse, it's not reproducible, right? So there are two approaches trying to solve this problem. On one end of the spectrum is containerization: start from scratch and go through the state loop the same way every time, so you always end up at the same thing. Or we use Nix, where we try to be precise and use a purely functional language to compose things together, which should in theory give our pipeline more precision and power. So, the Nix language: I think it's almost 15 years old. It's a DSL, so you will see that some areas, like debugging, are not a good story yet because of that. It's purely functional, dynamically typed, and it has lazy evaluation — I assume most of you here are Haskell developers, so let's assume that during this talk. And this is how you would get started: you install it through this terrible curl-from-the-internet thing that people tell us all the time is a terrible idea, but how do you bootstrap a package manager if not through the internet? You're encouraged to verify the GPG signatures and so on. The current version doesn't come with a REPL, but you can install one, and then you can get started. So let's go into the language a bit — I think it's good to build a baseline with some examples. It does the right thing for a dynamic language: if you try to use operators that don't make sense, it will error out; you can query what type a value has, and you can also
check types and ask, say, is this a string? As I said, it's a DSL for packaging, so it has some special properties. One of them is strings: first of all it has double-quoted strings, which are the plain thing you would expect, and then it has these double-single-quoted, indented strings. What they basically do is strip out the indentation: they take the leftmost-indented part of the string and strip that prefix from every line, while keeping things like trailing newlines. This is nice so that when you write an expression and nest it in your editor, you don't always have to strip the indentation manually, because in configs you don't want that indentation, right? That's the motivation behind it — just the indentation, nothing else; everything else is preserved. Then in Nix we have something called primops, the functions that come with Nix. You can see the whole list if you just type builtins. We have something we call an attribute set up there, and for example there is a function called attrNames that gives you all the keys in it — and again, if you pass it something like a boolean, it will tell you that you're not using the right type.
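As a rough sketch of what this looks like in practice (the values here are my own illustration, not taken from the slides):

```nix
let
  # ''...'' strings strip the common leading indentation,
  # so nested config snippets stay readable in the editor
  config = ''
    [server]
    port = 8080
  '';
in {
  # builtins.typeOf reports the dynamic type of a value
  t = builtins.typeOf config;                    # "string"
  # builtins.attrNames returns the keys of an attribute set
  keys = builtins.attrNames { a = 1; b = 2; };   # [ "a" "b" ]
}
```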
Functions, that's kind of the core of it, right? We have lambda functions — that's the notation at the top — with the same kind of intuition as in Haskell. And here is one of the bad parts of Nix: if you don't put a whitespace there, it will consider this a URL. There are some dark corners — I didn't include all of them — and this one is really bad, because we don't have types, so suddenly something gets a string instead of a function, and debugging becomes a bit of hell. These are the kind of things I would like Nix to clean up over time, but we'll see. Then there's pattern matching: you can take an attribute set as a function input, so this is a function that accepts a and b, and then it uses the plus operator; we pass a and b in and get four out. This is the core pattern of Nix, where you pass packages and attribute sets in and then do something with them. You wouldn't usually use integers — the main type in Nix is the attribute set, all the way down; everything else is secondary. There are also defaults in the attribute-set patterns for function arguments, so you can say that if you don't pass a, it's true, and then if you pass the attribute set in, you get the logical result out. This is also useful for setting defaults when something is not set; we'll see that later on. You can capture all of the inputs with args@, and there is one caveat there: the defaults are not included. So if you have a default like this and you don't pass a explicitly, args will not include it.
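A small sketch of these function forms, with my own toy values, assuming the semantics just described:

```nix
let
  # lambda taking an attribute set; a has a default of true
  f = { a ? true, b }: if a then b else b + 1;

  # args@ captures the passed attribute set — but, as noted,
  # defaults that were not passed explicitly do not appear in args
  g = args@{ a ? true, b }: builtins.attrNames args;
in {
  r1 = f { b = 3; };   # a defaults to true, so this is 3
  r2 = g { b = 3; };   # [ "b" ] — "a" is missing from args
}
```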
That's another tricky thing that haunts you. And then there's some syntactic sugar around it. We have let ... in — okay, that's pretty obvious. If we have an attribute set like upstream, we can create another attribute set by using the dot, something familiar to those of you using PureScript or Elm, and the shorter version is to use inherit, which basically means: take the upstream attribute set and inherit a and b from it. Any questions so far? All right, pretty basic. Then there is rec, which means an attribute set can reference other values within itself, instead of doing the let ... in at the top. Yeah, it's pretty basic. And there is with, which brings an attribute set's names into scope: if you have an attribute set and you say with that set, you don't have to prefix everything with it. We usually use this for a big package set, so you don't have to say packages.blah everywhere. One thing that's really tricky with with statements: if you have with x and then with y, and both define the same name, the first one prevails over the second — it doesn't get overridden, which is really unintuitive. And you have lists. They're without commas, so anything you do inside them you have to put in parentheses: if you have a function and you want to call it inside the list as an expression, you have to parenthesize it, otherwise you get surprising results. As I've said, it's a lazy language. We have this import expression — Nix doesn't have modules at all; you basically import whatever a file returns, which is often a function. So if we have files a.nix and b.nix, they can cross-import each other.
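The sugar discussed above might be sketched like this (my own toy example):

```nix
let
  upstream = { a = 1; b = 2; };

  # inherit copies a and b out of upstream
  derived = { inherit (upstream) a b; c = 3; };

  # rec lets attributes reference siblings without a let ... in
  r = rec { x = 1; y = x + 1; };
in
  # with brings upstream's names into scope; lists have no
  # commas, so a function call inside a list needs parentheses
  with upstream; [ a b (r.y + 2) derived.c ]
```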
And if you use the nix-instantiate command, you can evaluate it, and the type of the result is the type of whatever the file outputs. Then you can use the attributes after you get the return value. This is kind of like the core of how packages work in Nix. Usually what you do is have a top-level function in the file that pattern-matches an attribute set — you get the inputs in, and then you return an attribute set of the packages and things you want to expose. It wouldn't fully evaluate — in this case b, and in this case a, right? It would just print that it's a function, or a thunk, or whatever. And we have paths. Another quirk: if you want to refer to the current path, you have to write ./. — which, again, if somebody doesn't tell you, you write just a dot, it gets parsed as something else, and weird errors happen. Nix was never really meant to be this successful, and you can see that in some places. And we differentiate between strings and paths — I have a blog post in progress that explains how Nix handles the difference; it's actually pretty complex, so I won't go into that now, sorry. Maybe I'll publish that blog post and you can read it and tell me if it makes sense. Then it has three ways to do exceptions. One is — well, just forget tryEval for now — abort, which aborts evaluation, and there is no way to get around that. There is throw, which you can actually catch with tryEval: it tells you either success is true or false, and then you either get the value or you don't.
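The three error mechanisms can be sketched roughly as follows (an illustrative snippet, not from the slides):

```nix
{
  # throw can be caught by tryEval; abort cannot
  caught = builtins.tryEval (throw "boom");
  #   => { success = false; value = false; }

  fine = builtins.tryEval "ok";
  #   => { success = true; value = "ok"; }

  # assert: the following expression is only reached if the
  # condition holds; a failed assert is also catchable
  checked = builtins.tryEval (assert 1 == 1; "passed");
}
```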
And this is what we do in Nixpkgs to evaluate each package, because if one fails, we don't want the whole thing to fail — we just print the error and go on. There's also assert: an assert expression followed by another expression, and you can catch that with tryEval as well. This is another grey area, a dark area, where it's often tricky to be precise about which errors you want to catch and which you don't; there's not enough precision. Then there are builtins like readFile: if you have a file foo with contents bar, you actually get that back. So the question is, where's the purity, right? We don't have types, so where do we declare purity? Nix has a bit different model than Haskell, so let's look at that. So far we have this basic language with attribute sets, lists, and so on, but it's very pure — we cannot interact with the world, right? Pretty boring. Now this is where the gist of it comes in: derivations. Derivations produce build products. Going from a Nix expression to a derivation is the process called evaluation: you take Nix files, evaluate them, and you get derivation files. Then, if you realise derivations, you get build products. So it's a two-step thing, and the reason is that you might want to evaluate Nix, get the derivation files, copy them — distribute them over machines — and then realise them there. That's why it's a two-step thing. So derivations are basically an intermediate representation of Nix expressions, fully evaluated. Let's say we have a very simple C program, which all it does is read an environment variable.
And then it creates a file. We compile it, and this would be our simplest possible derivation up there. It requires three fields: the name, what system it is going to build on, and the builder. The builder is an executable that creates the output path — that's the minimum it has to do. Down here we see the process: evaluation goes from the Nix expression to the derivation file, and you see this long hash. Then nix-store -r realises it: it calls the builder to produce the output path. For those that use Nix: nix-build is basically those two steps combined, as a convenience for when you're evaluating and realising on the same host. So then comes the question of where this hash is coming from. It's a hash of all the inputs to the derivation, right? It's basically content-addressed, and this is where the purity comes in. A derivation is built in a sandbox — separated from the file system, networking disabled, everything — and all the things allowed to influence that builder come from the derivation attributes. That's the purity part. Okay, I'm completely lost — what does instantiate do and what does store do? So nix-instantiate takes a Nix file and evaluates it into a derivation; a derivation is basically this internal representation of what Nix is going to do. It's hard to show here — I'll show it later. And nix-store -r takes this intermediate representation and actually goes and executes the builders — the realisation part. You would want this split if, for example, you have a lot of Nix expressions and you want to evaluate them and then distribute the builds, right?
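The minimal derivation described above might look like this (the builder path is a stand-in for the compiled C program from the talk, not a real file):

```nix
# simple.nix — a minimal derivation; name, system, and builder
# are the three required fields. The builder must create $out.
derivation {
  name    = "simple";
  system  = builtins.currentSystem;
  builder = ./simple_builder;   # hypothetical pre-compiled executable
}
```

Evaluation and realisation are then two separate commands: `nix-instantiate simple.nix` produces the `.drv` file in the store, `nix-store -r` on that `.drv` path runs the builder, and `nix-build simple.nix` does both in one go.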
So you wouldn't be able to distribute the building part if it were just one step — or at least it would be harder to do so, right? Yeah, or if you wanted to construct a derivation on one machine and build it on a different architecture. Right, because a derivation basically describes a dependency tree, so you can write an algorithm that takes the leaves and distributes them. Just to ask: what's the concrete form of a derivation? It's fully evaluated, yes, but what is it, concretely? Um, yeah, that's a good question. Gabriel Gonzalez wrote this really cool tool — let's see, it's part of this talk — he wrote a Haskell parser for it. It's basically a set of instructions, right? You've reduced the whole problem to a set of steps. Yes: all the outputs it's going to create, all the inputs it needs, what the builder is going to call to actually build things in the sandbox, what the environment is, what the arguments to the builder are, and so on — everything Nix really needs to build this thing in the sandbox. So you've effectively constructed a recipe that you execute with the builder? Yes. And the hashes are actually hashes of all these inputs — this is the purity part. And when you say all of the source, you mean hashes of the Nix expressions and of the things they reference? Yes, yes — it's kind of implicit, because all the inputs you get already include the hashes in their store-path strings. So if I reference a file in the build, it gets put in here with a hash, which is the hash of its content? Yes. And then it basically knows that you have different hashes, and it goes from there. So it forms a dependency graph.
The way it constructs the dependency graph is basically by scanning for hashes through the derivation file — and the same happens at runtime; I'll explain that in a bit. Does that answer your question? You seem convinced in part. Okay — maybe it becomes clearer as we go on, or we can go back and revisit this. And then comes this very common problem in packaging: bootstrapping, right? In this case we took the compiled file as our builder, and that was our input, which was pure — we always get the same builder; otherwise the hash would change if the contents of the file changed. But now we want, for example, to compile it from source, right? And then you get into this bootstrapping problem of: where do you get GCC from? You have to start somewhere and bootstrap, and this is where Nixpkgs comes in. We have something called the standard environment, stdenv, which is the baseline: we take binaries of GCC and so on and bootstrap the basic derivation with those binaries, then compile them again from source, and then you get stdenv, which includes the bash builder, GCC, coreutils, make, and all the standard tools — quite a lot of things you get out of the box. And then this is more like something you would really use from Nixpkgs. Okay, and here it becomes tricky. This is the typical pattern, or however you want to call it, where we import Nixpkgs and bring it into our namespace, so we don't have to say packages.stdenv and so on. And this is the part — the function — that calls the primitive derivation. And, you know, Nix itself doesn't come with any packages.
It's meant to be a very general-purpose thing, but to do anything useful you need this environment to build things in. We would need a lot more time to go through all of this, so I'm sorry for being a bit vague on this one. Then mkDerivation accepts these more high-level attributes: you can tell it where the source is, and you can define build phases. The way this works is that there's a long bash file that executes bash functions, and these build phases are bash functions that are inlined into this whole builder bash process. Of course, nobody stops you — you could basically use Haskell as a builder and pass it in. All of these attributes are passed in as arguments to the builder, basically on the command line, so nothing stops you from creating a Haskell builder, if you want to go that far. There is actually Guix, which is — I wouldn't say a competitor, but a friendly experiment — which uses Guile all the way down: Guile as the language for describing the derivations, but the builder is also written in Guile, so you have the same language the whole way down. So here Nix takes the pragmatic approach, and Guix goes a bit more extreme. And with mkDerivation you already get GCC and so on, so you would be able to use that. So, changing the C source in this case makes a new derivation? Yes — every time you change the source, just its contents, the whole hash would change because of that. Okay, so it's the checksum of the directory that is used? Yeah — all the inputs, right? In this case the source is the whole directory.
If you had a different file in there that wasn't even referenced in your build process — yes, changing that file would still rebuild. Yeah. Okay. So for local development, if you have local things like an editor config and so on, you have to filter those out, otherwise they become part of the hash. It's a lot to cover, so I can't go into everything — we would need a couple of days to really go through all the details. So these are content-addressed derivations, but there are also fixed-output derivations, which basically means: instead of calculating the hash, we give the hash to the derivation, and then you can allow network access, because anything you fetch can be checked against the hash at the end, right? So this is how you do networking, and again we get determinism through this hash that we provide. fetchurl would use curl in the background, inside the derivation, download the file, and then assert that the whole output has this hash. That's how you get a copy of, I don't know, a Haskell tarball. Yeah, that's our IO, basically — it rests on the fact that you assert the result. So your IO is bounded by the declared hash: whatever you pull in, the checksum of what you get is asserted. Yes. So what this hashing really allows us to do is, I think for the very first time, have a packaging system that is source and binary at the same time, right? These hashes uniquely identify how our packages were built, so we can ask a service: with this hash, can you give me the binary package? If not, I'll build it from source. So Nix is transparently source and binary at the same time, right?
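A fixed-output fetch might look like this (URL and hash are placeholders, not real values):

```nix
# Because the output hash is declared up front, the sandbox may
# allow network access: whatever is downloaded is checked against
# the declared hash, so determinism is preserved.
fetchurl {
  url    = "https://example.org/source.tar.gz";
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```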
We call this binary substitution — you can use SSH or HTTP or something like that to ask for the hash, or you build it from source. This is what we do at IOHK as well: we have the binary cache, so you can actually get a 30-megabyte compiled Haskell binary for most of our commits on GitHub. And I think one of the major things Nix really invented — or discovered, to be precise — is that path allocation works like memory management. When we went to high-level languages, you could say "give me some memory" and all that; this is what Nix does with paths. You say: give me a path with the name foo.txt and the contents bar, and you have it. Then you can nest that, and it builds the dependency tree — this is how it tracks what belongs together. And so there are two garbage collectors. One is in the language itself: if things are not referenced, it garbage-collects them. But there is a second garbage collector, because as we allocate these paths, there needs to be some way of cleaning them up — the store is immutable; it just creates paths and nothing ever really deletes them. That's the garbage collection part, which I won't talk much about in this workshop. But essentially, you define a strategy for how you want to collect things. You can say, for example: I want to have 10 gigabytes of my disk free, and everything else should stay. And it has this concept of GC roots, where you link the built Nix expression into a GC root, and that means it won't be garbage-collected. It's better to read that up.
It's not that important really — we have terabyte disks, so whatever. Then there's this concept of entering the environment in which the builder would build the derivation, and this is where nix-shell comes in — the development tool, right? If I have a default.nix file, which is the conventional name it will look for, and I say: enter the hello attribute, which is a derivation in that file — then instead of actually taking the inputs and building the derivation with the builder, it will just take the inputs and enter a shell with all of those inputs present. That works because our builder is bash; of course, if you used Haskell, you would have to implement your own kind of shell — I guess it would use ghci as the shell. But in this case you get into a bash environment with all of the inputs that would be used to build the hello package. By default that inherits your environment, which is convenient so people still have their editors and so on around. But you can pass --pure, and then whatever is not declared as an input to the hello derivation will not be there. So now we're getting to a part where things become very blurry for most people: how do you then use this functional language as a way to override our packages? That's what we kind of wanted, right? If we want this precision, then we'd better use it. Let's see if we can cover this quickly. A very typical package would be something like this: as I've said, an input up there where you pattern-match on all of the packages you will need to build it.
Then you would use stdenv.mkDerivation and use those packages as you go through the different stages. And the point is, now you can override these inputs — these packages you're getting in. But first we need to build a framework for that. The first naive thing one would do is: you have an attribute set of packages, you import them, and you fill in all of the inputs, and these inputs would actually come from this top-level attribute set. So for example, input one would be another package here, and input two would be another package, and so on — of course you cannot have cycles, right? And that's basically how the top level of Nixpkgs works: if you open that file, you will see a long list of packages like this. Now the problem here is that every time you rename input one, for example, you would have to update all the packages that take it in, and so on — it's a bit cumbersome. So what Nix offers is this builtin called functionArgs: you give it a function that pattern-matches on an attribute set, and it tells you what inputs that function has. It will say: it has an x, which doesn't have a default — the boolean means whether it has a default or not — and it has a y that has a default. There's a bit more to it than just that, but what callPackage — something you see in Nixpkgs very often — does is essentially this: it extracts all the inputs for the package and figures out, oh, I have to fill in the x input and the y input here. So you don't have to be explicit about it and can just pass an empty attribute set.
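The builtin behaves roughly like this (a toy example; the booleans say whether each argument has a default):

```nix
let
  f = { x, y ? true }: x;
in
  builtins.functionArgs f
  # => { x = false; y = true; }
  # callPackage uses exactly this information to decide which
  # inputs it has to fill in from the package set
```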
But if you want to be explicit and override an attribute, you can just pass it in, and everything else will still be filled in by reflecting on the function arguments — only this one is explicitly overridden. Is that too magical? Defining callPackage is a bit of work, but hopefully you get the idea. At the end I'll give a reference to Nix Pills, which is kind of like the long version of what we're going through. Why does it return false and true? This basically means: does the x input have a default value or not? This is the default — if you write a question mark, then if you don't pass y in, this will be its value. Sometimes that's useful, but in callPackage I don't think it's really used; it's more just about what inputs it has to fill in. Any other questions? Then what callPackage does is add a couple of attributes onto the derivation. One is the override function, so you can now take this derivation and override its inputs. As you see up here, these are the inputs we used for that derivation, and you can override them. So what you can do is take, I don't know, GHC, and say: override, use a different version of libgmp — and you have two GHCs now, one built with one version of the library and one with the other. And you can use this overriding mechanism to build different package sets.
One of the things I did in the past: we wanted to benchmark software, so we built a huge matrix of different kernels, QEMU versions, and so on, and used that to benchmark the software — basically just by overriding, which is something you cannot really do in Docker or tools like that. And this one actually overrides these attributes here that are passed to mkDerivation. So you can either override the function up here, or the attributes passed in — and there can be a let ... in in between, doing some concatenation or something, so they can have different values. So that's the high level of overriding. Then there is this gist by Russell O'Connor from 2014 where he explores how to do dynamic binding in Nix. The problem is: if you have an attribute set like this and you use the rec keyword, then x here will come from this definition. But if you merge attribute sets — which means you take an attribute set and another one and merge them together — then the rec doesn't have an effect any more; it's local to the attribute set up here. So you will see that x here has the overridden value def, but the part that was evaluated under rec actually evaluated the x in the original attribute set, right? So why is this really important? Because we want to have a package set, and we want to add new things to it, but we don't want any values referencing the originals to keep the old values, right? We want the whole thing to actually be precisely overridden.
So what we do is define the fixed point — or the Y combinator — right here, and I won't go into how that works; most of you probably already know, and if you don't, there are enough materials out there. What this fix function basically does is take its own return value as an input, and then instead of using rec, you write self.x. And what you can then do is stack these attribute sets together and receive them as an input. I'll show a bit later how that works, but this is kind of the baseline. This is one of the parts that's kind of hard to grasp, but the fixed point is our building block, our helper here. There's a bit more he does in the gist, building a framework for this overriding of package sets, which we'll see very soon. I haven't bothered to go through the whole thing — it's pretty long, actually — but you'll see how it's used. If you know that it's a fixed point, I think that's enough to understand how it works. Okay, so this is the overriding part: how we override packages, and how we use the Nix language to really solve dependency hell — the multi-version hell — and be precise about it. There is one part missing here, which I'll show a few slides later: how this is really used to merge the attribute sets. Okay, now I want to move on to how the Haskell ecosystem uses Nix. First of all, there's the Stack-to-Nix integration that most of you use, if you use Cardano.
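To make the fixed-point trick concrete outside Nix, here is a small Python model I'm using purely as an illustration — the names fix and extend mirror the Nix helpers, and attributes are stored as thunks to imitate Nix's laziness:

```python
def fix(f):
    """Compute the fixed point of f: attributes in the set can
    reference the *final* set through `self`."""
    result = {}

    class Self:
        # Looking up self.name forces the thunk in the final set.
        def __getattr__(self, name):
            return result[name]()

    result.update(f(Self()))
    # Force every thunk now that all overrides are in place.
    return {k: v() for k, v in result.items()}


def extend(f, overrides):
    """Layer `overrides` on top of `f`; both see the final set as `self`."""
    def g(self):
        attrs = dict(f(self))
        attrs.update(overrides(self))
        return attrs
    return g


# y references the final value of x, not the one defined next to it.
base = lambda self: {"x": lambda: 1, "y": lambda: self.x + 1}

plain = fix(base)                                            # {'x': 1, 'y': 2}
overridden = fix(extend(base, lambda self: {"x": lambda: 10}))  # {'x': 10, 'y': 11}
```

Overriding x really changes what y sees, which is exactly the behaviour a plain attribute-set merge on top of rec cannot give you.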
Basically what it does is, if you enable it in the stack.yaml file and list the packages there, then behind the scenes, when you call stack install, it executes nix-shell — it constructs this command line — and then calls stack again, re-passing the arguments. So it's basically the same as saying nix-shell with those two packages and then calling stack; it's just declarative in the stack file, right? And you can go a step further and say: okay, don't list the packages, use a Nix file instead — then it's the same as nix-shell on that file and executing stack within it. The purpose, of course, is to provide system libraries to stack. The problem with this approach is that stack knows precisely what Haskell packages it wants to build, but it has no idea about the global system state. So every time you change anything in the nix-shell, it recompiles everything from scratch, because it's like: I have no idea how this affects my whole build, so the safest thing is to start from scratch. And that's really unfortunate when you have Cardano, which has like 360 dependencies if you want to build the full thing. So let's get into a bit of the Haskell-and-Nix infrastructure. There is this cabal2nix tool that parses a Cabal file and translates it into how Nix would build that Cabal file. It takes the Haskell infrastructure and defines another mkDerivation, which wraps the stdenv mkDerivation one step up, and defines Haskell-specific attributes that you can then tweak.
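Declaratively, the integration is enabled in stack.yaml along these lines (the package names are illustrative):

```yaml
nix:
  enable: true
  # system libraries to provide via nix-shell
  packages: [zlib, pcre]
  # alternatively, point at a Nix expression instead:
  # shell-file: shell.nix
```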
Like: should we build the library, should we build an executable, what are the dependencies of the library, the tests, the executable, and so on. This is all extracted from the Cabal file. All of the values used here are also in the closure of this function, so that we can override it from the outside; an override of this function up here then reflects in the derivation built for Alex, in this example. Then we go through the whole package set and call cabal2nix on all the packages, and you get this huge attribute set. In the past, before Stackage came along, Peti used to try and fit the versions together by hand. Now we default to the latest Stackage, and for the rest of the packages we take the latest version, so we basically curate the packages that are not in Stackage, because we want to have the full latest set. We used to have the full history, but that would be a couple of megabytes by now. It's doable: if somebody wanted to do it as a community project, it's doable to build all of this. We calculated that building everything with and without profiling, in all the common ways to use it, would be something like 15 or 20 terabytes altogether. So it's not unreasonable, but nobody has done it yet. Alright. So we have these very high-level Haskell attributes, and as a builder we understand how each one affects the build. And then we define combinators that take a derivation and override it; for example, check means run the tests, so we have doCheck and dontCheck combinators.
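A simplified sketch of the kind of expression cabal2nix emits for a package like Alex; the version and hash are placeholders, and real output carries more fields:

```nix
# Generated-style Haskell derivation: every attribute extracted
# from the .cabal file, each of them overridable from outside.
{ mkDerivation, array, base, lib }:
mkDerivation {
  pname = "alex";
  version = "3.2.3";
  sha256 = "<hash>";           # placeholder
  isLibrary = false;
  isExecutable = true;
  executableHaskellDepends = [ array base ];
  license = lib.licenses.bsd3;
}
```

Because the whole set of inputs is a function argument, the infrastructure can re-call this function with tweaked values, which is exactly the override power described above.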
So what it means is: you call cabal2nix to generate what it gets from the Cabal file, but then you can say, okay, enable the tests, or disable the tests. For example, if you have the mtl package, you can say dontCheck and you get a new package back that won't run the tests. And then you can use these combinators to compose and tweak all of these attributes, more or less. Okay, any questions so far? Sorry? You're being very quiet. Alright. I don't know if maybe it's too vague, or maybe you understand everything already; it took me several months before I began to understand it, I guess. Okay. And this is where our previous fixed point comes in. If we start down here, this is a package attribute set of all the packages. Then we extend this with Nix-specific configuration and with common configuration; the difference is that one adds the system libraries and the other overrides some of the versions, depending on what versions are in the package set. Then there's a compiler-specific thing, a package-set config, and the overrides, which are the overrides you yourself want to apply. What this does is use a fixed point, and then, a bit like lenses, you take the original package set, apply a bunch of overrides, and go on and on. The way it works is that each layer has two inputs, self and super. Super is the set one layer below: for the Nix configuration layer, super is the plain Haskell packages. Self is the final result, including the Nix configuration itself; each layer gets the final return value as an input. So when you override something, you can either take the package from super, or take it from our own final attribute set after all the changes.
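One override layer in that stack can be sketched as a two-argument function; dontCheck and callPackage are assumed to be in scope, and `my-app` is a hypothetical package:

```nix
# `super` is the set one layer down; `self` is the final merged
# result, so overrides can see each other's final versions.
overrides = self: super: {
  mtl = dontCheck super.mtl;               # tweak the layer below
  my-app = self.callPackage ./app.nix {};  # deps resolve against `self`
};
```

Taking `super.mtl` means "the mtl as defined before my layer", while anything pulled from `self` is the version after every layer has been applied.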
Yeah, this file is a bit simplified, but that's basically what the Haskell infrastructure does, and then it's all stacked together. Any questions about this? (Audience: It's really just a lambda function with two arguments?) Yeah. (Audience: I would have initially thought it was a lambda sticking in another lambda.) Yeah, it's basically two lambdas, so you can do currying. It passes that in; this makeExtensible and extend is what uses the fixed point to pass in the previous set, and the final set itself, through a fixed point. Any other questions? And then, at the end, this is how you would use all of what we covered so far. You say: I want the Haskell packages with compiler 8.0.2. For every compiler, we take the package set, pass the compiler in, and create a new package set. You can get such a set by saying packages dot the compiler, including GHCJS or whatever. Then you override this set: you say override, and now you have access to the whole thing, and you can say, for example, okay, in the attribute set I'm interested in, I want to build stack, but in that dependency graph I want to take mtl and not run its tests, and so on. And here comes all the precision, where we override our package set. I see some faces are confused and some are not; I guess it correlates with experience of Nix. We would really need a three-day workshop to go through it properly. It takes time. The major problem with Nix is that it does so much and covers so many things that it's really hard. (Audience: And it does it all with attribute sets?) And it does it all with attribute sets, yes.
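Putting the pieces together, a hedged end-to-end sketch using the nixpkgs attribute paths described above: pick the GHC 8.0.2 package set, then layer one override on it so mtl's tests are skipped for everything built from this set, stack included:

```nix
# Select a compiler-specific package set and apply an override layer.
let
  pkgs = import <nixpkgs> {};
  hsPkgs = pkgs.haskell.packages.ghc802.override {
    overrides = self: super: {
      mtl = pkgs.haskell.lib.dontCheck super.mtl;  # skip mtl's tests
    };
  };
in
  hsPkgs.stack   # stack built against the tweaked dependency graph
```

Every reverse dependency of mtl in this set now depends on the test-less mtl, which is the precision the overriding machinery buys you.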
And the functions that operate on the attribute sets are not as precise as the ones you have in Haskell, for example. So there's legacy there. Okay, now I want to cover a bit how we use Nix at IOHK, at a very high level. One of the first things you see when you start using Nix is this angle-bracket syntax, like `<nixpkgs>`, which basically means: take nixpkgs from the search path, the NIX_PATH environment variable. This is a pattern that runs through the language packaging; I don't know who invented it, but it goes on and on, and it's again this global environment thing where it's really hard to figure out what's going on. In Nix we have channels that provide nixpkgs, you update them, and it's all a bit stateful, so I usually recommend not using that at all. What we do at IOHK, and this was actually a joint effort by a couple of people, is we have this nixpkgs-src.json file that pins nixpkgs down to an exact commit. Then you import a file, present in all repositories, that uses a derivation to fetch that nixpkgs revision, and from then on you use the imported nixpkgs. So there is no global state; what is used is part of the Git repository itself, pinned down to the commit. How it's implemented I won't cover, you can look it up, but it has definitely improved our results a lot in terms of determinism compared to the global approach, because with channels you have to be careful on CI that Nix has the same packages as you have locally, and so on and so on. That's cumbersome.
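The pinning pattern can be sketched like this; the file and field names follow the convention described in the talk, but treat the details as illustrative, and it assumes a Nix version whose `builtins.fetchTarball` checks the hash:

```nix
# fetch-nixpkgs.nix: turn the pinned commit in nixpkgs-src.json
# into an importable nixpkgs, with no reliance on NIX_PATH.
let
  spec = builtins.fromJSON (builtins.readFile ./nixpkgs-src.json);
in
  import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/${spec.rev}.tar.gz";
    sha256 = spec.sha256;
  }) {}
```

Everyone who checks out the repository, CI included, now evaluates against exactly the same nixpkgs revision.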
Then the second thing we did is we wrote this cool tool called stack2nix. It uses Stack's own internals to compute the whole package set that Stack would build, whether a package comes from Hackage, a local folder, or Git, and then it calls cabal2nix on all of those and builds our own package set. And why do we want that? We want the developers to use the same package set as is used in production, so that we at least have some guarantee that what we deploy was tested and used. We don't want the deployments to use a completely different set and then start finding new bugs. So this is a bridge: we want developers to use Stack but deployments to use Nix, and this is our bridge between the development and the deployment worlds. It was recently rewritten and it's quite fast now; it supports macOS, and it no longer has the Cabal-revision non-determinism that a lot of you were hitting. One of the things we want to improve is being able to override the compiler; right now it's hardcoded to 8.0.2, but we're getting there. Then Cardano itself provides a package set again, which translates directly from what Stack uses, and you can say nix-build with the cardano-sl attribute and you get the cardano-sl package, and so on; you run this inside the cardano-sl Git repository. And, for example, we wrote the connect scripts, so now you can actually build a full node: this builds a bash script that runs the node, and if you want to connect to mainnet you use the first line, if you want to connect to staging the second line, and so on.
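A hypothetical shape for the repository's top-level default.nix, tying the stack2nix output to the attributes mentioned; all file and attribute names here are illustrative, not the actual cardano-sl layout:

```nix
# Top-level expression: the stack2nix-generated package set plus
# convenience attributes that hang off it.
{ pkgs ? import ./fetch-nixpkgs.nix }:
let
  hsPkgs = import ./pkgs/default.nix { inherit pkgs; };  # stack2nix output
in {
  inherit (hsPkgs) cardano-sl;
  # built with something like:  nix-build -A cardano-sl
  # a connect script would be another attribute on this set
}
```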
There's also this top-level attribute set where you can override, for example, the topology file, for people who want to use a different one, and so on. So we're now able to build different kinds of bash scripts, and the next step is going to be testing them: we'll take a virtual machine, run all of those inside it, and make sure that they all work. And you can also use Docker; we have a Docker function which basically uses the connect scripts inside Docker: it provisions Cardano inside the image and then uses those connect scripts. This is what some of our partners use to build a Docker image. Then, one of the questions, if I remember correctly, was: how do you develop a package in Nix instead of using Stack? There is this attribute on each package, the environment, and that's what you can enter into. For example, if you want to develop on the wallet, you say nix-shell and you get basically all the packages that are needed to build the wallet. If you're going to modify anything that's not in the wallet but is a dependency of the wallet, then you will have to re-enter the nix-shell, because that's a dependency and it has to be rebuilt. But if you just work on the wallet, you're fine. And what's nice about this is you can switch back and forth between branches: Stack would recompile the whole thing, but here, once you've built the dependencies once, they're on your machine, so you can just go back and forth. Yeah. Future work: what are we going to focus on at IOHK? One is Nix 1.12, which is probably coming out soon; it's been going on for the last year, so, very soon. It has a lot of really nice bug fixes and features; I won't go into them.
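The development-environment attribute described above can be sketched like this; the wallet attribute name is illustrative, but the `env` attribute itself is what the nixpkgs Haskell infrastructure exposes on each derivation:

```nix
# The `env` attribute of a Haskell derivation is a shell
# environment with all of the package's dependencies prebuilt.
let
  hsPkgs = import ./default.nix {};
in
  hsPkgs.cardano-sl-wallet.env
```

Entered as `nix-shell -A cardano-sl-wallet.env`, you get a shell where every dependency comes from the store, so only the wallet itself needs compiling.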
There is a talk by Eelco Dolstra at NixCon this year that covers what's coming, so better to just watch his talk. For deployments we currently use NixOps, which is kind of like NixOS but with provisioning of cloud services. The alternative is Terraform. Terraform is way better when it comes to provisioning, because the community is huge and there's a lot you can provision with it. So different companies are now exploring how we could bridge those two: use NixOS for the operating system but Terraform for the provisioning part. And there are different ways to do it: Dhall, JSON-to-Nix, and HCL, which is the Terraform language. At the end it's all JSON, so you pick your precision. One of the other things is the multiple-outputs GHC patch, which has been ongoing together with the folks from Tweag. We have been trying to get it in, but it's hard; I think it needs a couple more days of work. What does it do? We talked before about how a build creates this out folder, the whole Nix-hash thing; you can create multiple of those, called lib, bin, and so on, and when the runtime dependency tree is generated you can lift things out. For example, if you can compile the executable statically, then you don't pull in all of the Haskell libraries: instead of downloading one gigabyte, including GHC, you download just the 30 megabytes of the static executable, which doesn't reference them. And there is a separate output for documentation, and so on, so you can be more precise about what the runtime dependencies are. Right now we have these -static packages so binaries don't pull everything in, and with the patch this would be out of the box: all the binaries you reference wouldn't have this huge thing you have to download. So it's mostly about reducing what you have to download.
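Multiple outputs in a derivation can be sketched as below; this is a generic stdenv sketch, not the GHC patch itself, and the output names are just the conventional ones:

```nix
# Each name in `outputs` becomes its own store path, so a consumer
# can reference just the binary without pulling libraries or docs
# into its runtime closure.
{ stdenv }:
stdenv.mkDerivation {
  name = "example-1.0";
  outputs = [ "out" "bin" "doc" ];
  # the build phases install into $out, $bin, and $doc respectively;
  # referencing only the "bin" output keeps the closure small
}
```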
Which is really nice when you build a Docker image and hand it to someone: it's nice that it's 50 megabytes instead of a gigabyte. Then there is cross-compilation. Ideally we would have Nix for all the platforms, so going from Linux to Windows, compiling with Nix. There is a lot of work in GHC right now on targeting ARM, and questions like how we deal with Template Haskell, and so on. One of the things that is also interesting is CI. The Hydra we use is completely pure: it gets inputs, hands them to Nix, distributes the builds, and gets the build results back. But in CI there's always this impure part where we have to do some extra stuff interacting with the world, and there's a lack of a good tool for that right now; we use Buildkite, but it's far from perfect. And then there are the NixOS tests, which are basically a driver for operating a VM: you can say, use this NixOS config, build a VM, then wait for this service to listen on this port, wait for this service to start, block this port, see what happens, and so on. This is really nice for reproducible tests, and at IOHK Michael already wrote an initial version where we launch four nodes, they start the blockchain, you can make some transactions and exit, and you get an output of logs that you can analyze and graph: for example, how long did it take to create blocks over ten minutes, how many blocks were missed, and so on. This could all be automated.
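A hedged sketch of a NixOS test in the style described, using the era's Perl-based test driver; the machine config and assertions here are a generic example, not the four-node blockchain test:

```nix
# Boot a VM from a NixOS config, then script assertions against it.
import <nixpkgs/nixos/tests/make-test.nix> {
  machine = { config, pkgs, ... }: {
    services.openssh.enable = true;
  };
  testScript = ''
    $machine->waitForUnit("sshd.service");   # service came up
    $machine->waitForOpenPort(22);           # and is listening
  '';
}
```

The whole VM, its configuration, and the scripted interaction are all inputs to a single derivation, which is what makes these tests reproducible.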
This is basically almost ready; the part that was not working is that it uses a VM, and on Amazon, if you do nested virtualization, it's basically emulation, not virtualization, and it's really slow, so the blockchain can't keep up and it starts to fall over. So we will probably have to use bare-metal machines to do this kind of thing, and we have to provision that. These are some of the references; I would mainly recommend reading the Nix Pills, which go into great detail about what I just described. It's a kind of long tutorial about the insides of the Nix packages, it's pretty long, but at least it explains things. (Audience: But they are bite-sized.) Sorry? (Audience: But they are bite-sized, [inaudible].) Yeah, they're a before-you-go-to-sleep kind of thing.