Okay, my name is Bjorn, and I will talk about Haskell and OpenBSD pledge. Here's a little overview: I'll briefly talk about pledge, then talk a lot about Haskell, then explain the Haskell-specific bit that I've implemented and the thinking behind it, and then I'll show a code example, and that will probably easily get us to 45 minutes. It's eight past, so I'll aim to end at 4:50 or so, so people can ask questions — not that many people here anyway, but all right. I hope this is legible; I can make it bigger otherwise. So let's go. OpenBSD pledge, if you've never seen it, is part of the kernel API, and the idea is to restrict the kernel API that a process can use. The idea behind that is, of course: especially if you're implementing a long-running service that is exposed to the network, you obviously want to write it in a way that it can't be taken over, but you never know, and someone might be able to inject code and take over your program. So we want to lock down the things they might possibly do, as much as feasible. Or to flip it around, dually: we want to throw away all the parts of the kernel API that we know we don't use — we want to shut off access to them, because then they can't be abused. Here's a little, I mean, silly example program. We see the pledge call; it has two parameters, two strings, and we're not going to talk about the second one. The first one says "stdio", which stands for a group of kernel API endpoints, such as reading and writing standard input and standard output, getting the current time, and a bunch of other things — I'll have a peek at the man page maybe later to fill in a bit more detail there.
Okay, anyway, so we call pledge with the "stdio" promise — these elements are called promises — and after we call pledge, basically the only thing the kernel will let us do is interact with input and output and a few other things, which we use here, of course, to printf "hello". Then we do another pledge call — they can be repeated — and this time we pass no keywords at all. This means we're throwing away all access to the kernel except for exit; exit can always be called. But then in our program we have another printf, and because we've declared that we will not use anything other than exit after this point, the restriction is enforced by just terminating our program. So if we were to run this — I wrote this here in the comment — the process would be terminated before the second printf. That's the idea. There are around 20 promises: there's "rpath", "wpath", "cpath", "dpath" — these are for various operations on the file system — there's "inet" for opening network sockets, and a bunch more that are very specific; they have their uses, but in non-system-level applications or services they'll probably not appear so often. Okay, so that's just a little overview of the idea; there are probably people at this conference who are much more knowledgeable about this, so I'll leave it at that. Haskell — sorry, I know more about Haskell than about OpenBSD or operating systems, so I'll talk a little more extensively about that. Haskell is a programming language which you might have heard of. It's roughly 30 years old, more or less; the ideas go back to the 80s and 90s.
Yeah, so there were various academic research groups and individuals that had ideas about functional languages, and it turned out they were converging, so they decided they should agree on a standard — a base that they could work on — and the result was Haskell. The defining feature, of course, is that it's functional. There's a little bit of example code here just to give you an idea what it means to be a functional language, in case you've never encountered the idea. Let's start at step zero, which is that everything is an expression, and one way to form expressions is this thing called lambda abstraction. Because the ASCII character set doesn't include the Greek letter lambda, we have to use a backslash; that's what this backslash x is supposed to mean — lambda x. Then we have an arrow, and on the right-hand side of the arrow we have another expression that has the free variable x in it. We read this as binding, or abstracting over, x. We will see in a step or two what this means and what we can do with it. So in the first example we have the idea of abstraction, and in the second example we have the idea of application. We have a term on the left that is in parentheses, and a term on the right that is in square brackets. The square brackets are special notation for lists. Haskellers like to talk about syntactic sugar: the surface language has some built-in things that make it nice to write, but they will ultimately be unpacked into something more uniform.
Anyway, square brackets are notation for lists. On the left-hand side, we have a term formed by binary composition: the composition operator is the dot here. That's a valid operator name, and it's not a built-in — it's just defined in a module somewhere, and it stands for function composition. So reverse, as you can easily imagine, is a function that reverses lists, and length, as the name says, computes the length of a list. Composition happens from right to left: the reverse function happens first, and then length happens after. So you can read this as "length after reverse" — that's just one way to pronounce it. The point here is that the dot is written infix, but that's just a particular way to write function application: it could also be written as a prefix application, an application of the dot to two parameters, which in this case are length and reverse. So what's happening here is that we're applying a function to two parameters which are also functions — more importantly, they're expressions. That's the core of functional programming: we abstract, and hence form functions, and then we apply functions to expressions, which can also be functions. And it's extremely useful, as we can see in this case, because we can just define an operator that composes functions. Then obviously this parenthesized term is applied to the term on the right; the idea is that we reverse the list and compute the length. Not a smart thing to do, but I'm just trying to show simple examples. Obviously this could be written differently, without the composition operator, but hey — we're just getting started.
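To make that concrete, here is the example as a small runnable sketch — the list literal is my own, and note the Prelude function is spelled `reverse`:

```haskell
main :: IO ()
main = do
  -- "length after reverse": reverse runs first, then length.
  print ((length . reverse) [1, 2, 3])
  -- The infix dot is just ordinary application of (.) to two arguments:
  print ((.) length reverse [1, 2, 3])
```

Both lines print 3, since reversing a list does not change its length.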
The next really crucial bit — well, for this discussion it's not so important, but one of the core features of Haskell, and the main impetus behind its development, was the idea of lazy evaluation, hence the sloth; you can see the sloth. Haskell really crystallized around the idea of lazy evaluation and a particular kind of type system; those are the core ideas. Lazy evaluation is illustrated here. On the left-hand side we have a parenthesized term which is a lambda abstraction, just like in the first example; on the right-hand side we have another parenthesized term, which is the application of a binary operator to two constants. Now, if you were given this thing and asked what the result is — what it computes — there are two ways you could go about it. You could first calculate two plus two, which is four, a new constant, and then unwind the application of the lambda abstraction on the left to this new constant, four. The point of application is, of course, that the parameter gets put in the place of the variable — in this case, four would be put in place of the x in the expression we abstracted over — so we end up with four times four, which again we can evaluate. That is what's called strict evaluation order. What Haskell does is different: it will happily substitute the two plus two for the x, so we get a term that says (2 + 2) * (2 + 2), with parentheses that I'm not pronouncing, and then we can go from there.
Because, okay, there are more primitive operations that execute the arithmetic on the CPU. Now, that seems wasteful, right? But there's another aspect, which is called sharing. In fact, we're not syntactically substituting; we're actually just forwarding a reference, so the 2 + 2 only exists once in memory. When we have this expression (2 + 2) * (2 + 2) and we go about evaluating it, we evaluate 2 + 2, but that's actually a reference to the term itself — so once we've evaluated it, we have already evaluated the second occurrence of 2 + 2 as well. This is called lazy evaluation with sharing; call-by-need is the technical term for it. Now, you might wonder why. The basic idea is that by being lazy, we can sometimes save work. On the left-hand side we might have some much more complicated function, and in some cases, depending on some other parameters, we may not actually need the value we're passing in. So we may never need to evaluate this expression 2 + 2; we could just forget about it — or let's say it will eventually be forgotten. Anyway, the point is that laziness can sometimes save work, and in a lot of situations it also makes it much nicer to think about code refactorings. That's not really the focus here, but just to provide some context. Haskell is also pure. It's a bit difficult to give a really precise definition, but the idea is that we avoid mutation — not at all costs, but mostly we do not want mutation — and we want functions that, given the same arguments, always evaluate to the same result.
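A tiny runnable illustration of the two points — the sharing itself isn't directly observable, but laziness means an unused argument is never evaluated at all; this sketch is mine, not from the slides:

```haskell
square :: Int -> Int
square = \x -> x * x

main :: IO ()
main = do
  -- Under call-by-need, (2 + 2) is substituted unevaluated but shared,
  -- so it is computed at most once.
  print (square (2 + 2))
  -- Laziness can skip work entirely: const ignores its second argument,
  -- so the error expression is never evaluated.
  print (const 1 (error "never evaluated"))
```

The first line prints 16; the second prints 1 without ever touching the error.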
Meaning — well, the important reason we want that in a lazily evaluated language is that it doesn't matter when we evaluate, because a function has no side effects, and we can also freely share. We don't need to worry that side effects happen some unexpected number of times, or at some unexpected point in time, because they are excluded. All the examples I've given are pure expressions: evaluating them has no side effect, for a reasonable definition of side effect. And all of this is pretty tightly intertwined with the type system. I'm not going to go into much detail on that now — actually none at all — and in later examples I will explain to the necessary extent. Okay, so that was Haskell the language. These people I was talking about, who got together and decided to collaborate, actually wrote a language standard. It's not a formal specification, but it defines standard functions that should exist, what types it makes sense for them to have, and a bunch of other things. And there were several different implementations of this language standard. But nowadays GHC, the Glasgow Haskell Compiler, which exists in OpenBSD ports — the latest version, I believe, is 9.2.8 — is basically the only game in town. And it's really an industrial-strength product. I use it in my work; our entire web service backend is written in it, and it's a legal tech context, so we have to have pretty high confidence in this stuff.
It's obviously not a hugely popular language, but it does have serious, committed users. All right. So GHC, like I mentioned, is the de facto standard compiler. It generates native code for various common platforms, and it can do this in different ways: via C, via LLVM, and it also has its own native code generation backends. There's pre-release work for WebAssembly and also JavaScript targets; it's been a long time coming, but I think it's already available in an alpha stage. GHC also comes with a REPL, which is sometimes extremely handy. And it comes with its own runtime system, because there are a lot of things you need to do in the background to actually make all this stuff work — one of which is M:N multithreading, which is pretty nice. The runtime system also provides various synchronization primitives for threads, among which is software transactional memory; if you want to be very tidy about thread interactions, that's a very nice thing to have. And lazy evaluation basically necessitates a garbage collector, because we want to abstract away from when our expressions are evaluated and how often they're evaluated — and evaluating an expression changes its memory representation, right? We abstract all of that away, so we definitely need a garbage collector. The collector actually employs several strategies, and there's also preliminary work on a multithreaded one. It's pretty sophisticated, and there's very active development on that front as well. Okay. Then around the compiler we have Hackage, which is basically the only relevant package repository. And there's a thing called Hoogle — yeah, okay.
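Since software transactional memory came up: here is a minimal sketch of what STM looks like with GHC's stm library — the counter-transfer example is my own, not from the talk:

```haskell
import Control.Concurrent.STM

-- Move n units between two shared counters in one atomic transaction;
-- other threads can never observe the intermediate state.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to (+ n)

main :: IO ()
main = do
  a <- newTVarIO 10
  b <- newTVarIO 0
  atomically (transfer a b 4)
  readTVarIO a >>= print  -- 6
  readTVarIO b >>= print  -- 4
```

The whole transfer either happens or it doesn't, which is exactly the "tidy thread interactions" point.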
So the naming is a bit humorous at times — Hoogle is a search engine, basically; it indexes the entire Hackage repository. And there's a build tool, Cabal, which is also a package manager, and the main source for packages is obviously the package repo, Hackage, which I already mentioned. There's also an implementation of the Language Server Protocol, which I believe is a Microsoft invention, originally for VS Code, but many editors now support it, including Emacs and Vim, I guess — I don't know what else people use. Anyway, there's an implementation called the Haskell Language Server, which is a sophisticated wrapper around the compiler: the point is that it reuses GHC itself as its core. Okay. So let's do some hello world. This is what a moderately simple Haskell program looks like. Compilation units are called modules; they correspond to files, and the compiler will enforce the correspondence of names. So, for example, this module called Main has to be in a file called Main.hs. We import something from another module — there's a hierarchical namespace for modules, and we can selectively import; here we're importing a function called appendFile from this module, obviously. We define a function main, and it does a bunch of things. The thing to note here is that we say main and then give its type; the notation for that is just a double colon. On the right-hand side we have the type, and it says IO of — yeah, nothing, I guess. I don't actually know the standard pronunciation; it's supposed to denote an empty tuple. So it's IO applied to the empty tuple: that's the type of the main function. I'll say a little more later about what this means.
So we have the type declaration, and then we define the term: main is this. We have a do block — do introduces a sequence of actions, which is a way of writing imperative code in Haskell. There's a function called putStrLn, which does what you'd expect. We fetch some input with getLine and bind it to a name, then we append the contents of that to a named file, names.txt. Then we apply a function to this name — defined up here, with a type signature that tells us it's a function from String to String — and we print the result. I hope it's not too jarring; if you're not familiar with this, I think it should be reasonably understandable. Notice that Haskell relies a lot on indentation: you could use curly braces to delimit this do block, for example, but it's usually not done. We could also leave off the type declaration, because in early Haskell there was a lot of code people wrote basically showing off: hey, we can do type inference, so we don't actually need to write the types — the compiler will infer and check them for us. But now, in serious applications, it's frowned upon to have top-level definitions without declared types, and the compiler will warn about it nowadays, which you can turn off. Anyway, that suffices to get to know the language a little. Now, obviously, we want to talk about pledge: how do we enhance this program? We should analyze it somehow.
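Reconstructed from this description, the program presumably looks something like the following — the helper function's name, its body, and the exact strings are my guesses; only the shape is from the talk:

```haskell
module Main (main) where

-- Selective import; appendFile is also in the Prelude, so this is
-- purely illustrative.
import System.IO (appendFile)

-- The talk only says this helper has type String -> String;
-- the name and body here are hypothetical.
greet :: String -> String
greet name = "Hello, " ++ name ++ "!"

main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine
  appendFile "names.txt" (name ++ "\n")
  putStrLn (greet name)
```

Each line of the do block is one action, executed top to bottom; `<-` binds an action's result to a name.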
If we look at the man page and think about the implementation of these functions — putStrLn, appendFile, getLine — they ultimately call stuff in libc: open, write, and so on. So if we analyze this carefully enough, we come up with the promises that are needed. putStrLn writes to standard output, and the promise that allows that is called "stdio". appendFile needs to open, and possibly create, a file in the file system, which requires the "wpath" promise; it also needs to write to that file, which requires the "stdio" promise. getLine reads from standard input — and you can read the rest. That's straightforward enough. Okay, so how do we do this the Haskell way? Oh, that's not a comment — that's actually... okay, right. So now we're getting to the stuff that I implemented to actually call pledge from Haskell in a smart way. The low-level bit is this function pledge, which goes from Set Promise to IO (). Underneath it is the FFI binding; we do some marshaling there where we serialize some constants: we print out these text constants and concatenate them. I imagine — I'm not totally certain about this — that it was for usability that the OpenBSD pledge API just takes a space-separated string of these keywords; it's incredibly easy to use if you're writing a C program and need to apply this. So this pledge function does that under the hood. Now, because we work in Haskell — I mean, we could use strings, but we want to be more precise, right?
And the way to do that is to use a data type. The keyword data introduces an algebraic data type, which we're defining here; the data type is called Promise, and it has these constructors — I counted them once, I think there are about 20. Now, this is essentially an enum: at runtime these will be represented by one machine word each. The deriving clause means we're asking the compiler to generate some boilerplate for us. Show gives us a canonical way to print these things, by just converting the constructor names into strings. Eq gives us a way to compare them for equality, Enum allows us to enumerate them, and Ord does ordering comparison — the order is just the one in which they're enumerated here. This stuff is sometimes useful, and it's a habit to define them for most types; and it's free, in the sense that it's only one line of code. There's the pledge man page, obviously, that explains what all these things actually mean. All right. So now we take our example program and enhance it with pledge. Remember, the type signature told us we need to pass in a set of promises. I could have used lists, but Set is a little nicer, because it makes sure that when we process it the keys are all unique and sorted — which, by the way, requires the Ord instance, the canonical ordering. Okay. So these are the promises we figured out we need to use. And we insert a pledge call before everything, after everything, and in between all the application actions — why not, right?
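Put together, here is a cut-down sketch of the data type and the marshaling just described — only five of the roughly 20 constructors, and lower-casing the Show output is my assumption about how the real binding renders the keywords:

```haskell
import Data.Char (toLower)
import Data.Set (Set)
import qualified Data.Set as Set

-- A few of the ~20 promises; one machine word each at runtime.
data Promise = Stdio | Rpath | Wpath | Cpath | Inet
  deriving (Show, Eq, Ord, Enum)

-- Render a constructor as its pledge(2) keyword, e.g. Stdio -> "stdio".
keyword :: Promise -> String
keyword = map toLower . show

-- Serialize a set of promises into the space-separated string the
-- C pledge() call expects; Ord keeps the keys unique and sorted.
promiseString :: Set Promise -> String
promiseString = unwords . map keyword . Set.toList

main :: IO ()
main = putStrLn (promiseString (Set.fromList [Wpath, Stdio, Wpath]))
```

The duplicate Wpath is deduplicated by the set, so this prints "stdio wpath".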
I mean, it's potentially wasteful, but let's keep it simple. In the end we want to end with no promises, so we create a set from an empty list — that gives us an empty set — and run pledge on it. That will ultimately call the C pledge function I showed in the beginning with an empty string as the first parameter, which discards all promises and leaves us only with the exit call. So then the question is: putStrLn requires the "stdio" promise, so we should probably put that here, where this underscore is — which, by the way, is valid Haskell notation. In this case, the compiler would tell us there's a hole here — "hole" is the technical term — which you need to fill, and it needs to have this type. That's extremely handy, especially when you're using the Haskell Language Server, because then your editor, when you move the cursor there, will tell you how to fill the hole. Okay, so we need to put "stdio" there. Now, appendFile needs both of these, so obviously we put fromList of both elements here. Then we go one line further up, and this is where it becomes non-trivial: getLine requires "stdio", but later in the program we already know we need "wpath", so in this pledge call we also need to include "wpath" — and so on, and the same for the first call. Like this. I think that's pretty clear. Okay, now for something completely different: what is actually happening here? This is a do block; Haskell is supposed to be a functional language — why do we have imperative code? Isn't that strange? Well, the solution is that underneath, it is actually functional. I wrote the type signatures here in comments.
So getLine is an action with type IO String, and that's supposed to tell us that it's an action that yields a string. putStrLn is obviously a function that consumes a string and returns an action of type IO (): an action that can have a side effect but doesn't yield any interesting value. Technically it yields a value, but it's the unit value, which doesn't carry any information — that's the idea. So people say that this do notation — do, and then line after line of imperative code — is sugar, meaning it's icing on top of the language, and GHC, or any Haskell compiler, will turn it into the expression we find down here: an application of a binary operator to an action — this getLine action — and, as a second parameter, this lambda abstraction. It will literally do this expansion. Now, obviously this can be reduced: if we wanted to optimize it, we could eliminate this variable, because we have an application to a variable and then an abstraction over the same variable one level up syntactically, so we can eliminate the abstraction and the application. That's called the eta rule — not the alpha rule. But regardless, the thing here could be something more complicated, where we can't just eliminate it: the second line of the imperative program could be some more complicated expression, hence this syntactic expansion with the lambda. So, point being: imperative code gets converted into functional code. Now, this is allowed because — let's say it requires — that the types match. What do we have? We have this application of this operator, which, by the way, is called bind.
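As a runnable illustration of the desugaring — I use Maybe instead of IO here so the result is easy to inspect; the numbers are arbitrary:

```haskell
-- A do block...
sugared :: Maybe Int
sugared = do
  x <- Just (2 + 2)
  y <- Just 10
  pure (x * y)

-- ...is expanded by the compiler into nested applications of (>>=),
-- with a lambda capturing each bound variable.
desugared :: Maybe Int
desugared =
  Just (2 + 2) >>= \x ->
    Just 10 >>= \y ->
      pure (x * y)

main :: IO ()
main = print (sugared == desugared)  -- True
```

Both expressions evaluate to Just 40; the do block is nothing but the nested binds written nicely.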
And I suppose the angle brackets and the equals sign in >>= are supposed to suggest stuff coming out of the first expression and being fed forward into the second expression — I think that's the idea, visually. But it's a binary operator, so it's really nothing more than a binary function. We haven't actually seen functions of this arity yet, but this is how it's written. Note that arrows associate to the right: the first parameter is IO a, the second parameter is a function from a to IO b, and the result after that is IO b. Now, arrows associating to the right actually means that all functions are unary, which sounds weird and limiting, but it's actually not, because the return type can itself be a function type, which we can then apply to more arguments. This is perfectly fine, and it's actually incredibly handy. So this is the bind operator, and there is a particular implementation of it for the type IO. Note — I hadn't mentioned this before — a really important aspect of the type system is something called polymorphism, in particular parametric polymorphism. These lowercase letters in the type signature are variables, which means there is an implementation of this operator where we know nothing about the types a and b. But the type signature prescribes that whatever we return in the end has to be of type IO b, and this also has to match: if the first expression that goes into the bind operator is IO a, then the parameter of the second argument has to be of the same type a. That's very neat, and it keeps things very tidy. Okay. Now, that's not enough, though.
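The right-associativity point in runnable form — my own example; the type `Int -> Int -> Int` reads as `Int -> (Int -> Int)`:

```haskell
-- A "binary" function is really a unary function returning a function.
add :: Int -> Int -> Int
add x y = x + y

-- Partial application: supplying one argument yields the returned function.
increment :: Int -> Int
increment = add 1

main :: IO ()
main = do
  print (add 2 3)      -- 5
  print (increment 41) -- 42
```

This is why unary functions are not limiting: applying one argument at a time gets you any arity you like.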
So I've talked about imperative programs and how they are actually sequences of binds underneath — so it's functional. Now, if we look at this program, obviously there are more lines — four lines, not just two — and they will be expanded in this pattern from top to bottom, chained together with this bind operator. And now the thing we really want to do is use the Haskell type system to compute stuff for us: to compute the promises that are required in this imperative sequence. We do that by wrapping our side-effecting actions in this slightly scary-looking gadget, which is a newtype. A newtype is basically a way of attaching more type information to a value of an existing type, and the important bit is that they have the same runtime representation: this doesn't create any new pointers, for example, and it doesn't require any extra allocation. But it allows us to attach information here that the compiler will work with during type inference. Okay. So I have to say a few things about what these things are that are listed here. We're defining a newtype — that's the keyword — and the newtype is called Pledge. It has three parameters: the first parameter is z, which is of this kind; another parameter called ps, which is of this kind; and a third parameter called a, which is an ordinary type, and which is relevant for the stuff that's actually inside, that we're labeling. Okay. Sorry, let me explain the awkward stuff I was just talking about: there's something called data promotion.
GHC implements an extension to the Haskell language standard called DataKinds, and that means that every data constructor now also stands for a type. I've listed the relevant examples, the ones pertinent to our situation. I showed you earlier the definition of the Promise data type; in particular, it has this term Stdio, which is a constructor of the Promise type — and that's what the first line means: the double colon means the thing on the left has the type on the right. Now, the second line says Promise, the type, on the left, and on the right-hand side is a star. This means Promise is a type of kind star, which is the universe of all types that contain values. And this is important, because there are more types — types that don't contain values; they cannot be instantiated, if you will — and one of them is the next one: there's a type 'Stdio, with an apostrophe, which is of kind Promise. The shift is barely visible: a value — this constructor — becomes a type, and the type becomes a kind. The only syntactic way of marking the distinction is putting this apostrophe in front. It's not actually required in most cases, but GHC now warns when it's missing, because I suppose it's a bit hard to follow if it's not there. Anyway, okay, speeding up a little bit. Relevant to this definition of the Pledge type: it is parameterized over stuff of kind Promise in square brackets, which just means, of course, a list of promises. Lists also get promoted to the kind level — there's a kind whose types are type-level lists — and they're written like this.
It's a little bit awkward because now the fact that we're constructing a list also has to have this apostrophe in front, and then we have another apostrophe right after, and they can't be next to each other, so there has to be a space. It looks a little bit ugly. Anyway, okay. So now how do we use this? The base system, the language standard, actually defines this function called getDirectoryContents, for example, that just gives us a list of the file names in a directory, right? Now we want to use that with Haskell pledge, so we have to label it, we have to use our type system to label it with the promises that are needed. Reading the directory contents requires the rpath promise, so we just write this type signature, right? We pass from a type signature where we take a file path and spit out IO of a list of file paths, to one where we take a file path and spit out Pledge of z, indexed over the rpath promise, yielding a list of file paths. That's the type declaration; underneath, it's very easy to define: we just apply the Pledge constructor. The type and the constructor look the same; it's very common to use the same name if there's only one constructor. The constructor behaves like a function, of course, so in particular it can be composed, so the definition is very easy. And then we have to define a new bind operator, right, that can process all this information. The type signature is shown here. It's a little bit complicated, but essentially what it means is that we put in an action that requires the promises qs, and we feed that into an action with a parameter that requires ps, and the result is that the composite action requires ps and qs together, okay? Note that the underlying value parameters have to agree, right?
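Here's a self-contained sketch of that labeling step and the type-level bookkeeping behind the new bind operator. All names are assumed for illustration; I use a plain type family for list append where the real library may do something smarter (deduplication, the z parameter), and getDirectoryContents comes from the directory package:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}

import System.Directory (getDirectoryContents)  -- directory package

data Promise = StdIO | RPath | WPath

newtype Pledge (ps :: [Promise]) a = Pledge { runPledge :: IO a }

-- Type-level list append: the recipe for combining promise sets.
type family Append (xs :: [Promise]) (ys :: [Promise]) :: [Promise] where
  Append '[]       ys = ys
  Append (x ': xs) ys = x ': Append xs ys

-- Label the standard action with the promise it needs.
getDirectoryContents' :: FilePath -> Pledge '[ 'RPath ] [FilePath]
getDirectoryContents' = Pledge . getDirectoryContents

-- The indexed bind: sequencing an action needing qs with a
-- continuation needing ps yields a composite needing both.
bindP :: Pledge qs a -> (a -> Pledge ps b) -> Pledge (Append qs ps) b
bindP (Pledge m) k = Pledge (m >>= runPledge . k)

-- Counting entries only ever reads the directory, and the type says so.
countEntries :: FilePath -> Pledge '[ 'RPath ] Int
countEntries d =
  getDirectoryContents' d `bindP` \fs ->
    (Pledge (pure (length fs)) :: Pledge '[] Int)

main :: IO ()
main = runPledge (countEntries ".") >>= print
```

Changing countEntries' signature to, say, `Pledge '[ 'WPath ] Int` is rejected by the compiler, because `Append '[ 'RPath ] '[]` reduces to `'[ 'RPath ]`.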
So the a here, of the first value, has to agree with the parameter of the second argument of the bind operator, and the result type here is the result type of the composite. Okay. And I won't have time to explain in detail why this other parameter here is required. I can show a tiny bit of code. So this is a slightly more elaborate example. You can see we have an action that requires the stdio promise; it's defined this way. We define a main function, and, okay, let's talk about the do block first. We have a do block here, and note there's an M in front of the do: that's a way of telling GHC to use this new bind operator for sequencing these actions (this is the QualifiedDo extension). So we have this sequence of actions; it looks slightly different graphically, but anyway: we get a line, bound to a name; then we read the contents of the root directory; we process the file names that are returned in some way; and then we hand it off to another do block. And the logic of the sequencing makes it so that we must put these promises here at the top, right? And that's really the important bit: once we've done the legwork of labeling the small parts, the constituent actions, with the promises that they need, we are forced to label all the composite actions accordingly, or else the compiler will refuse to type check this. It's that simple. We can also use wildcards to let the compiler tell us what we should put in this spot, right? So if we remove this and try to compile, it would give us a type error which, well, might be a bit complicated, but will essentially tell us what has to go there. And that's based on synthesizing this list according to the recipe defined by this new bind operator. Okay.
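To make the "forced to label the composite" point concrete, here's a self-contained sketch with hypothetical names throughout (the scaffolding is repeated inline so it compiles on its own). With QualifiedDo, GHC 9.0 or later, the chain in app could be written as an M.do block; below I spell out the binds such a block desugars to:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}

data Promise = StdIO | RPath

newtype Pledge (ps :: [Promise]) a = Pledge { runPledge :: IO a }

type family Append (xs :: [Promise]) (ys :: [Promise]) :: [Promise] where
  Append '[]       ys = ys
  Append (x ': xs) ys = x ': Append xs ys

bindP :: Pledge qs a -> (a -> Pledge ps b) -> Pledge (Append qs ps) b
bindP (Pledge m) k = Pledge (m >>= runPledge . k)

readFile' :: FilePath -> Pledge '[ 'RPath ] String
readFile' = Pledge . readFile

putStrLn' :: String -> Pledge '[ 'StdIO ] ()
putStrLn' = Pledge . putStrLn

-- The composite signature is forced to list both promises: delete
-- either 'RPath or 'StdIO and the program no longer type checks.
app :: FilePath -> Pledge '[ 'RPath, 'StdIO ] Int
app path =
  readFile' path      `bindP` \contents ->
  putStrLn' contents  `bindP` \() ->
  (Pledge (pure (length contents)) :: Pledge '[] Int)

main :: IO ()
main = do
  writeFile "demo.txt" "hello"   -- plain IO setup, just for the demo
  n <- runPledge (app "demo.txt")
  print n
```

Replacing the signature's promise list with a wildcard like `'[ 'RPath, _ ]` (PartialTypeSignatures) makes GHC report the missing piece, which is the inference trick mentioned above.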
That's that. Yeah, there are some caveats. I mentioned in the beginning that the GHC runtime has really good, quite sophisticated multi-threading support. This implementation will not work with it. The reason is that pledge applies to a whole process, right? And if you have different threads in a process, one thread could pledge a very small set of promises and another thread could try to do some action that requires a promise that was already relinquished, and then of course you have a problem and your program will be terminated. So much more work is required to make this thread-safe. We're also not dealing with execpromises, the second parameter of the pledge C function, the one I said I wouldn't say anything about. Some more work is required to make sure that we're not calling pledge unnecessarily: when we know, and in principle we know at compile time, that we're not actually relinquishing any extra promises, then we should be able to eliminate the pledge call. Also, this is obviously completely non-portable, because we're using an OpenBSD-specific kernel API. But there is actually a very cool project: someone wrote an adapter layer that can translate pledge promises to seccomp programs, if that means anything to anyone, and that would work on Linux. I mean, there are probably lots of subtleties, but there's great potential to actually make this work on Linux as well. Okay, so here's a, well, okay, you can't see the link. There's the haskell.org webpage if you want to know anything about Haskell. And there's one paper that I read that is really foundational for this sort of calculus: it's called "A Core Calculus of Dependency". It's actually quite old, from the late nineties.
Anyway, so they provide a form of lambda calculus that can track information at the type level, and it has many of the relevant ideas. Okay, that's it. Sorry, I might have gone slightly over time. Anyway, the QR code is my webpage, and there's a link to the project repo if you're interested in finding it. Thank you. Questions? Yes. I have run into this: the GHC runtime system basically must have stdio, so I have to fudge it a little bit, so basically you always have stdio. That's the only one I've run into so far. The runtime system will not exec without you explicitly asking for it, or fork. It will spawn threads. No, not even that. Okay, so you can lock it down; obviously you can ask the runtime system to run single-threaded, right, so then you're safe from that. Yeah, so there is potential for the runtime system to create problems here, and there could be other snags, of course. Yeah. Okay, thanks.