Okay, I want to start just by saying a thank you. Jim Fries — I don't know how many conferences you've been to that he organized, but he has organized a lot. He is tireless. And taking the risk of organizing one more, a brand new one, with a new group of people and a whole series of challenges — I'm just really impressed. And as you'll find out, he runs a great conference, with a level of attention to detail that is really, really nice. So before we start, I just wanted to say thank you to Jim for organizing this.

So this is Steve Jobs. He actually fought to have it say "Think Different". I can't stand that — it's got to be "Think Differently". Please.

Okay, so I think that you have to keep renewing your thinking. There's a saying in business that if you're not growing, you're dying, and I think it's the same mentally. If you're not growing mentally, if you're not thinking new things, if you're not challenging yourself, then basically you're stagnating — you're dying in your head. So I think it's kind of a duty to keep challenging ourselves, to keep trying to find new ways to do things.

In my own career, I've been through a number of transitions. My first programming language was BASIC, and I really, really enjoyed that. I went to some early OO languages; I did all sorts of stuff. I kept changing paradigms as I went along, but at the same time I have been doing OO development of various kinds. Okay, how many people here are younger than 40 years old? I hate every single person with their hand up, because I've been doing OO development since before you were born. And I got into a bit of a rut, because it was easy to do. After a while you learn it, and it becomes instinctual — that's just how you code. And one of the things that I have really loved about these last 18 months is that Elixir has changed the way I think about programming.
Not just in terms of, oh, it's a functional language. It's actually changed my conception of what it means to program. Now, the problem is, I would love to explain that to you — I'm going to try to explain that to you — but it's not easy, because it's about how I think about stuff, and Lord knows how I think about stuff is weird. So I'm going to do my best to explain it to you. However, what I'm not trying to do is sell you on my way of thinking. That's not what I'm doing at all. Instead, what I want to do is just show you an example of a way in which the language and the environment have a profound effect on how you think about what you do. All right? So wish me luck. At that point, we're all supposed to say, good luck, Dave. Good luck, Dave.

All right, so a little bit of background. How many people here have done coding in Elixir? All right, less than a majority, good. So I'm not going to sit here and give an introduction, basic lessons in Elixir. I just want to cover a few things to cement the ideas. The first idea is that we have pattern matching, and pattern matching is pervasive in Elixir. (Oh, god, that blue doesn't show up well at all.) All right, when you say a = 1, you're not assigning 1 to a. You're challenging Elixir to find a way to make that statement true. And it can make it true by associating the value 1 with a. If I have a tuple, {c, d}, and I try to match that against {2, 3}, then obviously it can do that by associating 2 with c and 3 with d. And so it goes. There are a few less common forms of pattern matching. So for example, we can have the binary match there: "elixir " joined to rest matches "elixir rocks". Well, the only way to make that true is if you set rest to be the string "rocks". So there are some quite subtle things you can do with pattern matching.
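A minimal sketch of the matches just described, runnable in IEx (the variable names follow the talk):

```elixir
# The match operator: Elixir looks for bindings that make each equation true.
a = 1                    # binds a to 1

{c, d} = {2, 3}          # binds c to 2 and d to 3

# Binary matching: the only way to make this true is to
# bind rest to "rocks".
"elixir " <> rest = "elixir rocks"
```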
And of course, the pattern matching for lists allows you to match either a literal list with a fixed number of elements, like I do in the first example there, or a variable-length list, by matching the head and then the tail. The head is a single element, and the tail is then the rest of the list. So pattern matching applies there. It also applies here. So for example, I can go and open a file, and that's going to return a tuple that says either :ok, and here's the file, or :error, and here's what went wrong. So I can do pattern matching on that using a case statement. Nothing really exciting there — that case statement does all the same pattern matching that I could do previously. And we have pattern matching in functions too. We're all familiar with pattern matching in named functions, but you can also do it in anonymous functions. An anonymous function can have multiple heads inside the function body, and each head has a pattern match. So here, if the first parameter is the atom :plus, then that first head is going to match. Otherwise, the second one is going to match. And then there's the common form with named functions.

All right, so let's just review. Pattern matching lets us match based on shape and content. It allows us to destructure data — that is, look inside composite or structured data and extract and match individual elements. And loosely, you can say it's recursive, in that you can match things inside a match that's going on. So this is a very, very powerful facility. And I'm about to draw a parallel between it and something else that we're all familiar with. So let's just beat the horse to death first. I think almost every time I've stood up in a room like this, I've had Fibonacci on a slide somewhere. Fibonacci is a hoary chestnut of pattern matching, among other things.
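Before Fibonacci, quick sketches of those last two forms — a case on File.open's result tuple, and a multi-head anonymous function. The function names and return values here are my own, for illustration:

```elixir
# A case statement pattern matches File.open's {:ok, file} or
# {:error, reason} result tuple.
open_result = fn path ->
  case File.open(path) do
    {:ok, file}      -> {:opened, file}
    {:error, reason} -> {:failed, reason}
  end
end

# An anonymous function can have multiple heads, each with its own
# pattern; the first head matches when the first element is :plus.
calc = fn
  {:plus, a, b}  -> a + b
  {:minus, a, b} -> a - b
end
```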
The cool thing about Fibonacci is, okay, here's the specification, the mathematical specification of Fibonacci, right? Fib of 0 is 0, fib of 1 is 1. Otherwise, it's fib of n minus 1 plus fib of n minus 2. Okay, so that's not a big deal. And that very conveniently maps into Elixir code. So here we have our module Fib that has a function called fib. There are three patterns, or three heads, to that function. If you pass it 0, it returns 0. Pass it 1, it returns 1. Otherwise, it returns the sum of the two recursive calls. And if you call, I don't know, fib 10, I guess that's 55. Okay, nothing dramatic there.

Now, there are many ways of expressing that in Elixir code. You could express it like this: one function head rather than multiple ones, and then inside that you use a case statement. Again, we're pattern matching, but now we have a control structure inside our function. In reality, in terms of executing code, my understanding is that those two are identical — inside the virtual machine those two are actually the same code, i.e. one is mapped onto the other. But it's very different at the source level. You could also choose to write it like that, which might be how a Ruby programmer would write it.

So if you look at those three, which of them best captures the idea of Fibonacci? The first one, thank you. I think the first one. The rest of them are perfectly valid. The rest of them work perfectly well. But to me, when I write that first one, it mirrors the specification. It mirrors what I've been told to do. And I think that's important from two sides. It's important from the developer's side, in that I have a clear and transparent path from what I want to do to expressing what I want to do. But it's also clear — and probably more importantly clear — from some future reader's point of view. Because when they read it, it's going to look like something they can understand, something they can tie back to, in this case, the Fibonacci specification.
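Here are the three versions side by side, assuming the usual doubly recursive definition (the module and the fib2/fib3 names are mine):

```elixir
defmodule Fib do
  # 1. Multiple function heads: mirrors the mathematical specification.
  def fib(0), do: 0
  def fib(1), do: 1
  def fib(n), do: fib(n - 1) + fib(n - 2)

  # 2. One head, with the pattern match moved into a case statement.
  def fib2(n) do
    case n do
      0 -> 0
      1 -> 1
      x -> fib2(x - 1) + fib2(x - 2)
    end
  end

  # 3. Conditional logic, roughly how a Ruby programmer might write it.
  def fib3(n) do
    if n < 2, do: n, else: fib3(n - 1) + fib3(n - 2)
  end
end
```

All three return the same values; only the first reads like the spec.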
So I think that this way of expressing things is expressive. It's clear, it's concise, and it says what we want to do. So I have been trying to work really hard over the last, I don't know, six months, to write programs as if they were specifications. And that has made a dramatic difference in the way I code. Now, let me just give you a few more examples, and then I'll dig in a bit deeper.

Specification: length of a list. Okay, the number of entries in a list. Well, an empty list has zero entries. And any other list has one, for the front of the list, plus the length of the rest of the list. And that's all it takes to specify the length of a list. And of course the implementation of that, the recursive implementation, just follows that spec, right? Literally line for line. The length of an empty list is zero, and the length of any other list is one plus the length of the tail. Absolutely nothing surprising there.

Then our favorite map function, one that applies a function to every element in the list. The map of an empty list is the empty list. The map of a list with a head and a tail is: you apply the function to the head, and then you prepend that onto the map of the tail. Again, nothing particularly scary there. And the implementation is trivial — anyone could write that.

Okay, so these examples get trotted out all the time, and you'll see people — I'm probably guilty myself — doing blog posts and tutorials with these kinds of examples in them. It's like, okay, so what? So let's try doing something just a little bit more complicated. Let's say we have a list of values — in this case they're going to be numbers. And we want to run-length encode it. We're going to look for sequences of the same value and replace those sequences with some kind of abbreviation. So here, if I have three twos in a row, I'm going to replace that with a tuple containing two, which is the value, and then three, which is the repeat count.
All right, this is not rocket science, but it is a bit tricky. Think about sitting down and coding that in your regular language — Ruby or Java or C#, whatever you might be using — and think of the things you would have to think about. You'd have to think about all these special cases. In particular, there's the end-of-list problem: you have to deal with that special case you hit when you reach the end of the list. And as an experiment, I sat down and tried coding this up in Ruby, and sure enough, I actually had a bug in my first implementation and had to go back and fix it.

All right. So this is where I want to get into the start of that revelation I had. My input is a list, and there are two special cases in that list. The first special case is if the head and the second element of that list are the same. In terms of a pattern match, I represent that by just saying a and a. Using the same variable twice means it has to be the same value, so that will only match a list that starts with two identical elements. And what I want to do is replace those two with a tuple that has the value of a, whatever that might be, and the count two. The other special case is if I've got a tuple at the start of my list, and the next element has the same value as the value in that tuple; then I replace those two elements with a tuple with the same value, but n plus one.

Yeah? So you can see that operating here on our list. One: nothing special, so it gets copied. But now look, we've got two twos at the start of the list, so we replace them with a tuple. But now we have a tuple whose value is two, and the next number is also two, so we replace that with {2, 3}. Now nothing special, there are no matches, so that just gets copied down. Three gets copied down. Oh, we've got two of those guys now, so we can form a pair out of those two, et cetera, et cetera.
Okay, so that's how it's going to work. So how do we code it? Well, whenever we need to generate a new output from an input, we're going to need some kind of value to put that into, all right? We don't have state that we can just stick into an instance variable, so the common pattern is to have a helper function. To encode our list, we call _encode, our helper function, and we pass it the output list. The idea is that we're going to transform the input list into that output list. When we finish — when the input list is empty — we just return the output list, and, as is often the case when you're recursing on the head, you have to reverse the output list.

This is where it gets interesting. Here's our pattern match that says: if the input list has two identical elements at its head, and then any kind of arbitrary tail, then we call ourselves again, replacing those two elements with the tuple {a, 2}. All right? We're not generating anything in the output yet; we're just moving stuff around in the input list. If we have an input list that starts with a tuple, which is a repetition of the value a, and the element following it is also that a, then we replace those two in the input list with {a, n + 1}. And then the last case is: anything else, we just copy to the output list. That's all we have to do. It's effectively an implementation straight from the specification. And it just worked, first time.

There's an interesting little side effect here. How do you know that this particular program will always terminate? It's actually pretty easy. Because if you look, here are our three significant patterns: a pair of elements, a tuple and an element, or just some random element. In all three cases of recursion, we remove that pattern from the list and replace it with something shorter.
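Putting those clauses together, a sketch of the whole encoder as described (the module name is mine; the output tuples are {value, count}):

```elixir
defmodule RunLength do
  def encode(list), do: _encode(list, [])

  # Input exhausted: the (reversed) output is the answer.
  defp _encode([], output), do: Enum.reverse(output)

  # Two identical elements at the head: fold them into a tuple
  # and push it back onto the *input* list.
  defp _encode([a, a | tail], output), do: _encode([{a, 2} | tail], output)

  # A tuple followed by another copy of its value: bump the count.
  defp _encode([{a, n}, a | tail], output), do: _encode([{a, n + 1} | tail], output)

  # Nothing special: move the head across to the output.
  defp _encode([head | tail], output), do: _encode(tail, [head | output])
end
```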
So in all three cases, our input list is going to shrink in size, so we can guarantee we're going to terminate. There are all sorts of cool side effects when you start looking at doing it this way.

Let me show you another example. This is actually where I got all of this inspiration from, if you like. When I start learning a new language, I have little exercises I do, little things that I program. Obviously the first one is always a hello world. And then I'll get into different stuff — I'll code up binary chop and that kind of thing. But when I really, really, really want to get into a language, my current — I was going to call it my current favorite way of doing it, but it's anything but a favorite — is to implement a markdown parser. And the reason I do that is that markdown is the ugliest, worst-specified, most-full-of-special-cases thing you can think of to try to parse. All right? I mean, to give you a clue, a not-quite-full markdown implementation is basically the regular expression from hell. So it's really tough to find tidy ways to break markdown down. And as a result, it's a great way of challenging yourself when you're learning a language, to try to find nice ways of expressing things.

And this is what blew me away when I was coding it. Now, this is probably my fourth Elixir markdown implementation. It's the first one I actually finished, because it's the first one where I actually felt, yeah, that's okay. And it's actually released on Hex as earmark. But anyway, let's imagine you're parsing markdown. One of the many, many markdown rules — I use that word loosely — is that one form of heading is a blank line, some text, a line of underscores, and then another blank line, okay? That's a particular style of markdown heading. Let's assume that the first thing you've done is taken your input and divided it into lines and done some normalization, like stripping blanks maybe, or maybe not, depending on stuff like that.
So we have an input list, which is a list of lines. Well, here's a rule that recognizes a markdown header. (It's actually not quite like this, because I have some extra stuff in there, but fundamentally that's what I've got in my markdown parser.) To parse a list whose head consists of a blank line, a title, a line that starts with underscores, and another blank line, where the title has a nonzero length — you generate into the output a heading that has that title. So what have I just done here? I've used pattern matching to parse some syntax, extract the information I want, and then generate an output. Pretty cool. That's pretty cool.

Another example — this one's uglier, but just to show you: stripping HTML comments. This is a little two-state state machine written purely with pattern matching. We're switching back and forth between being outside a comment and inside a comment, and recognizing the comment start and stop, just using pattern matching. Again, really, really cool.

All right, enough of those. The point of this first part is that you can think of pattern matching, in every single context you use it, as a form of parsing. Sometimes it's trivial — a is matched with 1, well, okay, that's a trivial parse, it's just saying I have a single element. But sometimes it's way more complicated. Sometimes it's markdown syntax, or it could be the error codes returned by File.open. It could be anything, but pattern matching can be considered a kind of parse. And our code that uses it is consuming stuff during that parse — during that pattern matching, it's consuming input, and it can choose to generate outputs from it, or it can choose to modify its future behavior by updating its inputs instead. That's what we did when we were doing our run-length encoding: we were pushing a tuple back onto our input list when we found two identical elements at the beginning.
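To make that header rule concrete, here is a simplified, hypothetical sketch — the real earmark code is messier, but the shape is the same. A heading here is a blank line, a non-empty title, a line starting with underscores, and another blank line; everything else is copied through as plain text:

```elixir
defmodule Parser do
  # Recognize the heading form: blank line, title, underscore line, blank line.
  def parse(["", title, "_" <> _, "" | rest], output) when byte_size(title) > 0 do
    parse(rest, [{:heading, title} | output])
  end

  # Anything else is just copied through as plain text.
  def parse([line | rest], output), do: parse(rest, [{:text, line} | output])

  # End of input: the reversed output is the parse result.
  def parse([], output), do: Enum.reverse(output)
end
```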
So we can consume input, we can modify input, and we can generate output. And because we can look into lists or other parameters or whatever, we have a degree of lookahead. I'm not a parser expert, but I'm thinking this is probably something like LL(n) parsing. Now why is that a big deal? I'll tell you later. We're going to come back to that.

The second thing I want to talk about is functions. Now, we all know that a function is something that transforms data. The plus function takes two things and adds them together, the sine function takes something and returns its sine, et cetera, et cetera. Okay, nothing dramatic there. But obviously there are bigger functions, more complex functions, as well. So for example, if we were writing some kind of web server kind of thing and we get session data coming in, we're going to want to go through some lookup function to find out who our user is, or whatever else might be the case. Or if we have a shopping cart and some payment information, then we might want to take both of those things through a function called checkout to generate an order. That's not normally how we think about functions. Or at least, it wasn't normally how I thought about functions. I tended not to think about checkout as being a transformation from one thing to another thing. I just thought about it as being something I did.

So my second revelation is this idea that I want to think about everything as a transformation. Everything as taking my inputs, moving them into my outputs, and along the way getting closer to where I want to be. And that's really useful, because you can do really cool things with functions. As you know, functions compose. So if I was doing, say, anagrams, then I'd want to be able to find the signatures of each of my words. And the classic way of doing that is you take the letters in the word and you sort them. So if you had cat, C-A-T, then the signature would be A-C-T.
And then any two words that have the same signature are anagrams. So a typical signature function looks something like that. You get the word, just a string. You break it up into characters, then you sort those characters, and then you put them back together. So here we use codepoints, because it's UTF-8 and that's obviously way more complicated than it has to be, and then we sort them and we join them. And, being programmers, we'd say, wait a minute, we've got all these intermediate variables we don't need, and we might be tempted to rewrite it like this. And the problem with that is that we've lost our functions. We've lost the ability to see what's going on in terms of functions, because we have to read this from the inside out.

Think about trying to explain this to someone — let's say your grandmother. And you say, okay, so what you actually have to do here is go along and find the last function call, and you recognize the last function called because it's going to be the thing just in front of the last opening round bracket. Got that? Now, what follows that is going to be the parameter for that function. Okay, that's easy. So you're going to get that word and pass it to the codepoints function, and that's going to produce a value. Now what are we going to do? Remember that last open parenthesis? Now we're going to skip backwards to the previous one. And we're going to take whatever the value is from codepoints, and we're going to pass it along. And it's sort of, okay, you explain it like this and like this, and either we give up or she walks out, right? Because it's crap. It's really complicated. It's unnecessarily complicated. It obscures what we're trying to do. And so Elixir, along with a whole bunch of functional languages, allows us to write it like this instead.
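Both forms of the signature function side by side (the module name is mine):

```elixir
defmodule Anagram do
  # Nested form: has to be read from the inside out.
  def signature_nested(word) do
    Enum.join(Enum.sort(String.codepoints(word)))
  end

  # Pipeline form: the word flows through each transformation in turn.
  def signature(word) do
    word
    |> String.codepoints()
    |> Enum.sort()
    |> Enum.join()
  end
end
```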
Okay, so we can pipe a value through a function, and then we can pipe the result of that through another function, et cetera, et cetera, as much as we want. So here, our word is being transformed by codepoints, then it's being transformed by sort, then it's being transformed by join. Obviously, the pipeline is nothing more than syntactic sugar, right? All it is is a transformation from that form into the previous form. Nothing more. And yet it makes a profound difference to the way we write code.

Come back to the idea of a shopping cart, and checkout being a transformation of a cart and payment into an order. You can imagine we're expressing that here as a pipeline. And we can think about our code purely in terms of transforming data, purely in terms of functions. If you start doing that, it also changes the way you think about programming. It allows us to express all of our work in terms of transformations. We're no longer telling the machine what to do. We're describing what we want done. And that's actually subtle, but it's a big difference. We're separating our data and our functions, and we're looking at our code in terms of the flow of data and the transformation of data by lots of different functions. And that allows us to code way, way closer to the actual problem.

So, two things. We have the idea that pattern matching is a form of parsing. And we have the idea that functional programming is a series of transformations. So where does that get us? Well — I don't know is the honest answer. But I want to try and show you where it's got me. And this is where I'm going to get into the, you know, Dave's-crazy part. So forgive me. Let's start with a simple function that looks like this. Okay, so we're getting some kind of response — probably an HTTP response, given the numbers. It's got a code and a message. If the code is 200, we do nothing. If it's 404, we say not found. If it's 500, we do something else.
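A sketch of that function, first with a case, then rewritten as three heads. The status codes come from the talk; the return values are my invention:

```elixir
defmodule Responder do
  # Imperative-ish version: one head, with the logic inside.
  def format_response(response) do
    case response do
      {200, _msg} -> :ok
      {404, _msg} -> {:error, :not_found}
      {500, msg}  -> {:error, {:server_error, msg}}
    end
  end

  # Parsing version: one head per response shape.
  def format_response2({200, _msg}), do: :ok
  def format_response2({404, _msg}), do: {:error, :not_found}
  def format_response2({500, msg}),  do: {:error, {:server_error, msg}}
end
```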
Okay, perfectly valid function. No one could criticize that. It does one thing, relatively straightforwardly, no problem at all. But let's think of it instead in terms of parsing and transformation. We could rewrite it like this. Now what we have is one function — the equivalent of that top function — which will respond appropriately given whatever that code is. Are you going to say, oh wow, that is clearly a better way of writing it than the first form? No, you're not. Because it probably isn't, at this point. But this is all part of a build-up, okay, so trust me. What we've done is remove logic from the body of our function and instead turn it into a very, very trivial parse. We're parsing an HTTP response by matching different things in our function heads. We've replaced imperative code with parsing. That's the whole point of this slide.

Now, I get really bored writing all these def lines. So, just to save myself time, I actually wrote a little macro called mdef. My god, I love the fact that I've got a language where I can do that. It's actually up on Hex if you want to use it — it does multiple defs. It basically does the same thing: if you say mdef format_response, it just generates those three function heads. Okay, now it's getting a bit clearer. I kind of like that.

So here's a slightly more complicated case. Again, perfectly reasonable code. We want to read a line from a file. So we're going to open the file. Now, File.open returns a tuple that says :ok and some kind of IO device, in which case we can read from it. Otherwise, we're going to have some kind of error. Yeah, so that's again a perfectly valid, nice little self-contained function. Now, you could argue it does two things, so maybe it should be split up. But I want to think about my world in terms of parsing and transformation.
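A sketch of where this is heading — the IO.read call is my guess at "read a line", and a real version would also close the device:

```elixir
defmodule Reader do
  # The transformation flows left to right;
  # the parse happens in _read_line's heads.
  def read_line(name) do
    name
    |> File.open()
    |> _read_line()
  end

  # :ok — read one line from the device (we leak the device here;
  # a real version would close it).
  defp _read_line({:ok, device}),    do: IO.read(device, :line)

  # anything else — pass the error straight through.
  defp _read_line({:error, reason}), do: {:error, reason}
end
```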
So here, my first step is to say: the return value of File.open is something that I can parse. And so I'm going to split that out. To read a line, I call my little helper function, _read_line, and I pass it the result of opening the file. And then in my helper function I have a parse that says: if it's :ok, I do one thing; if it's not :ok, I do another thing. Okay? So here, again, I've replaced imperative logic with a declarative parse. But I can take that a step further and make it obvious what I'm doing, by using a pipeline to show the transformations that are taking place. The transformation is: I'm taking a name, and transforming it into the first line of the file with that name. By writing it like this, I make that explicit. So I have my transformation going that way, and I have my parse going down. This is getting interesting — at least to me.

Excuse me for doing this. This is actually code from IEx. It's the code that loads the .iex file, the one that does all the initialization if you want to customize your colors and everything else. What it does is: you can pass it, explicitly, the path to your IEx file, in which case that's the one it loads. Otherwise, it's going to look for a .iex in either your current directory or your home directory. So, first of all, it says: if you give me a path, then the only candidate is that path. Otherwise, I'm going to take all of the known names and expand them into full path names. Then I'm going to find the first one of those candidates that actually exists as a file. If I can't find one, then I return the original config. Otherwise, I merge that .iex file into the config that gets passed in. Again, an okay function. But there's a fair amount going on in there. I've had to write — or somebody has had to write — some if statements and some conditionals and blah, blah, blah. Can we do that differently?
Well, we could do it using transformations and parses. So, here's my version of loading the .iex file. The fundamental thing we want to do is take a path, use that path to find the possible IEx files, find the first one that exists, and then update our config from there. That's pretty straightforward. Expressed as a transformation like that, it's really obvious what we're doing. I think it's easier to read than the one with the if statements. Now, to get our IEx files, we're parsing our input — and it's a trivial parse. Our input is either a name or nil. If our input is nil, we return the list of possibles, expanded into full paths. Otherwise, we expand the path we were given. Then, to find the first, we just use the existing code. And finally, we parse the response of finding that first one. If we found it, we merge it into the config. If we didn't, we don't. Again, we're replacing imperative logic with pattern matching and transformation. We're beginning to see a pattern here. I think this is worth exploring.

My current markdown parser is, I think, about 1,200 lines of code. It has a total of, I think, 11 imperative statements in it. And some of those I do still have to deal with — I mean, I could probably get rid of them if I really, really wanted to, but I think it actually works better to leave them as imperative statements. But even so: 11 imperative statements in a total of about 1,200 lines of code. And once I'd worked this style out, once I'd worked out that this is how I wanted to do it, you wouldn't believe how much simpler it got, how much easier it was to do. I released the first version and then I wanted to add table support. And the modularization that comes from the parse was so beautiful. I just said, okay, to recognize a table, I have to parse a line that looks like one.
A table has to have two successive lines that contain the same number of vertical bars. So I can do that as a parse. The whole thing just fell out. It was like half an hour's work — I was expecting it to take all day, and it was about half an hour. It's a phenomenally fun way of coding.

And it works at all sorts of scales. For example, maybe we're writing some kind of back-end REST service, and we're doing some kind of site that does deals or coupons or whatever else. So I'm a particular user — or I'm acting on behalf of a particular user — and they come in through some kind of REST request with an auth token. So first I have to find the user; then, for that user, I have to find their local offers, the ones that are around them, and then the kind of national offers — chain stores that are giving discounts or whatever else — and then I have to take those responses, merge them, and send them back down as the response to my REST interface. This is relatively straightforward stuff when you're writing a server. But I want to think about it now not imperatively; I want to think about it in terms of transformations and parsing. Oh, and just to make it interesting, I'd like to make those two lookups asynchronous. So I'm going to send off the two requests — to find the local offers and the national offers — and then wait for both to come back before I send a response down.

So it's going to look something like that. All right, this next slide — I've got to say, of all my slides, this one's ugly, but I still like what it's getting at — expresses all of that using pattern matches and a few simple transformations. I'm just showing the pattern-match part there; I'm not going to show the actual functions that do the work, but let's assume it looks something like this. My request comes in, and a little front-end thing converts it into a map that says: get offers, and here's the authorization token.
And this is probably going to get handled by some kind of GenServer component somewhere, just sitting there handling these things. So the first thing it's going to do is match on that particular map. And the first thing it has to do is find the user. So it's going to respond by saying: okay, now what I'm doing is getting offers, and I have a user. It's going to go look up the user and stick it into that map. So now we have a map with two elements in it: one is the request type, one is the user. So now we have a different pattern match. I'm going to come in now with a second pattern match. We're just looping around, calling ourselves, trying to parse this request. And we've made our input more complicated. We've added things to our input, just like we did with the run-length encoding thing — we've added the user to our input.

So now we match that. Now we're going to call two functions, asynchronously: get the local offers, and get the national offers. And they go off and do their business. We're not going to do anything else — we're not recursively calling ourselves at this point. They go off and do their business, and at some point they send their responses back into our server. And our server uses those to update its state, to update its input. Now, if the national lookup comes back and says, hey, there was an error, then okay, we throw the whole thing away and just respond with an error. Similarly, if the local lookup comes back and says there's an error, then we throw it away. Otherwise, we just need to pattern match the case where we actually have national data and we have local data. And at that point, we can send the response back.

If I could work out the time and whatever else, I would love to see some kind of server implementation based on those kinds of principles — a server implementation that is declarative, where the rules are obvious.
And where what we code is what we want it to do, and not all of the little details of how it happens. Now, this is not new. One of my real pet peeves is that as an industry we have no memory; we keep reinventing things that we've done before. This goes back to the 1960s. COBOL programmers used to use decision tables, which were a variant on this. State machines, again from the '60s, are a variant on this. And Linda-style tuple spaces, blackboard systems and all the others like them are also a variant, in that you put data into the system and then pattern match it in order to generate an output. So this is not new; it's nothing totally radical. But for me, it's a different way of thinking when I'm writing code, and I am finding it incredibly powerful. Once you think about things this way, you get all these extra benefits, too. It's easy to make parallel. Effectively, you're writing a DSL. If you think about that previous slide with the maps, that could be considered a DSL, and you could wrap it in a really nice little DSL for specifying the rules if you wanted to. You get phenomenally granular reuse, because typically, when you're writing code this way, every function is one line long. If you want the ultimate granular reuse, that's it. If you're into testing, clearly testing gets easier, because everything is one line long, so you can test just that one thing. Really nice and easy to do. And you also get control of error handling. I think, Bruce, you're going to talk about your type language, right? Oh, okay. But you can actually start codifying error handling. When you write things as clauses, there are easier ways to handle errors. You can also start thinking about error recovery; the parser literature is full of ways of handling error recovery gracefully. We can start using that in our code.
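The point above about codifying error handling as clauses can be shown with a tiny combinator. This is a sketch of the general idea, not code from the talk; the module and function names are invented.

```elixir
defmodule Safely do
  # Sketch: error handling codified as pattern-matched clauses.
  # An {:error, _} result short-circuits; an {:ok, value} flows on.
  def and_then({:error, _reason} = err, _fun), do: err
  def and_then({:ok, value}, fun), do: fun.(value)
end
```

The error case never touches the happy-path code: one clause per outcome, and the pattern match is the whole control flow.

```elixir
{:ok, 4}
|> Safely.and_then(fn n -> {:ok, n * 2} end)
# => {:ok, 8}
```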
Yeah, it's kind of interesting. So I'm not saying throw away your investments. I'm saying I have, and that it's been really interesting for me personally to have done that. I'm having a blast exploring, or re-exploring, what it means to program. I'm not advocating this as a universal solution for everybody who writes code. But the kind of change this has brought me has been so beneficial and so refreshing that I would like everybody else to experience change too. Do it for yourself. Find ways to think about things differently. I think one of the biggest benefits of Elixir is that it is a breath of fresh air for developers. It's a chance at a fresh start. We can think about things in a different way. And, most importantly, we are, if you like, the kernel around which this community is going to start forming. The 105 people in this room are the starters of it, and we get to help set its direction. When something is so young and so small, every little input is significant, so we get that chance. And that's fundamentally exciting. So here is my plea. Some of you may come from the Ruby community. Please, please, do not give us another Rails. Rails works. Rails is cool. But don't think that because Elixir doesn't have a Rails, we therefore need to build ElixirRails or something. No, you don't. That's boring. That's been done. And it doesn't solve a problem. Maybe you come from the Erlang community, and you see this as a better or easier syntax for creating the same applications you'd be creating with Erlang. Again, I would say: please don't do that. This is the opportunity we all have to throw away a whole bunch of stuff. We've got a free restart, and that is so cool. So instead of reinventing the past, let's use the power of what we have here to invent a new future. Let's think of cool things we can do with this opportunity. And let's make a difference.
And while we're doing that, let's just remember to have fun. So thank you very much. So we have a bit of time for questions. Bruce? Let me throw you a bit of a curveball. In Seven More Languages, I'm looking at some language evolution which I think is very much going in this direction. One of the languages is Idris, which does dependent typing. And it seems like the whole dependent-typing style maps onto this approach very well, where basically, within your parse transforms, you capture the intelligence about those transforms in your type system. Right. Does that make sense? It does. And I think that, for a certain class of problems, that works well. However, I think once you start getting into dependent type systems, you're only a small step away from something much deeper, and that gets beyond most people's depth pretty fast. There is a gorgeous, gorgeous OCaml markdown parser; well, it's not a full markdown parser, but it's close, and it uses the type system to do exactly that kind of parse. And that's really nice. However, it's also not as flexible, because you can't express things like, you know, this is a line of underscores that has to be at least three long, or whatever else it might be. These things get tricky. So, I may be wrong, but I find that the type-system-based approaches are less flexible, and they have a tendency to become academic very rapidly. That's really good. Yeah, I completely agree. And I want to say: this might not be the sexiest topic you've ever given, but these are some of the biggest ideas I've seen you talk about. It's tremendous. Maybe it's not the sexiest topic; I'm worried about that. Anyone else? So, you really like the pipeline operator. I was wondering: Clojure has a similar operator, but you only say it once, at the beginning, because Clojure is a Lisp. Would you find that syntax less obvious? It's still a pipeline, but it only needs the pipeline operator once.
Since we seem to be chaining a bunch of them together, repeating the operator is just the way we're using it now. I think right now maybe I'm just used to it, but I've developed a layout style where I put the pipeline at the beginning of the line. So I'll start off with something, and then I'll have pipeline thing, pipeline thing, pipeline thing. And I've come to like that style for two reasons. First of all, it's significant when you look at the code; it stands out. Seeing lines that start with those characters, they do kind of jump out at you. So it's easier for me to see the structure of the code that way. The other reason is that it's way, way easier to refactor my code, because each line is a self-contained transform. If I want to add a transform, I just add a line. If I want to take a line out, or change the order, I just move the lines around. So in that respect, for me, it works really well. Now, the Clojure way of doing things, if you lay it out in the same kind of way, lets you do the same kinds of things. So in that way, it's a question of which particular flavor you like. But I kind of like the Elixir way of making it explicit. Next question. I know you were involved in writing documentation; are you involved with ExDoc at all, or is that something you just use? Not really; that's just a tool I use. Do I write what, sorry? Do you write a lot of docs? Yes and no. Do I write a lot of docs for my code? If it's to be used publicly, then I will document the APIs, because I'm a well-behaved citizen of the community. And if I'm doing something that's not obvious, then I'll definitely document that. But what I try to do, increasingly, is to make my code document itself. So, for example, that slide where I took the code and broke it into functions with names: to me, that helps document the code. And that's the kind of approach that I would like to take.
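The layout style described above can be shown with a small example (the specific string and functions here are my own, just to illustrate the shape): each transform sits on its own line, starting with |>, so steps can be added, removed, or reordered freely.

```elixir
# One transform per line, pipe operator at the start of each line.
# Deleting, adding, or swapping a step means deleting, adding, or
# swapping a single line.
"Elixir rocks"
|> String.downcase()
|> String.split()
|> Enum.map(&String.capitalize/1)
|> Enum.join(" ")
# => "Elixir Rocks"
```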
My personal take is that documentation is just comments, and once you start putting comments into code, you have two things to maintain. People being people, I can pretty much guarantee that the comments will be out of date well before the code stops being used. So if I can make the code document itself, I'm way, way better off. Is that the time? Yeah. All right, thanks everybody. Thank you.