Good morning. I'm really excited to do this talk because it's actually the first time I can give a talk about Elixir and assume that the audience knows what Elixir is. So that's really great. It's really a change of pace, because usually I'm going to other conferences and I'm always giving the introductory talk: what Elixir is about, what the language goals are. So this talk is not about that. This talk is about Elixir, past and future. I think it's expected to talk about where the language came from, exactly because now we are close to 1.0, right? And I think there are important lessons, things that happened throughout this process, that we could share and that will help the community grow together. So if we want to talk about Elixir's past, one thing we could do is go to the Elixir timeline. I got this from GitHub. On the vertical we have the number of commits, I think per week, and then we have the whole year of 2011 in there. And the first commit was right at the beginning of 2011. It was the 9th of January or something like that. But I actually want to rewind a little bit more. I want to go back a little before 2011. But not too much, it's not about my birth or anything like that. So I'm going back to 2005. And I chose this article, The Free Lunch is Over, because it was about this time that I was personally starting to become aware of the changes that were happening. The Free Lunch is Over is an article by Herb Sutter. And the free lunch he's referring to is not about this conference. They still have free lunch, so don't worry. What he was talking about is that throughout the previous two decades or even more, you wrote software, and then you could just wait like two years, and your software would run twice as fast. That was amazing, right? You didn't need to do anything. Just wait. And damn, it's faster.
But as we have heard this story a couple of times by now, and it's almost 10 years since that article, our machines are not getting any faster in terms of clock speed; we don't have machines with 8 gigahertz CPUs. Instead we're starting to have more and more cores. So if we actually want to leverage all the capacity of the machine, it's not just about waiting anymore. We need to change the way we write software. So the free lunch is over. And then other important things happened. For example, in 2007, we had the Programming Erlang book, published by the Pragmatic Programmers and written by Joe Armstrong, one of the creators of the language. And I have it here because that's when I first started to hear about Erlang. It brought Erlang into other communities, and in particular, it brought it to communities I was involved with. Another event in this timeline is that in 2009, we had a Rails release that said Rails was thread safe. And the reason the Rails core team did that is because, if you were around the Rails community at that time, there was pressure on the Rails core team to make Rails thread safe, exactly because Rails developers wanted to leverage the ability to use all the cores on the machine and use the machine efficiently. And one year later, I joined the Rails core team. And I actually found out that Rails was not really thread safe. That's why I put it in quotes: because I was constantly fixing bugs. There were actually many reasons, which I'm not going to go into, why Rails was not actually thread safe. So I was working on fixing those bugs. And it was kind of frustrating. It was kind of hard. And it was about that time that I started to put the pieces together. I was doing this work and it felt hard, it felt frustrating, but I knew that concurrency was becoming more and more important.
And I knew that there were languages, like Erlang and many others, that solve this concurrency problem well. I needed to do something. I needed to find ways to make this situation better. And then I started to study, learn, and play with other languages. Throughout this process I was reading many books, trying to get ideas from different places, and I found this book, Seven Languages in Seven Weeks, by Bruce, who will be speaking later today. I was actually familiar with the majority of the languages in the book. But the thing that really stood out is that it covered languages like Haskell, Scala, Clojure, Erlang, and a few more, and it talked about those languages and also their concurrency models. To me, the book really put them side by side and said: here are the advantages of the approach followed by this language, here are the disadvantages, here are the tradeoffs. And after I read the book, what really stood out was the Erlang virtual machine. I said, I want to write software that's going to run on this runtime, on this ecosystem. So that's the lesson I got from it. And the way I like to say it is that I liked it, so I went and bought more books on Erlang. I also really liked Clojure after I read the book, so I went to study Clojure too, and it shows later in some of Elixir's features. And the way I like to put it, when I was studying Erlang, writing software in Erlang, trying to put some things in production, is that I liked everything I saw, but I hated the things I didn't see. At first, the things I didn't see were a little bit unclear to me. But I decided: OK, I want to try my own language, just for fun, to see if I can get some of those ideas, some of those things that I was missing, in there, and see how it plays out. So that's how I got to the first commit. And it was quite active.
You can see the first four months, into April, were actually very active development, OK? And here's the interesting thing, which I think very few people know about, probably two or three: here is how Elixir was as of April 2011. You could actually define objects with something called `defobject`. It had a prototype-based object model, like JavaScript or Self. And because one of the things I was missing was metaprogramming, I actually had metaprogramming with eval everywhere, just passing strings around and evaluating them. Like they say, eval is evil, and it was everywhere. And because of how it was designed, it was slow, extremely slow, because every time we wanted to call any function, we basically had to go through the whole prototype chain. And I was even able to break some of Erlang's features, like hot code swapping, the ability to upgrade code live in production. So this was my spike, right? This was the end result of four months of work. And you can see here that after April, there were no more commits, right? Because I was in the depression valley. I was looking at my code and saying: well, this sucks. It's really horrible. Four months of work, and it's not good. And then there was a period of one last hope where I thought, well, I can make this work. And then you can see that, nope, I cannot. But the good thing is that I knew it sucked, right? That it was not good. And the reason is in how I built it: I was basically taking the ideas I was familiar with, bringing them over, and trying to slam them in whatever way I could, right? Oh, I guess I can fit it here, bam. Just trying to make those things fit. And that's why we got a bad result. But the good thing is that while doing this, I was actually learning the shapes, right? I was smashing pieces together, and then I noticed: well, this doesn't work quite as expected.
And I was learning the shapes, and maybe, oh, maybe there could be something else that I can put here. And it was from this process that we came up with the language goals, OK? It was from this process that I stopped and said: OK, I'm doing eval with strings. Why do I want that? Oh, I want it because it's metaprogramming. I can write code that writes code. And why do I want that? Well, because it can make me more productive, right? I have code that's doing work for me, that's generating code. So I think productivity is one of the language goals. That sounds reasonable. And why do I want objects? Is it because I like to design my software with objects, or is there one property of objects that I'm really interested in? And I found out that I was actually interested in polymorphism. I really like the idea of being able to say: you can give me any object. That's what we do in object-oriented languages, right? You can give me any object, and as long as it implements those methods, those contracts, it's going to work just fine. So OK, I want polymorphism. And why do I want polymorphism? Oh, because I can write extensible software, right? I can write software that works with a huge variety of objects or data structures, and it's going to be extensible. And then I came up with the goal of compatibility, because it doesn't make sense to build a language on the Erlang virtual machine if I'm going to make it extremely slow, or if I'm going to break important features provided by the environment and the ecosystem. So what happened is that after I had those goals, I had a tinkering period. And this is a period where you can't just sit at a desk and say: OK, I'm going to design this thing now and it will be ready in eight hours. You read something, and then you let it rest. That's the tinkering period: you do some research, and then you completely let go of the subject. You don't think about it.
And then you're reading something, and an idea comes back up, and you say: oh, this could lead us somewhere, right? And then you do a prototype. And it doesn't work, but it's OK. You just let it go. You go to sleep. A lot of rest is extremely important. So I had all this tinkering period, and then we see a small spike in October. And this is where we had Lago Lang. Basically, it was just a specification; it was not a language that was implemented. I was traveling, visiting a friend, Yehuda Katz. And I was telling him what I was thinking about language design, and we sat down, tried to do a couple of things together, and experimented with a couple of ideas. And what we wrote as Lago Lang came to be the basis, the foundation, for Elixir. Basically, the conversation I was having with Yehuda was: OK, I want to have a language where I can do metaprogramming. And I had found out that macros are extremely flexible, and I would like to play with this idea of having macros in a language. And those macros are Lisp macros, as we see them in Lisp languages. But there was the question of how to combine Lisp macros with a natural syntax. And beyond that, how can I guarantee explicitness? Metaprogramming can be very flexible, and exactly because it's flexible, we want to make it as explicit as possible. So the idea we came up with, and you can see it's the foundation of Elixir, was an extremely regular syntax. Everything was a function call. So: add one and two. We have the function call, and below we have the representation of that function call. So we have function calls and things that are literals, like numbers, right? One and two are literals. And everything was written in this way, everything.
If you defined a module, you would have parentheses in there, parentheses in if, everything. You had to have parentheses everywhere. And then we looked at the code and said: OK, the syntax is extremely regular, the representation is extremely regular, but it needs to look more natural. So what can we do? OK, let's make parentheses optional, right? Because now I can define a module without parentheses around it. I can call if without parentheses, case, and so on, OK? And you can see that we added this syntax sugar, but the representation of the code stays the same. It didn't change. So on to the next step: OK, let's add operators, because I don't want to be writing them in prefix notation. So we have operators in the language, but the representation doesn't change, right? It's still the function name, which in this case is the operator plus, and the arguments, which are one and two. So after we had this foundation, we could think about metaprogramming. And that's why today we're able to write code like this. I can have quoted expressions that return me the code representation, and I can basically generate all sorts of things in there. And here we can already start to see some of the explicitness, because quote and unquote are very explicit. If you have ever used a Lisp, it also has quote and unquote, but with a shortcut syntax, right? Sometimes it's a backtick, sometimes a comma, just one character. And I said: no, I want to have it explicit. I want to know exactly where the quote starts. I want to know exactly where I'm quoting, to send the underlying message that macros are flexible and, because of that, we need to use them very responsibly. I don't want a shortcut syntax that sends the message that you can just type that on a whim and let it go. So it needs to be explicit. And then we took it even further.
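To make those shapes concrete, here is a small sketch of what the quoted representations look like in today's Elixir (the exact metadata in the middle element of each tuple varies by version, so it is elided as `_meta` below):

```elixir
# A function call is represented as {name, metadata, arguments}:
quote do: add(1, 2)
#=> {:add, _meta, [1, 2]}

# Operators keep the exact same shape; only the name changes:
quote do: 1 + 2
#=> {:+, _meta, [1, 2]}

# unquote explicitly injects a value into the quoted representation:
number = 13
quote do: 11 + unquote(number)
#=> {:+, _meta, [11, 13]}
```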
So now, every time you want to use a macro from another module, you need to require that module beforehand. And when you require a module, it's only going to be visible in that same context. We will never have a way in the language to inject macros globally. You cannot go to your mix file and say: OK, I want this macro to run on all modules. We had a discussion about that. We are never going to have that. It will always be explicit. I can always see the require, or a use, or an import in a module, so I can always see that I am depending on that particular module that provides macros. And I think we were able to reach a very reasonable set of trade-offs. I like to contrast this, for example, with the parse transforms we have in Erlang. Because in Erlang, you can inject parse transforms globally. You can go to the command line and say: when I'm compiling all this code here, I want it to go through this parse transform. And the parse transform actually receives the code of the whole module. So after you add a parse transform, you don't know where it's working, what part of your code it's changing. Elixir macros, on the other hand, are explicit, and a macro can only change its arguments. It cannot change the surrounding environment. So we've got a very good set of trade-offs, and I was very happy with the result. And that's why we have this spike: wait, now I think we have defined the basic syntax, and now I think it will work. And the nice thing is that since then, we never had another depression valley. This is from 2012 forward. Probably the only valleys we have now are ElixirConf valleys, when all the committers are at ElixirConf. So it's a really good valley to have. And code-wise, we are still working actively on the language, as we can see from this graph. But a bunch of interesting things that are not code-wise have happened since 2011.
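A minimal sketch of that explicitness (the module and macro names here are made up for illustration): the caller has to `require` the module before its macros become visible, and the macro can only rewrite the arguments it receives.

```elixir
defmodule Loud do
  # A macro receives quoted expressions and returns quoted code.
  defmacro unless_zero(value, do: block) do
    quote do
      if unquote(value) != 0, do: unquote(block)
    end
  end
end

defmodule Caller do
  # The dependency is explicit: without this require,
  # invoking Loud.unless_zero/2 is a compile-time error.
  require Loud

  def run(x), do: Loud.unless_zero(x, do: :ok)
end
```

Reading `Caller`, you can see exactly which module supplies the macros it depends on; nothing is injected from the outside.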
So, for example, in January 2012, right after that spike (that spike happened mostly during my holidays, the end-of-year holidays; that's the advantage of living in Poland, the winters are so cold that you don't want to leave home, and then you can code a lot), I went to my company co-founders and did a presentation to tell them: you know, you should let me work on this project full-time. And they bought it, right? Yes, the biggest prank I ever pulled. It has lasted two and a half years already. Basically, what I said in this presentation is that there was, in my opinion, just one language that had everything I was planning and could see in Elixir. And this language was Clojure, basically, because, for good or for worse, Clojure is a dynamic language, it focuses on productivity, it focuses on extensibility, and it focuses on concurrency. So those are the four things, right? And that's exactly Elixir: Elixir is a dynamic language that focuses on productivity, extensibility, and concurrency. So I told them we have just one language going in this direction, and it's running on the Java virtual machine. So we've got to have another option that works in other places, not on the Java virtual machine, and the Erlang virtual machine is an excellent candidate for that. That's basically what I told them. And they said: yeah, definitely. Let's try this. So the company came in behind the project. Our designer did the beautiful logo. I got some WordPress templates and did the website. And then we launched it. And it's the website we have today. We improved it over time, but it's basically the foundation of our website. And still that year, we launched the first Elixir version: around May 2012, Elixir 0.5. And then in September 2012, I did the first Elixir presentation, at the Emerging Languages Camp that happens at Strange Loop.
And it was really good, because at the beginning of the year I had set that as a goal. I wanted to take the language somewhere, go there and talk about it, and get some feedback. And actually, there is someone here who first heard about Elixir from me at that event, OK? Yes, and it's really nice. Does anyone else remember watching that presentation or hearing about Elixir back then? Yeah, so it's really cool, because I did the presentation there and got really good feedback. And I thought: OK, this is working well. And then in the next year, two really important entries in the timeline: in May 2013, Dave Thomas announced Programming Elixir, and right after, Simon St. Laurent announced Introducing Elixir, by O'Reilly. And this was really big, because it was at that point that, to me, the language gained critical mass, in the literal meaning of the words: there was enough happening in the community and around it to sustain the language itself, OK? Because up to that point, we as a company, which at the time had, I guess, between 20 and 30 employees, had already been investing in this project for a year and a half. It's a huge investment. And I was uncertain, right? Is this really going to go somewhere? It's one year and a half of my work, and it may not go anywhere. But when I saw that there were people like Dave and Simon betting on the language and writing about it, I said: yes, this is working. And since then, a lot more books have been announced. We got screencasts; a little bit later, Eric joined and is helping build the language too. We had a whole track at Erlang Factory. We are having ElixirConf now. So it's going really, really well. And the nice thing is that after all this time, two years and a little bit past the first release at 0.5,
the language goals didn't change. The language goals are exactly the same. What changed is how we represent those goals. So now, when we talk about productivity, I say that productivity is first-class documentation, for example, because you're not going to be productive in an environment where nothing is documented, OK? That's why documentation in Elixir is easy to read, easy to access, easy to write: because we want an ecosystem that's really focused on documentation, because that brings productivity at the end of the day. It's also very good tooling, right? I want to install Elixir and be ready to start working on my project right away. I don't want extra steps. A good test framework, a good interactive Elixir shell, OK? And now we have Hex packages, which Eric presented, and which are extremely important for productivity. I want to depend on something? I just add a tuple with two elements to my mix file and run a command, and it is there. You are ready to work. As for extensibility, I moved macros from productivity to extensibility, because having macros under productivity can send the wrong message that macros are there to remove code duplication. And that's not what they are about. Actually, if you are thinking about code duplication, probably the best solution is a function. You don't need macros to reduce code duplication, OK? Macros are about extending the language: taking the language and extending it to a particular domain that the language was not aware of at first. Testing is a good example, or Ecto, which is a tool that communicates with databases. It's basically extending the language to new domains, and being able to write our own data types and extend those data types with protocols, which is how we have polymorphism. And compatibility, right?
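For example, that two-element tuple in the mix file looks like this (the package name and version here are only placeholders):

```elixir
# In mix.exs, inside your project's module:
defp deps do
  [{:some_package, "~> 1.0"}]
end
```

After that, running `mix deps.get` fetches the dependency from Hex, and you are ready to work.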
So after that shock, after that Elixir that had objects and was really, really slow, at first I was like: OK, we don't touch things related to the virtual machine. That's sacred. We're not going to touch that for now. But with time, we got the maturity to say: you know, we have the concurrency there, we have the distribution, and we're not only going to embrace it, we're going to extend it, OK? So the goals are the same, but we evolved in those areas and came up with our own ideas, as a community, of how to represent each goal. So where are we today? That was the past; where are we today? We are at ElixirConf. The current version is 0.14.3. And it was a very important release, because it said: there are no more planned backwards incompatibilities. The keyword here is planned, OK? Because we may have unplanned backwards incompatibilities. And I think in the next release we are going to have two small deprecations, but they are quite small. And the next release is planned to be exactly Elixir 0.15, because it's going to introduce the logger. Today, when you are building stuff with OTP using a GenServer or a GenEvent, when you create a process and it crashes, it prints everything in Erlang terms, right? With the logger, we are going to have a very good logging API. And not only that: all the reports that come from Erlang will be translated into Elixir terms. And we have a couple of pending issues left that we'll probably fix through 0.15.0, 0.15.1. They are minor ones. And then we'll go to 1.0, right? My current time frame is that we'll have 1.0 in August, which means now is the best time to jump in, right? We are almost there, OK? So that was the past and the present. Now let's talk about the future, OK? And this future is exciting because it's the unknown future. It's not about 1.0, because everything that had to be discussed about 1.0, we already did.
We have the elixir-core mailing list, where we discuss the language development, and for a year or even more, the majority of the language features have been discussed exhaustively in there, OK? So everything is planned. There is nothing really unknown about 1.0; 1.0 is just about getting there. So this is the unknown future: features that may be in Elixir in a month, in a year, in five years, or never, OK? And the nice thing about this unknown future is that all the progress and research that happens in Erlang, we can use it, right? We get it for free. So I want to start with the Erlang part, just to give an idea of what is happening there and things we could explore. And the interesting thing is that we don't necessarily need to wait for the future, because there are a lot of interesting features in there today that we don't use fully. For example, tracing, OK? Erlang provides two functions called erlang:trace and erlang:trace_pattern. And the thing is, when you are in production, we have a bunch of processes exchanging and sending messages to each other. And if something is going wrong and you want to see what is happening, we can't use debugging tools in the traditional sense, because as soon as you pause a process to see what is happening in there, the whole world around it continues running. So we can have other processes sending messages to it, but because you paused it, they're not going to get a reply, so those processes are going to crash, which can make other things crash. And by the time you get to see what is happening in this process, the whole environment in the runtime has changed completely. So what we do instead is tracing, right? And tracing in Erlang is really, really powerful, because you can trace function calls, and you can trace things related to the process life cycle: which process died, which process was created, how the processes are interacting with each other.
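As a minimal sketch of what those primitives look like from Elixir (using the standard `:erlang.trace/3` and `:erlang.trace_pattern/3` calls; the exact trace message contents vary by runtime version):

```elixir
# Ask the runtime to report function calls made by the current process.
:erlang.trace(self(), true, [:call])

# Only calls matching this module/function/arity pattern are reported.
:erlang.trace_pattern({Enum, :map, 2}, true, [])

Enum.map([1, 2, 3], fn x -> x * 2 end)

# The trace message is delivered to the tracer's mailbox; in IEx,
# flush() would show something like:
#   {:trace, #PID<...>, :call, {Enum, :map, [[1, 2, 3], #Function<...>]}}
```

The same mechanism scales from one function in a shell session all the way to tracing process lifecycles across a live node, without pausing anything.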
So it's a really powerful tool, and I think we can explore it in a lot of wonderful ways. And there is already a tool starting to do that, one called dbg, by James. I recommend everyone to try it out, because once you go beyond one node and we aim more towards fully running software in production, those tools are going to be very, very important. And I'm confident we can come up with very interesting ways of exploring those tracing mechanisms. OK, so this is a feature that has been there for a while, but we haven't used it fully yet. But there are also things in Erlang that we use and maybe we should not. For example, IEx. IEx is the interactive shell, OK? And don't get me wrong, IEx is fantastic. It has fantastic helpers, we can do remote shells, we have pry, which is useful for debugging in development and during testing, and a bunch of other stuff. We can access the documentation and so on. But IEx also uses Emacs key bindings and is poorly customizable. And that's because it runs on top of the Erlang shell mechanisms, OK? Which was great, because we could bootstrap really, really fast: I could just use them and have a very good tool really, really fast. But it has its disadvantages too. For example, if I ask for the documentation of a module, it's going to print the whole thing and then I need to scroll back up. I would like to have a pager, right? Or to navigate the documentation. Depending on where it's running, we could even have links working between the documentation pages. There could be a lot of interesting things. Something else is that, because you're using the Erlang shell, it kind of leaks. For example, if you hit Ctrl+G or Ctrl+C, you're going to get different menus, and you need to write everything there in Erlang terms, OK? I would like to be able to write and see those things in Elixir terms, OK?
So this is something we could explore, and we could explore it in a lot of different ways. Maybe one way is to extend the shell that comes with Erlang/OTP to allow us to customize those hooks, right? And give us a little bit more flexibility. If we say that the shell is so important for our daily workflow, it doesn't make sense to be constrained to one editor's bindings, right? So we should maybe have customizable mappings and things like that. It's a really nice area where we could explore and increase productivity even more, because it would make the tooling better. So those are the things that are there today that we can explore and change, but there's also a lot of interesting research coming from Erlang. My favorite is a tool called Concuerror, OK? Have you heard of Concuerror? It's amazing, because imagine this: imagine you have two processes, OK? You have a client and you have a key-value server that receives a key and a value that you store, and then you can read it later. So you can send a message: I want to put this key and this value. And later you can say: oh, I want to get that key I stored, and it's going to return the value to you, OK? So you write this code today and it's fine, and then you put it in production, and then you say: wait, this key-value server is actually a bottleneck in production, because there are a lot of people reading all the time. OK, I want to do something about it. And then you say: you know what I'm going to do? I'm going to use an ETS table, right? So now, every time I want to write a key, I'm going to send it to the server, because I still want to serialize the writes through the server, and then the server writes to the table. But every time I want to read, I'm going to read directly from the table. So you do this optimization, you put it into production, and your code is going to fail, right?
Things are not working as expected, and the reason is that you're not acknowledging the writes, right? You're expecting exactly this sequence to happen: you write and then you read. But the key-value server could be busy doing other stuff, OK, and then the write is going to happen just after the read. So when you read, it's going to crash. You just optimized your code and now you get a crash. It doesn't make sense. So instead of waiting for this to happen in production, Concuerror can actually find this stuff at development time, OK? Concuerror is systematic concurrency testing. Basically, every time you have communication, or points where you're sharing state, like ETS, it instruments those points and systematically generates all the possible combinations in which this code could execute, OK? And that's how it's able to tell you: wait, this flow can happen too, and the result you expect is not going to happen in this interleaving, in this combination of events. So you can check out Concuerror, there is a website, and the nice thing is that we could have it in Elixir. One way we could benefit from Concuerror in Elixir is that every time something goes wrong, it prints a report, so we could start by having the reports written in Elixir terms, OK? But we could also think of ExUnit integration. You could imagine writing a test and just putting a Concuerror tag on top of it, right? And when you do that, it automatically runs Concuerror on that particular test. And that would be really, really awesome, because we could do more systematic concurrency tests in our software, OK?
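A sketch of the optimization described above (module and table names are made up): writes are still serialized through the server, but reads bypass it and go straight to the ETS table, so a read can race a write that has not been processed yet, which is exactly the kind of interleaving Concuerror would flag.

```elixir
defmodule KV do
  use GenServer

  def start_link, do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  # Unacknowledged write: cast returns before the server handles it.
  def put(key, value), do: GenServer.cast(__MODULE__, {:put, key, value})

  # Read bypasses the server entirely and hits the table directly.
  def get(key) do
    [{^key, value}] = :ets.lookup(:kv, key)  # crashes if the write raced
    value
  end

  def init(:ok) do
    :ets.new(:kv, [:named_table, :public, read_concurrency: true])
    {:ok, nil}
  end

  def handle_cast({:put, key, value}, state) do
    :ets.insert(:kv, {key, value})
    {:noreply, state}
  end
end

# KV.put(:a, 1) followed immediately by KV.get(:a) can crash:
# the lookup may run before the server has processed the cast.
```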
And there are a bunch of other initiatives in Erlang, right? Francesco and Robert are here, they're going to speak later, and if you want to know more about these things, you can ask them. Robert is involved in another very interesting project, which is SD Erlang, Scalable Distributed Erlang. So it's always a good place to explore, OK? So that's a very exciting future already, right? But we are also going to write our own future, what we want. And to show some ideas (I have some ideas, but they are just ideas, OK, of what we can do, what we can explore): for example, we could have discriminated unions in the language. Imagine that you're implementing a calculator, right? You type one plus two into the calculator, and you need to parse that input, and at some point you have tokens: I have the plus token with the left number and the right number, I have the minus token with the left number and the right number, and so on. And that's how we do the calculation for each of the operators we have. Now imagine there are a couple of other places, say two more, where you need to repeat the match on those exact same tokens. So what happens when, in the future, we want to add exponentiation, for example? When you add exponentiation, you can add it in one place but forget to add it in the other places where you're matching, right? So you have a bug, and it would be nice if the language could actually tell you: hey, you forgot to check this case, right? That's what discriminated unions are about. You could define a union called, say, calculator operators, and then you have a plus that receives a left and a right, and so on. You define exactly
You define exactly what you want to match and what the representation of those things is, and now when we go back to that code, every time you want to match, you can match on the discriminated union. So in the future, if you want to add a new operator, you just add it to the discriminated union, you compile your code, and it's going to say: you forgot to handle this case here, and this case here, and this case here. And the nice thing is that if you have complex patterns, suppose in all of them you also want to check that left and right are integers, you'd be able to remove that verbosity too, because the patterns and the conditions would all be embedded in the plus, minus, times, and divide definitions of the union. And the nice thing about this is that because the language was built with macros, and much of the language is built with macros itself, we don't need to wait for this to be implemented in the language. We could go home and write this code today. There is no need to fork the language or anything; it can start as a separate project. So I like to say that from now on the language development is decentralized, because we can all play and explore the ideas we want, and we don't need to wait for anything to happen. The foundation is there. For example, another interesting idea is for comprehensions; we could extend them more. The way for comprehensions work today: here I am saying, for every user in users, if the user is older than 18, I want to get the favorite drinks for that user and return a list of tuples containing the username and the favorite drink. This is a very powerful construct. And the way comprehensions work is that we have generators, this one has two generators, and we have filters, which filter what you are iterating over.
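That comprehension can be written in Elixir today; the `users` data here is made up for illustration:

```elixir
# Hypothetical data: users with an age and a list of favorite drinks.
users = [
  %{name: "ana", age: 25, drinks: ["coffee", "mate"]},
  %{name: "bob", age: 15, drinks: ["soda"]}
]

# Two generators (user <- users, drink <- user.drinks) and one
# filter (user.age > 18), returning a list of {username, drink} tuples.
pairs =
  for user <- users, user.age > 18, drink <- user.drinks do
    {user.name, drink}
  end
# => [{"ana", "coffee"}, {"ana", "mate"}]
```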
And this is how comprehensions work today. We had an older comprehension style; for is relatively new, and the reason we did it is because we really wanted to make it powerful. So users can be any Enumerable: you can pass sets there, lists, dictionaries, and they would all work, you can comprehend over all of them. And when we did this new for comprehension style, we also added a new option, which is :into. So let's suppose you want this result, each user with their favorite drink, as a set. You can actually say: I want to take this result and put it into a set, and you're going to get a set out of it. And this works today. And basically, you can pass anything you want to :into as long as it is a Collectable. A generator is kind of the art of taking values out of things, while a Collectable is about collecting those values and putting them somewhere. And you can pass files in there, so you can have loops that write into files. For example, you can collect into the standard output, and then you have strings coming from the block: in this case, for every line it says this user likes this drink, and every time something new comes in, it prints it to the standard output. So this is basically how it works today, but as I said, we can extend it, we can take it to the next level. For example, we could have ordering: I actually want to order my results by the user's age. And this is expressive, because you can see that I'm ordering by age, but age is not in the final result. Try to imagine how you would write this code if you didn't have this construct: you would have to put age in the result, then you would need to sort, and then you would need to take age off the result. It adds more steps.
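Without an order_by construct, those extra steps look roughly like this today; the data shape is hypothetical, flattened to one drink per user for brevity:

```elixir
# Hypothetical data: users with a name, an age, and a favorite drink.
users = [
  %{name: "bob", age: 40, drink: "beer"},
  %{name: "ana", age: 25, drink: "mate"}
]

result =
  users
  |> Enum.sort_by(& &1.age)          # sort by age first...
  |> Enum.map(&{&1.name, &1.drink})  # ...then drop age from the final result
# => [{"ana", "mate"}, {"bob", "beer"}]
```

An order_by option in the comprehension would express the same thing in one construct, without age ever appearing in the intermediate result.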
And with the order_by construct it's expressive, and we can actually optimize and try to figure out the best way of doing the operation. And since I already had order_by, why not group_by too? It's the next logical step. So we could group by the drinks. And this is nothing really new; it reminds everyone of SQL, right? A query language does exactly what we are doing here: we're querying data structures and trying to get information out of them. And we have a bunch of interesting implementations of the same ideas in other languages. There is a paper from Haskell called Comprehensive Comprehensions that explores the same ideas. Common Lisp has a loop macro that can do all sorts of stuff; the documentation is five to ten pages, really, of all the possible combinations of what you can do. We don't need to go that crazy, but it shows different ways to approach it, and there is a more recent Common Lisp package called doplus with, again, the same ideas. And again, we don't need to wait: we could go home, build this new comprehension, and see how those new ideas work. Are they really useful? So you could go home today and say, okay, I'm going to implement my_for, and then you start to write this code and figure out that you actually cannot. Because in Elixir we don't have variadic arguments: you cannot have a macro that receives an arbitrary number of arguments, and here we can have two generators or ten generators, one filter or ten filters. So this doesn't work. But there is an easy cheat, which is basically to take the underscore out of my_for and write it like this, and now it works. And the reason is that this code is the same as this, right?
So now what you're doing is calling my with just one argument, which is the whole for expression, and you can transform it into something else, and then into the actual code that's going to execute. And if we explore this idea, if we go in this direction, there are a lot of interesting concepts we can build. We could have a stream for: today, every time we have a comprehension, the results come right away, but we could have a comprehension that returns a stream, and the comprehension executes only when you actually want the values. Or, even more interesting, we could have a parallel for. In this case, for every user, if the user is an investor, I want to fetch their profile. And the fetch can be a long operation: you may need to reach another service. So this would just create processes for you, do everything in parallel, and give you the result back. And there are many different things you can explore if you go in this direction. And this reminds us of the compatibility goal, because you would be using the virtual machine's foundation for concurrency and distribution to explore those very powerful concepts. And then we can ask: should this be unbounded? If I have 100 users, should I create 100 processes, or should I have a pool of processes that asks for users to compute profiles for? How can I have pipelines of data going through and computing things in parallel? And I could go on; the original version of this talk was almost two hours long, because I just put in all the ideas that came to my mind. But that's not the message here. The message is that everyone here has their own ideas too, of things you would like to see in the language, or things you would like to see in the ecosystem, your own projects.
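Going back to the parallel for idea: it does not exist as a construct, but what it describes can be sketched today with tasks. Here `fetch_profile` and the user data are made-up stand-ins for a slow remote call:

```elixir
# Stand-in for a slow call to another service.
fetch_profile = fn user -> %{name: user.name, profile: :fetched} end

users = [%{name: "ana", investor: true}, %{name: "bob", investor: false}]

profiles =
  users
  |> Enum.filter(& &1.investor)                                     # the filter clause
  |> Enum.map(fn u -> Task.async(fn -> fetch_profile.(u) end) end)  # one process per user
  |> Enum.map(&Task.await/1)                                        # collect results in order
# => [%{name: "ana", profile: :fetched}]
```

Note this spawns one task per element, which is exactly the unbounded-versus-pool question: with 100 users you get 100 concurrent processes.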
And all of this is great, and that's exactly the idea: we are decentralizing the language and ecosystem development now, and the foundation is there. And the message here is that, because we are new as a community, we are going to build a lot of these things. We are going to try to bring in the ideas we are most familiar with, and we are going to try to smash them in. And that's fine, as long as we remember that we are doing this to fill the shapes. It's a learning process, and we need to be careful in this process to find and bring in the good ideas, but leave the bad ideas out. And for that, we need a tinkering period. So let's not forget this period where we do research and then completely let go of the problem, because it's not by banging your head against the desk that you're going to come up with a solution. There's the sleep, there are the fun parts, there are the prototypes. So we need the tinkering. And yeah, happy tinkering, everyone. Thank you.

I noticed you didn't reference LINQ from C#, which seems really similar to the comprehension syntax you propose. LINQ in C# lets you do it for SQL, XML, and datasets, which I guess would just be Enumerables. So if we had the new comprehension syntax, would it make sense to change or wrap the Ecto syntax to use the comprehension syntax instead?

So the thing about LINQ, and I discussed this a lot with Eric when we were working on the Ecto project: a lot of the criticism against LINQ is that you cannot actually write a query that is going to be good both for in-memory data structures and for going to the database. There will always be semantic issues, and you can write a query that is very fast at one extremity and slow at the other, and vice versa.
So when we wrote Ecto, which is about writing queries that go to the database, we explicitly said we are not going to have a way to work with in-memory data structures. And the reason I mentioned the Haskell Comprehensive Comprehensions paper is that Microsoft has a fantastic research team, and a lot of the people who worked on LINQ took the ideas from LINQ and said, okay, how can we add this back to Haskell, for example? So that is the inspiration: to do the same process they did, taking the ideas out of LINQ and putting them into the language. Thank you.

No, I just wanted to make Jim run back and forth up the room. So, every good community needs a great origin story, and we got 90% of it here. What we need is the name. Why the name? Where did it come from?

Oh, I have absolutely no idea. Really, it just appeared, and, well, it sounds good. I'm sorry. Yeah. It's easier to Google than Go, that's for sure.

One of the things I always struggle with is that we're all good at using and consuming the things you put so much blood, sweat, and tears into. How can we help grow the community? I know there are a lot of things that can be done, but is there one place we can go that says: here's where we need help, documentation, anything, to get involved and try to push it forward?

So yeah, this is exactly one of the ways you can help, if you're thinking about code: go put your ideas into practice, explore them. And if you ever feel a little bit lost, remember the language goals and what we're focusing on. If you start a new project, remember that documentation is extremely important. And remember that macros are not your API: they are flexible, they are good for extending the language to your domain, but they are not the API themselves, they are an extension mechanism.
And then we can build the ecosystem around these ideas, and I think we are going to get a very powerful result: a very flexible, expressive, fun community. So that's one of the ways to help. And then there are all the other ways, as you said, like code and documentation. How can we make the documentation more accessible? One of the challenges, if you go to functional programming conferences, is: how can we get people to think functionally? This is something we need to answer too; it's our problem as well. So we can think about all those ways: how can we teach people to think functionally? Let's write materials on that, have meetups, and find all the other ways we can help the community grow, make more accessible materials, and so on. So yeah, at this point the language itself is really getting to 1.0, and 1.0 can mean a lot of things. It can mean, okay, this is it, we're not going to change this; but for us, 1.0 means we've got the core from which to really grow. And yeah, that's how we can spread.

This is kind of a question for you and Eric both. Where do you guys see Ecto going? Do you see adding support for Riak and other databases? What do you see in the future for Ecto?

So yeah, we want to add support for other adapters. The thing is that right now we are both focused strongly on Elixir and on Hex, which are higher on the priority list. But my plan is that after 1.0 is out, I can focus more on those tools that I really would like to; we just don't have the time right now. And then we hope we can bring Ecto to 1.0 not too far from now.

Hey, so I follow the mailing list. It seems to be a little high-traffic lately. And sometimes I hang out on IRC in the #elixir-lang channel.
I'm wondering if it's time to maybe open up a forum, like Discourse or something like that, so it's easier for people to follow the language and its development at their own pace.

Definitely. I'm not familiar with Discourse, but just go to the mailing list we have and spin the idea around, and let's see. I'm really not familiar enough to say what is best or worst, but the point is exactly that: if everyone says it's the best way to consume the information, I don't care about the means, as long as everyone is consuming it well.

Hi, I'm curious about what you think are some of the major parts of OTP that have not been wrapped for Elixir yet and maybe should be.

Oh, that's a good question. So there's this tracing thing that we could really explore. There are releases, which already have a tool that everyone should use and try, and we still talk about using it to build RPM packages. And there's really a lot of stuff. For example, test coverage: it was in one of my original slides. Today, if you run mix test --cover, it's going to generate a coverage report, but the HTML for it was probably written in the 80s. We could really prettify it, and we could probably contribute that back to OTP and have nice reports with consolidated pages. So there are really a lot of things in there that we can explore and integrate more. And it also goes case by case, per need: I've heard of people actually using the telecommunications-specific stuff that comes with OTP. When you have that need, you can go there, learn it, and try to expose it in a different way in Elixir, and so on. So yeah, many, many options; I don't have a concrete answer.

On the topic of coverage, I just discovered last week that there are bindings for coveralls.io.
Coveralls.io is a website that is free for open source, so it would cover most of our stuff. It has mix tasks for coverage, and the HTML reports posted to coveralls.io are very nice; I use Coveralls for my Ruby projects.

Oh, that's nice to hear.

It's not built in, and it's built on top of the cover library from Erlang, but the reports are very nice. So that could be an alternative to better reports that we develop ourselves. Yeah. Thank you.

I recently read an interview with one of the main developers of Rust, I know you know Steve. He was talking about how one of the things that excited him the most about Rust was that it made use of a lot of fairly recent academic research into language theory. And I think you've done an amazing job at making Elixir a really modern language with really modern features. But Erlang itself I'm not as familiar with as I am with Elixir, and I know that Erlang dates back to, I believe, the 80s, when it was originally developed, and it was open-sourced later. Do you think that, going forward, Elixir will be able to take advantage of as many things as you would like, given that it's still very tightly coupled to some of the things that Erlang itself chooses to implement or not implement?

So there could be some constraints. For example, when we're talking about the parallel for, there are some parallel algorithms that would benefit from not copying the data. So it could constrain us in that way, but there are alternatives. And in 99% of the cases I would take the alternative and live with the constraints I have today, rather than the opposite: escaping the constraints and everything the runtime guarantees just because I want that 1%. And, as I said about Erlang, there's a bunch of interesting research happening there too.
It's interesting also because we have Riak, the database that runs on the Erlang virtual machine as well, and they are very close to this; there's the RICON conference around Riak. There's a bunch of distributed systems research happening there as well, which I believe we could actually use in our own workflows, incorporating some of that research nicely into Elixir, exactly because of the expressiveness we can have in the language. I had slides for that in my talk too; that's why it was almost two hours long. Cool, thank you. You're welcome.

All right, that's all the time we have. Let's give him a big hand. Thank you.