do arithmetic with Church numerals in Ruby. This is not what I'm going to be talking about, though it is pretty cool. Does my clicker work? Yes. Also, functional programming in a non-functional programming language is a myth, so you can basically skip this talk. If you want to cut straight to the cocktails already, that's absolutely fine. I'll wait a minute for people to leave if they want to. Also, if you are a professional hater, you're welcome to leave. If you see this man later at the bar and he's just complaining about every talk, if you see him complaining about my talk or any talk, just kind of nod and give him a beer. Love you, Tomash. This is me and headius, Charlie from JRuby, singing karaoke in Taiwan. This was RubyConf Taiwan this year. I was going to make a public service announcement that we're going to do another Ruby karaoke tonight. Charles and I have been pushing this meme intensely, but I made a few phone calls, and apparently the one karaoke place in Gent closed. The closest would be Antwerp, so I guess that's unfortunately not what's going to happen tonight. But we can still sing songs on the boat. This is me and Charles singing "A Whole New World", the song you just heard. I was his Jasmine, he was my Aladdin. Also, public service announcement: Ruby stdlib, the Twitter account, takes submissions. I know it hasn't been too active, I'm very sorry. If you have cool stuff in Ruby core or the standard lib that you want to show off, tweet at me, tweet at Ruby stdlib, or send an email, and I'll put it up there. Right. So now to the main topic of this talk: functional programming for Rubyists. This is me. I'm Arne Brasseur, or Plexus on GitHub and Twitter. I'm originally from Belgium. I actually lived in Gent for five years; it is by far my favorite place in Belgium, such a beautiful and lovely city. But then, with a detour of two years in China and Taiwan, I ended up in Berlin, which is my current base. I'm really, really happy to be here. Two days ago (not yesterday; Wednesday, the day before ArrrrCamp), for the second year in a row, we organized Ruby Lambic. So there's a style of beer called lambic, and from lambic they make kriek and gueuze. These are unique types of beer that you really only get in and around Brussels; there's a long story about it. So we got a bunch of people into a rented bus with a driver. We visited a brewery in Brussels in the morning, and in the afternoon we went and visited some really old bars. This bar in particular is 300 years old. It's been in the family of this guy, who is 93 years old and still running the bar. He's serving your drinks at 93 years old. So, yeah, we were very happy to be his guests. He insisted on taking the picture in front of the pig. Quite likely we'll do another one next year, so keep an eye out for that. We're still debating the exact formula this time around, but it was a lot of fun. I really wanted my good friend Piotr, who you saw yesterday talking about WeakRef and other standard lib stuff, to join us. But as it turns out, Piotr actually prefers Radler over beer. For Belgians, or other people who aren't really familiar with German beer culture: Radler is basically beer with lemonade. But luckily Piotr does have some other good things going for him. He has amazing Photoshop skills, so he let me share this beautiful picture of Aaron. And I think it's a good message: measure your stuff. Maybe something to keep in mind for this talk as well.
Oh, and also, talking of Piotr: WeakRef is cheating. Because it's written in pure Ruby, but it uses ObjectSpace::WeakMap, which is definitely written in C. So it's just eight lines of Ruby that delegate to this WeakMap. Yeah, it hasn't always been the case, fair enough, but still, I think it's important to point out. Okay. So, yeah, enough joking and trolling. The story of this talk goes back about a year. Last year at JRubyConf, I did a lightning talk about some patterns that I had been using to do stuff in a more functional way in Ruby. Then I did the lightning talk again here at ArrrrCamp, and I got a lot of good feedback on it. And I realized that there was so much in that topic that I could explore. And so on the train back from Belgium to Berlin, I started writing this Leanpub book, Happy Lambda, about functional programming in Ruby. And I would love to say that now, one year later, I'm here to announce 1.0 of my book. That's not quite the case. Turns out that writing a book is a lot of hard work. But it's about a third finished; let's say it's about 40 pages at the moment. Already more than 100 people paid for it, even though you can read it for free online, so that's amazing. Did anybody here buy the book? Yeah, that's a couple of hands. Thanks so much, much appreciated. And I do intend to finish it. I'm not going to put a date on it this time, but I am committed to really finishing this book. The last six months I basically haven't been writing so much, but I've been thinking, I've been exploring, and I think I have a lot of good stuff to go in there. Right, so before we talk about functional programming, let's talk about programming paradigms in general. So who recognizes what this is? Shouldn't be too hard. Just yell it. Close enough. It's an imperative language called assembler. So assembler is kind of where our history of programming languages starts, right? There are two things that you notice in this assembler code: you have instructions and you have places. They can be registers, they can be memory addresses, but these are the two components, and they will keep coming back when we talk about writing programs: instructions, and places that contain data. And so this is what we call imperative programming: state plus instructions. Essentially it's modeled after the machine. When we figured out how to build machines that could do calculations, we gave them these mathematical units that could do one instruction at a time, and that's how we started programming. And from there it went on, and we realized that actually programming in assembler isn't that convenient. So we figured, okay, maybe we can chunk these instructions together into bigger chunks and reason about them at a higher level. And so you got procedural programming. Then we realized we could do a similar thing with our data: take these places that contain data that is related, that changes at similar points in time, that needs to stay a bit in sync, and stick those into a box that's called an object. Bam, object orientation. Now, I know I'm not giving full credit here to people.
Like, really, the idea of object orientation (and I know Alan Kay would disagree with me), a lot of what object orientation is in practice is really this: it's procedural programming with an implicit self. And so when it comes down to it, it's still imperative programming. Okay, so that's fine, you know, and we like that, we're happy with that, but is there an alternative? Well, first of all, let me point out what is typical about this imperative programming: it's place-oriented programming. This is a term that Rich Hickey came up with. It basically means that you keep all your information in certain places, and when something changes, when the state of your program changes, when information changes, you overwrite what's in the place with the new data. And you basically throw away what was there before. The problem is that for programmers this is very intuitive: we've been using variables all our programming careers, and that's how we reason about things. But actually, human reasoning is kind of different. When we think about facts, about pieces of information, we don't really go back and change existing facts. If I say, okay, one day it's 19 degrees and then the other day it's 17 degrees, I don't go back in history and change yesterday's temperature. These are just facts that are bound to a certain point in time. Later on, you get different facts, and the older facts become historical facts, but they don't become invalid; they're just bound to a certain point in time. Right. So, instead of modeling our programming languages after the machine, we can go back and see what other ways of abstract and formal reasoning humans have been using to try to specify things in an unambiguous way. Because in the end, we need to tell the computer in an unambiguous way what to do, so we need some kind of formalism to do that. And so we have mathematics, we have formal logic systems, and in particular, we have the lambda calculus, which is a formal logic system that predates computers, but that was basically devised to reason about what can and cannot be computed, about computability. And so in the 50s, John McCarthy took that idea of the lambda calculus and based his programming language, Lisp, on it. John McCarthy was a remarkable man in several ways. Several things that we take for granted now, garbage collection, the term artificial intelligence, basically all came from him. And so Lisp is a functional programming language, and what makes it a functional programming language is a couple of things. You have the old definition of functional programming and you have the newer definition of functional programming, and it's all about where you put the emphasis. Why Lisp is functional programming is because you have higher-order functions. You have these lambdas, which are like functions, but they're also values, like numbers, like objects: you can pass them around. You can pass a function to a function, and pass a function out of a function as a return value. And so you get things like map and reduce, which take a collection and a function. The original definition of functional programming put a lot of emphasis on that. And instead of having for loops with a variable that updates, you would use recursion.
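Ruby has these higher-order functions too, so here's a minimal sketch of both ideas: map and reduce taking a function as a value, and recursion replacing a loop with a mutating counter. The sum example is mine, not from the slides.

```ruby
# Higher-order functions: lambdas are values you can pass around.
double = ->(x) { x * 2 }
[1, 2, 3].map(&double)  #=> [2, 4, 6]
[1, 2, 3].reduce(:+)    #=> 6

# Recursion instead of a loop: the intermediate state lives on the
# call stack rather than in a variable that gets updated in place.
def sum(list)
  return 0 if list.empty?
  head, *tail = list
  head + sum(tail)
end

sum([1, 2, 3]) #=> 6
```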
It's very elegant, and you keep your state on the stack and don't keep so much mutable state around. But then, when languages like Haskell came, we started putting more emphasis on another aspect, which is the purity and the immutability. So in purely functional programming, all values are immutable. We don't have variables, we don't have places that we can overwrite. We only have pure values that we can pass around, that we can derive new values from. But once a value exists, it's set in stone. So these two things, state and instructions, come back, but they look a little bit different. We have immutable values; like I said, they're just pieces of information that can no longer change. And for some things, we find that very intuitive. We would be very surprised if, say, we have two instances of the number 42, and I change the lower digit of the first instance to four, so now the first instance is actually 44, and the other one is still 42. Right? I don't know if that even registered; it's such a weird concept. The same goes for symbols: you don't have two instances of the symbol foo. Or maybe, with garbage-collected symbols, apparently you do, but from a programming point of view, that is not significant. You don't care which instance it is. The individual identity of the object disappears, and all that remains is the value it represents. So for small things, this is very intuitive. But for bigger things, for composites, we find this less intuitive. When we have a hash, when we have an array, we find it very normal, kind of expected to work that way, that we can shovel stuff into it and rewrite things at certain positions. Same for objects: you have an object with a number of instance variables, a number of fields, and you find it very natural that you can override those, change those. But it doesn't necessarily have to be the case. And when you really go down the functional programming route, that's just a rule that you accept: you don't do that. Once something exists in the world, it's set. So that's the value aspect. Then, when it comes to instructions, to functions: the counterpart of immutable values is pure functions. A pure function basically means that you pass something in, and based on what you pass in, something comes out. End of story. This means that it can't access any state outside of the arguments you gave it, and it can't cause any side effects. It can't interface with the outside world; it can't do disk, network, anything like that. It's basically like a mathematical formula: you give it something, it computes something, end of story. That's a pure function: no observable side effects. And when you use immutable values, you already don't have this global mutable state, so you naturally drift towards these pure functions. Now, this is all fine and dandy, but these are just little concepts in themselves. Why is this significant? When you start to add all this together, you get interesting properties. And pure functions in particular have the property of referential transparency. What we mean by referential transparency is that if you take a certain invocation, so a specific function with specific arguments going in, then you can replace that invocation with its result, and your program is still identical.
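Here's a tiny sketch of that distinction in Ruby. The function names and the VAT example are made up for illustration:

```ruby
# Pure: the result depends only on the arguments. No side effects.
def add_vat(price, rate)
  price * (1 + rate)
end

# Impure: reads global state and writes to the outside world.
def add_vat!(price)
  puts "calculating..."    # side effect: output
  price * (1 + $vat_rate)  # hidden dependency: global state
end

# Referential transparency: any invocation of the pure version can be
# swapped for its result without changing the program's behavior.
total = add_vat(100, 0.21) + add_vat(100, 0.21)
total = 121.0 + 121.0  # behaves identically
```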
The behavior is still identical; the program is still correct. And again, you might think, okay, why does that matter? But actually, if we know that a function has that property, then suddenly a whole lot of things become possible. So for instance, we can memoize it. Memoizing means that you cache the result of computing the function, so that when you call it a second time, you don't actually call it. You skip that whole computation; you just take the result that you already have. If it's impure, if it has side effects, or maybe if it relies on global state, or it might return something else in the future, then you can't really memoize it, because you're not going to get the same result. Only when it's pure can you memoize it. Similarly, if you can prove that you never actually use the result that comes out of a function, then you don't have to call it at all. That's the principle of lazy evaluation. Similarly, pure functions are naturally parallelizable. Say that MRI decides that it wants to make your program faster, so it's just going to spin up a bunch of threads, and every other instruction it's just going to hand out to a separate thread, and then in the end sort of put the results together again. That might go horribly wrong. That will go horribly wrong. And the reason is that these instructions need coordination, because they're dealing with shared state. When one is updating it and the other is reading it, or the other way around, it needs to happen in a specific order; you can't just parallelize that. But when you know that the functions are pure, when you know that they are referentially transparent, then all they depend on is their input. So you distribute the input over a number of threads or whatever, let them calculate the results, aggregate the results again. All is good. And finally, and this maybe is a bit harder to sell if you haven't really experienced it, but I think you can intuitively grasp it: it does make things easy to reason about, easy to refactor, basically because everything is local. What you see is what you get. You look at the code: okay, this is going in, this is what it's doing with it. And you don't have that "oh, shit, how did this thing in this singleton end up with this value, who on earth..." and go put a tracer call there. And writing tests becomes super easy, for the same reason: okay, here's a function, set up a number of inputs, check the outputs, all good. So you kind of see that when we put these different ingredients together, interesting properties emerge. And I love this concept of emergence. It's also talked about in biology, in the natural sciences, where you have a number of small components, a number of small rules, and they don't do so much in themselves, but put together they exhibit behavior that is not in any of the parts. So take, for instance, water: you have two hydrogen atoms, one oxygen atom, put them together into a molecule, have a bunch of them, put them at a certain temperature, and suddenly you get these very intricate patterns. And the patterns are not explicitly encoded in any of these atoms. It's just how the forces interact that these interesting properties emerge. And I think it's the same thing with functional programming.
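To make the memoization point from a moment ago concrete, here's a minimal sketch: a hand-rolled, hypothetical memoize helper, which is only safe because the wrapped lambda is pure.

```ruby
# Cache results per argument list. Only valid for pure functions,
# since we assume the same arguments always produce the same result.
def memoize(fn)
  cache = {}
  ->(*args) { cache.fetch(args) { cache[args] = fn.call(*args) } }
end

slow_square = ->(x) { sleep 1; x * x }  # pretend this is expensive
fast_square = memoize(slow_square)

fast_square.call(4) #=> 16, takes a second
fast_square.call(4) #=> 16, instant: the call is replaced by its cached result
```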
And so immutable values, pure functions, and a third one, which we'll talk about in a bit, purely functional data structures: I think when you put all these together, you really start getting very interesting properties. Now, if that's not enough to convince you to give functional programming a try, there are a couple of other good reasons. And one basically goes back to what we were saying about parallelizability. So up to five years ago, ten years ago, software had this amazing property that if you just left it alone, it became faster. Those were the good days, right? Okay, it's not fast enough. Let's wait 18 months, get a new computer. Awesome. Twice as fast. Good work. That's no longer the case, or not always the case. Once chip manufacturers started going towards four gigahertz clock speeds, they found out that you actually get interference and leakage of energy, and it becomes really hard to make that economically viable. The challenges become much harder. But Moore's law is actually still in effect: they are still able to put more transistors on the same die. And so they took a different approach, and they started putting more cores in the same processor. So now, for consumer devices, we're at like four, eight cores, and it's only going to go up. And so now the question becomes: can our software really make use of that? And it turns out that a lot of traditional software can't. And so this is one of the reasons why people are getting interested in the benefits of functional programming. But is functional programming alone the answer? And to think about that question, I'm going to refer to an article from 2006, Out of the Tar Pit. Very interesting, and it's not too big; I highly recommend reading it. So they basically say: okay, we are facing a software crisis. Software projects have a tendency to go over their deadlines, over their budgets. And they believe the reason is software complexity. And then they distinguish essential and accidental complexity. So when you want to solve a certain problem, your problem domain has a certain innate complexity, and your program is never going to be simpler than that. You need just enough complexity to represent the complex stuff you're doing. But it turns out that a lot of complexity on top of that is just added by the practices, the tools, the way that we do things. And so this is considered accidental complexity: complexity that in an ideal world we would be able to avoid. And so they say functional programming goes a long way towards avoiding the problems of state-derived complexity. The main reason that you get this accidental complexity nowadays is that you're managing all this mutable state, and functional programming can help you manage that. This has very significant benefits. But also, caveat: the main weakness of functional programming is that problems arise when the system to be built must maintain state of some kind. And actually, in the original quote, it says in brackets, "as is usually the case". So very rarely do we get to write a program that does not have to do any side effects or keep any state. In the real world, yes, we do need to keep state. We do need to deal with side effects. And so Out of the Tar Pit proposes to use functional programming plus Codd's relational model of data, basically how relational databases work. So that's interesting.
That's one proposal for how to deal with this complexity. And when you look at it: Ruby is almost 20 years old, right? In computer years, that's like a millennium. And in the last 10 years, a couple of new, interesting languages have come out that many people are interested in, that are gaining followers by the day, that leading people in the community are talking about. And I put a few here. Scala is one of the oldest, 2003, and then in 2005 F# came along. And then more recently you have Clojure, and then just a couple of years ago, Elixir, from José Valim, who was quite active in the Ruby community before as well. And what all of these languages have in common is that they take strong influence from functional programming, but they're also multi-paradigm. They are not just functional programming. They combine it with aspects of other languages to hopefully give you an interesting mix of features to get your job done. So Out of the Tar Pit says: use functional programming plus the relational model. Haskell has functional programming, but it actually has this imperative sub-language that is just marked by the type system. Haskell is statically typed, and so it's immediately obvious, both to the programmer and to the system, whether a function is pure or not. And this gives the system a lot of power to reason about what your program does and to optimize. But at the same time, don't believe that Haskell doesn't allow you to write imperative code. It definitely does. Some people say that it's the best imperative language around. So they solve this with a very intricate type system. Clojure has a different take on this. They say: okay, all our data structures, the whole toolbox we give you, is based on immutability, and we assume that the functions you write are pure. Essentially it's just compiled to Java bytecode, so you can very much do imperative programming and do everything you can do in Java, but if you use the Clojure data structures, if you use their toolbox, if you write idiomatic Clojure, then you end up with pure functions, with pure functional programming. And because of that, they are able to give you some neat features, which are their reference types, which basically give you new primitives to deal with state and to deal with concurrency. So this is an interesting take, and other languages are starting to take some inspiration from that. So where does that leave Ruby? I might have to skip ahead a bit for time. Two years ago, Gary Bernhardt did this really cool talk, Boundaries, where he's basically already talking about this stuff. When you watch his screencasts, I think he also regularly pounds on this: from the inside out, your system should have a core that is purely functional, for all the reasons listed above, easy testing, easy reasoning, all the benefits it buys you. But at some place, at some point, you need to deal with I/O, you need to deal with state, and you can't do that in the pure core, so you deal with it on the outside. Because impurity is contagious: when an impure function is called by another function, the calling function is necessarily impure as well. Think about why that is, or take it from me, that's how it is. For an outer calling function to be pure, anything it calls also needs to be pure.
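A quick sketch of that contagion, with made-up function names:

```ruby
# Pure: arguments in, value out.
def total(prices)
  prices.reduce(0, :+)
end

# Impure: it writes to stdout. And any function that calls print_total
# is thereby impure too; the effect leaks upward through every caller.
def print_total(prices)
  puts "total: #{total(prices)}"
end
```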
So you can start at the center and build a functional core, a pure core, and then from around that, you call into it for your business logic, and when you get your results back, you deal with side effects and state. In particular, he proposes to marry this with actors, which you might know from Erlang or Go, and which, I've heard, Matz is apparently also interested in maybe getting into the Ruby standard library. So that would be kind of cool. But there's one elephant in the room that we need to talk about. Because I said, okay, even composite structures can be immutable. In a way, that's pretty easy to achieve: say you have an array of 100 elements and you want to add something to it, you copy the array, you add an element to the copy, and you haven't changed the old one. But that's not really a workable way to do it: it gets very expensive if you constantly have to copy everything before you can make a change. So how functional programming languages approach this is with purely functional data structures. This book from 1998, Chris Okasaki's Purely Functional Data Structures, was kind of the first to aggregate all the research on this topic, and since then a lot more research has been done. So I'm going to show how some of these data structures work, so they can give you those guarantees and those operations, but with time and space costs very similar, in the same order of magnitude, to destructive, mutate-in-place containers. The key technique is called structural sharing. So here's what I'm going to do. This tree might represent a set or a hash map; the point is that we represent it internally as a tree structure. Now I want to add an extra node under F, give it a child. So what I do is: I copy F, add the extra node to the copy of F, and then copy every node on the path from F up to the root. They look like two separate trees, where one has one node more than the other, but in memory they largely overlap. So this is (a) faster and (b) more memory-efficient than copying everything. And here I'm using a binary tree just to make this visualizable. For a tree of seven elements, I had to copy three nodes, so that's not that good. But in practice, you're going to have very wide, shallow trees. Instead of having two children per node, you're going to have, for instance, 32 children per node. Then, even for, say, 100,000 elements, you still only have to copy a handful of nodes. And with some bit-shifting, it's actually very fast to find the path to where you need to be. And so these, when done well, can perform really well. So here we have our functional special sauce that leads us into unicorn-and-rainbow land. But, yeah, I haven't said that much about Ruby yet. So, what about Ruby? So when you look at Wikipedia (oh, I cut that slide out, too bad), Wikipedia says that Ruby is an imperative, something-something, multi-paradigm, functional language. I don't know what half of that means, but it's supposed to already be kind of a functional language, in case you hadn't noticed. So Ruby definitely takes some inspiration, in particular from Lisp. The data structures, the array, the hash, kind of come from Common Lisp. And with that, we got map and reduce, so we have these higher-order functions. We also have freeze, which is going to come in really handy if we want to program more with values. And recently, relatively recently, we got lazy enumerators.
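To make structural sharing concrete before moving on, here's a toy illustration in Ruby: a hypothetical persistent cons-cell list of my own, not one of the real libraries (those use the wide trees just described).

```ruby
# "Adding" allocates one new cell; the whole tail is shared, not copied.
Cell = Struct.new(:head, :tail)

def cons(value, list)
  Cell.new(value, list).freeze
end

old_list = cons(3, cons(2, cons(1, nil)))
new_list = cons(4, old_list)

new_list.head                   #=> 4
new_list.tail.equal?(old_list)  #=> true: same object, shared in memory
old_list.head                   #=> 3: the old value is untouched
```

And the freeze and lazy enumerator bits just mentioned look like this:

```ruby
point = { x: 1, y: 2 }.freeze
point[:x] = 3
# raises RuntimeError: can't modify frozen Hash (FrozenError on newer Rubies)

(1..Float::INFINITY).lazy.map { |n| n * n }.first(3)
#=> [1, 4, 9] -- only the three needed elements are ever computed
```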
So people are still keeping an eye on that whole functional world and what's happening there. But that's essentially it. It's not a whole lot. On the other hand, you know, we've seen the main ingredients that really give you most of the benefits of functional programming. And when people think about functional programming, they think of, you know, the first slide I showed, the Church numerals, or, if you were here two years ago, apparently Jim Weirich did his Y combinator talk. This getting creative with lambdas, lambdas everywhere: that's functional programming. But really, to code in a functional way, all you need to do is start with immutable values and pure functions. And this is not in contradiction with object-oriented programming. They're complementary. So as I said before, have a core of pure domain logic and handle your state and your side effects outside of that. And some things are already values in Ruby. true, false, nil: there's only one instance of each. Numbers behave like values. Symbols behave like values. And then there are a couple of things from core and from the standard library. Strings are mutable in Ruby, love it or hate it. But at least when they came up with Pathname, which is essentially just a wrapper around a string, they learned from their mistakes and made Pathname immutable. So yay for that. But again, that's kind of the end of the story. But that doesn't stop us from doing it ourselves. And people are doing this: a quick search around RubyGems showed these 13 different gems that basically all implement some version of an immutable struct, like Ruby's Struct class, but the result is immutable. So I think that shows there is real demand for this. And I'm just going to show a couple of gems, sort of my personal favorites. These are two related gems from the same guy: Concord and Anima. They are like Struct, but with slightly different assumptions. So here I'm using Anima to make a Ukulele class. It has two properties, color and tuning. And this single line, the single include, gives me all these methods. So I get a hash-based constructor, I get equality operations, I get attribute readers, but not attribute writers, and I get a to_hash method, which is also kind of handy. And here you can see how I use it. And Concord is basically exactly the same, but instead of a hash-based constructor, you get a positional constructor. Now, this gives you attribute readers, not attribute writers, but it doesn't freeze any of the values you stick in there. So you might still end up with an object where you can't reassign an instance variable on that object, but you can change what's inside that instance variable. So it's best to combine this with another small but super useful gem, Adamantium. And so here I'm using Concord instead of Anima, but it's almost the same. I'm making a Point class, which has an x and a y, and including Adamantium. Just by including Adamantium, all my instance variables will automatically be deep-frozen. But it also gives you the opportunity to say you want to memoize certain methods. So here I'm memoizing the vector length, and I'm memoizing to_a, to turn it into an array. And so you see, when I try to shovel into this array, that's not going to work. So: small but useful gems, try them out at home.
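Roughly what those slides look like. This is a sketch based on the gems' documented usage; the attribute names are from the talk, the rest is mine, and method names may differ slightly between gem versions.

```ruby
require "anima"
require "concord"
require "adamantium"

# Anima: hash-based constructor, readers, equality, #to_h -- no writers.
class Ukulele
  include Anima.new(:color, :tuning)
end

uke = Ukulele.new(color: "brown", tuning: :gCEA)
uke.color  #=> "brown"
uke.to_h   #=> { color: "brown", tuning: :gCEA }
uke == Ukulele.new(color: "brown", tuning: :gCEA) #=> true

# Concord: positional constructor. Adamantium deep-freezes the instance
# and adds memoization; memoized return values are frozen as well.
class Point
  include Concord::Public.new(:x, :y)
  include Adamantium

  def length
    Math.sqrt(x * x + y * y)
  end
  memoize :length

  def to_a
    [x, y]
  end
  memoize :to_a
end

point = Point.new(3, 4)
point.length    #=> 5.0, computed once and cached
point.to_a << 5 # raises: the memoized array is frozen
```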
Then when you get to bigger structures, when you really want big hashes and arrays and want to treat them in a functional way, the best we have so far is Hamster, which implements this technique of wide, shallow trees to give you a hash, a vector, a set, and also a bunch of other stuff, like endless lists. There's some cool stuff in there. And it's written in pure Ruby, so under the hood it uses Ruby's arrays and hashes, which is not always great. But they have been working hard on performance, and so far, at least for MRI, it's the best we have. So a Hamster hash is a lot like a Ruby hash, except you need to write a bit more code because of syntactic limitations of Ruby, but for the rest it's not too surprising. And so you see here that I've derived a new value from the old value, but the old one is still around, unchanged. That's basically the point. Finally, there's a Clojure bridge for Ruby, a project by Charles Nutter, again from JRuby. It's probably the least mature of all of these, but it's kind of cool as a proof of concept. So as an example, it exposes Clojure's software transactional memory. You can use this in your Ruby code when you have a couple of variables, basically, that need to change in a coordinated fashion, just like you would have transactions in a database. So you can basically achieve what you would otherwise achieve with locks, but without having to manage locks yourself: you can roll back and retry if necessary. So, yeah, I'm going to skip over this code because it's a bit complicated and it's not that important, but basically I'm making two lists, filling them from ten different threads, and in the end everything's consistent. That's basically what it comes down to. One last gem I'm going to point to: if you know a bit of Haskell, or sort of that breed of lazy functional languages, one of the downsides of doing functional programming in Ruby is that, because we have optional parentheses, we don't get the benefit of distinguishing between calling a method and referring to the method object. So if you want to get the method object, you have to call .method, passing the name of the method you want. And this kind of gets in the way of really nice higher-order functional programming, but funkify does give you that; I'll show a sketch of the idea below. First, though, the Hamster usage from a moment ago:
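Both sketches below are mine, based on the libraries' documented usage rather than the slides, so check the READMEs for the exact forms.

```ruby
require "hamster"

h1 = Hamster::Hash[name: "ukulele", color: "brown"]
h2 = h1.put(:color, "blue")  # derive a new hash from the old one

h1[:color] #=> "brown" -- the old value is still around, unchanged
h2[:color] #=> "blue"
```

And since funkify's actual slide code isn't in the transcript, here is the same idea, auto-currying plus a reverse-pipe-style composition, in plain Ruby using Proc#curry, which is the mechanism the gem automates:

```ruby
def add_tax(rate, price)
  price * (1 + rate)
end

add_tax  = method(:add_tax).to_proc.curry
with_vat = add_tax.(0.21)  # fewer arguments than it takes: we get a lambda back
with_vat.(100.0)           #=> 121.0

# A hand-rolled "reverse pipe": feed the result of f into g.
# (Proc#>> does this natively, but only since Ruby 2.6, after this talk.)
compose = ->(f, g) { ->(x) { g.(f.(x)) } }
round_cents = ->(x) { x.round(2) }
compose.(with_vat, round_cents).(100.0) #=> 121.0
```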
So here I'm saying auto_curry, which means that if I call this function, this method, with fewer than its total number of arguments, what I get back is basically a lambda which takes the remaining arguments. And then I can compose those: basically functional composition, kind of like a reverse pipe. And so the result is again a lambda which composes those two operations, and then I can call that again. I think this is not where the big benefit of functional programming lies. Doing this kind of stuff can be awkward in Ruby, and maybe it's not what we should be focusing on, but it's interesting that people are exploring what we can do with this. And basically, why wouldn't we have this in core? Just as a starting point for discussion: why are methods and procs not composable? So we can just say, okay, here's a proc, here's a proc, compose them, so one comes after the other, the result of one goes into the other, stuff like that. So it's kind of a starting point for discussion. Right, and this is my final slide, where I just ramble a bit. Let me take a sip. So, after I did this talk the last time, I got the question: do you recommend that we just switch to a functional language, or should people stick to Ruby and do functional stuff in Ruby? And so I think we have three options. Option A is basically: do nothing. Just keep writing code the way we've always done. But I think the world is going to pass us by. There are lessons to be learned here, and I think we need to start paying attention. So the other options: should we start using a functional language? I think if you haven't yet, or if you haven't much, I do highly recommend it. Clojure is neat because it's dynamic, so that's kind of easy if you're afraid of very complex type systems. Or Elixir, because it looks a lot like Ruby, even though once you get to the details it's very different. So these are definitely nice languages to play around with. Or, if you want a full type system, try Haskell, and then come back and see what you've learned there. I think the danger, as we've seen with people writing these gems, is that people are already trying to do this stuff, but the ecosystem is not there, the mindshare is not there. And so we're going to start losing people that get disappointed. And so I really hope that as individuals, as a community, and also together with Ruby core, we can think about that. Matz, at the latest RubyKaigi, apparently said he has started thinking and dreaming about Ruby 3. You know, should we have persistent data structures in the standard library? Should we have functional composition in Ruby core? These are questions, and I don't have definite answers. I'm also not saying that any of the code that you've seen here, or any of the code in Happy Lambda, is how things should be written. But I really hope that we can start this discussion and together figure out how we can take this awesome community and not lose it, but take it into the future. So, that's it. Thank you very much. We have time for one question, if anybody does have a question. One question: Aaron warned us this morning not to allocate too many objects. Aren't you afraid that if you do this in Ruby, you will allocate a lot of objects and make the program very slow? Yeah, so that's a good question, and it's sort of the typical first reaction: what about performance? I think at a first level, yes, you tend to allocate more and garbage-collect more, but that's only on the surface level. I think when you really start using this in earnest, you're going to
start writing programs in a different way. And actually, by making good use of persistent data structures, you can get much more out of them. We saw that the command pattern is used to implement undo and redo; if you use persistent data structures to represent your full state, then you just get that for free. You get history for free. And depending on the type of application you're writing, I think the optimizations that become possible under the hood might actually surpass the short-term win of small optimizations. So I'm not saying that it will be faster, and I think in many cases it might even be slower. But then: measure, and ask if you really need the speed. It's a very nuanced answer; it depends on your application. That is all we've got time for questions, so thanks for watching. Thank you.