Hi everybody. So today I'm going to be discussing the controversial topic of performance optimization in Ruby. And this isn't a common topic of discussion amongst Rubyists. We'd rather talk about how to make code more beautiful or elegant or readable or testable. These are the things that most Rubyists talk about. The superpowers are really coming in handy this week. I think optimizing for speed has a bad reputation, and it has for a long time. In 1974, Donald Knuth, the grandfather of algorithm analysis, declared that premature optimization is the root of all evil. I suspect most of you are familiar with this quote. The wisdom behind it is that optimizations have a tendency to make our code uglier, more complex, harder to read. And so he says we should only do optimizations when the time is right. So how do we know when it's the right time to optimize? Knuth laid out a couple of basic criteria for when to make optimizations. He said we should look at other, more established fields of engineering. Software was a very new field in 1974. And so he looked around and said: in other established fields of engineering, if you can easily obtain a 12% improvement, then that's worth doing. So it needs to be easily obtained and it needs to be significant, and he set that bar at 12% by looking at other engineering fields. And I would add a third criterion, from Matz, which is that performance optimizations should be fun. They should make you happy. A couple of years ago, Katrina Owen gave a talk on therapeutic refactoring. And I think of performance optimization in much the same way. For me, optimizing performance has this therapeutic effect. And it's very similar, right? Before you refactor code, you want to make sure it's thoroughly tested, because that way, when you're changing things, you can be confident that nothing is broken. Same with performance optimizations. And most programmers love to optimize.
Maybe even a little bit too much. And so it's okay that Ruby is slow. It just means that we have more opportunities for happy little optimizations. So before we get into what these optimizations are, how to make your Ruby code fast, I want to talk about the different layers of optimization, the different levels. And there are five. They're not the only five; maybe there are a few in between them, or lower level. But this will give you a sense of the landscape, I hope. Typically, design optimizations are the ones that have the biggest impact. These are the highest-level optimizations. If you switch to a fundamentally better algorithm or a fundamentally better architecture, you can dramatically improve performance. Replacing an N+1 query with a single query, right? We see this all the time. And Piotr's example yesterday of the clever prime number calculator: that was actually an improved algorithm. It was a design change to the code that made it faster. But these types of optimizations typically have nothing to do with the Ruby language. They're computer science optimizations. We can learn about better algorithms, we can learn about better architectures, but they're not specific to Ruby. So that's not what I'm going to be talking about today. What I'm going to be talking about today is source code optimizations: changes that improve the performance of the code. Syntactic changes, and specifically changes that apply to Ruby syntax. I'll be sharing many examples later on. Going one level down from source code optimizations are build optimizations. You're probably familiar with these. Maybe you've run ./configure before running make, or you've compiled Ruby with the -O flag. These are build-time optimizations. You do it once, and you're optimizing either for the specific architecture that you're building the code on.
Or you're making optimizations for a specific use case. But it's kind of a one-time optimization. Other languages, specifically compiled languages, can make optimizations at compile time using an ahead-of-time compiler. And of course these optimizations are not possible in Ruby. We can't make compile-time optimizations because Ruby does not have an ahead-of-time compiler. Or does it? Matz talked yesterday about mruby, and mruby actually does have an ahead-of-time compiler. As does JRuby. As does Rubinius. So MRI is actually the only major implementation of Ruby without an ahead-of-time compiler, without a bytecode compiler. And I think that's a shame, because there are many optimizations that could be made by the compiler, not least of which is just parsing the code in advance. Think about how much of the startup cost of a Rails application is just parsing time: parsing through all the files that are getting required and figuring out what bytecode to generate from them. That alone would be a huge win from an ahead-of-time compiler. And then obviously there are runtime optimizations. I'm not going to be talking about these either. These are typically things that Matz or Koichi work on, people on the Ruby core team trying to make Ruby faster. And with each new version of Ruby, recently, we've been getting about a five or ten percent performance improvement, which is great. But we don't have to do anything to benefit from these. They're just runtime optimizations. There are also some runtime environment variables you can set. Maybe, if you've worked on really high-scale applications, you've played around with environment variables like RUBY_GC_MALLOC_LIMIT or something like that. And that's great. So yeah, it's good to know about runtime optimization, but typically we're not actually improving the runtime ourselves. Other people are doing that for us.
So going back to Knuth, I think it's important to look at the context around his famous quote. The reason he believed premature optimizations were evil is that programmers have terrible intuition about how to optimize. Despite the fact that the job of a programmer is basically to think like a computer, we're very bad at guessing what will be fast and what will be slow, right? So how do you know? The answer is: you measure. You have to measure and benchmark. So I'm going to give a quick introduction to benchmarking and my methodology for this talk, for the beginners in this room. Maybe you'll learn something about how to benchmark, and the more advanced programmers can check my methodology to make sure everything's right. So this is the Benchmark library that ships as part of the Ruby standard library, so that's sort of cool, that Ruby builds in benchmarking for you. The problem is that if you want an accurate benchmark, you can't just run the code once, right? In this case, I've defined two methods, one called fast and one called slow, and I say I want to run them 50 times each. But maybe this takes zero microseconds or something like that, in both cases, and I can't really see the difference. So maybe I'll take my n, the number of times I'm running each method, and change it from 50 to 50,000. And now the benchmark takes too long to run, right? I'm standing around waiting for five minutes or five hours for the benchmark to finish. The solution to this problem is a gem by Evan Phoenix called benchmark-ips. It basically takes the guesswork out of it, right? The whole problem Knuth was stating is that we have bad intuition about these numbers, about how fast things run, about what's fast and what's slow. benchmark-ips takes the guesswork out of it. The interface is very similar.
It's a gem, so you can say gem install benchmark-ips, then require it and use it in a very similar fashion. It basically flips the problem around. You don't have to tell it how many iterations you want. What it does is run your code for a fixed amount of time, normally five seconds, and then it tells you how many iterations it was able to execute in that amount of time. It also tells you the standard deviation and some other nice things as well. And because it's measuring iterations per second, bigger is actually better. It's very clear which one wins, because the bigger number is better. I sort of like that about it as well. Okay. So just to recap, now we have some goals: can we make optimizations at the source code level that give us at least a 12% performance improvement, without sacrificing readability, and that make us happy? Fundamentally, we optimize performance by making the computer do less. So optimizing our program often results in simpler programs. In all the examples I'm about to show, the code is not only faster, it's actually simpler in a way. Okay. So the first example. This is a method that takes a block and calls it right away. It doesn't do anything else. Maybe it does some other things, but fundamentally the thing we want to test is: if you have a method that takes a block and calls that block, not passing that block to another method, just calling it, how much faster is specifying the block as a parameter to the method and then using block.call, versus yielding to the block? Because every method takes an implicit block, you don't have to be explicit about it. If you don't want to give it a name, if you're not passing it to anything, you can just yield. And maybe you want a safety check; you could say return unless block_given? or something like that, right?
You might want to have some safety around this. But fundamentally, when we're benchmarking, this is all we care about: what's the difference in cost between block.call and just yielding to the implicit block? Any guesses? Do you think this is 12% faster, above Knuth's threshold of 12%? Is it 20% faster? Is it 50% faster? Guesses? 90% faster? It's actually five times faster. My Ruby Hero superpower is fire, by the way, so I've included a lot of fire in these slides. Yeah, it's five times faster. And again, it's not only faster. The code is actually simpler. It's less code, it's fewer characters, it's less typing, it looks nicer, it's easier to read, right? And it also happens to be five times faster. Did the fire effect work? Oh, come on. It's supposed to be fire. Maybe it'll work in future. Do I have to click? No. Okay. Maybe it'll work next time. Okay. So why is it faster? That's a question worth asking, right? It seems like it's doing the same thing; why is it so much faster? The code on the left, with the explicit block, is equivalent to literally calling Proc.new. It's taking your block and turning it into a proc, instantiating a new Proc object. So you're allocating memory, creating a new object, and then calling it, whereas yield just calls the block directly. So again, it's much simpler. And this is not an example I just pulled out of thin air. This is an example that lives in your production code. This is production code that I recently patched that was five times slower than it needed to be. And you can say, okay, maybe this isn't in a tight loop or something like that; how many times is this code actually getting called? But my point would be: why should it be five times slower than it needs to be, for code that is actually worse? Code that's harder to read? Code that's uglier? Shouldn't we have all the benefits of simpler code, including the fact that it's five times faster, right?
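To make this concrete, here's a minimal sketch of the comparison. I've used the stdlib Benchmark module so it runs anywhere; the benchmark-ips block interface looks almost identical, and the method names here are mine, not from the slide.

```ruby
require "benchmark"

# Explicit block parameter: Ruby materializes the block as a Proc object,
# which costs an object allocation on every call.
def call_block(&block)
  block.call
end

# Implicit block with yield: no Proc object is ever created.
def yield_block
  yield
end

N = 1_000_000
Benchmark.bm(12) do |x|
  x.report("block.call:") { N.times { call_block { :work } } }
  x.report("yield:")      { N.times { yield_block { :work } } }
end
```

Both methods return the block's value, so they are interchangeable; only the hidden allocation differs.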
And I do a lot of framework development, a lot of library development. When you're working on frameworks, you're not consuming your own code. You don't know how other people are going to use it. You know how you use it, and you know how you run your code, but you don't know how other people will. So maybe you have some method where you're doing something like this and you think, oh, it doesn't matter, I only call this code once, right? It's only invoked once by me, the consumer. But when you take that code and factor it out into a library and release that library to the world, you don't know how people are going to use it. Maybe people are calling that code in a tight loop, and in that case they're actually going to feel that 5x performance difference. It's actually going to make a difference, right? So that's why I think it's important to be aware of these types of changes. Another example. Oops. Cool, the fireworks are on the next one. Okay. So what have we learned? We've learned that blocks are always faster than procs, right? Wrong. In this case, we have a range from 1 to 100, and we're converting those Fixnums into strings in a block. And there's a little shorthand for doing that called Symbol#to_proc. And Symbol#to_proc is actually faster, by 20%. The history of this is quite interesting. This syntax was not always part of Ruby. It was added in Ruby 1.9 and then backported to Ruby 1.8. But it came out of Rails; it came out of Active Support. Active Support developed this nice syntax, people were using it, they liked it, and it made its way into Ruby itself. And what's interesting is that people liked the syntax, it was easy to use, everyone agreed it was easier to read, but it was slow. The Active Support implementation, the pure-Ruby implementation of this, was actually slow.
So even though Active Support exposed this function and let you use it, Rails internally rewrote all of its instances of Symbol#to_proc into block syntax, for performance reasons. They were micro-optimizing by using the block syntax. But then, after this became part of Ruby's syntax, those optimizations were built into the language. So once Symbol#to_proc was out of Active Support and part of Ruby, it actually became faster. But Rails never went back and updated the instances they had converted from Symbol#to_proc, the thing they invented, into block syntax. They never converted them back, even though Symbol#to_proc was now faster. So I recently submitted a pull request to Rails. It is still open. It has been +1'd, and I think it will be merged as soon as 4.2 is released; 4.2 is in beta now and they don't want to make any changes to it. And this is a big change. If you look in the corner, it's changing 150 lines of code: 150 instances of using blocks when you could actually use Symbol#to_proc. In some cases it actually reduces the number of lines of code. I have this example of the finalize method. This is in the router, specifically the route reloader, and finalize goes from being a three-line method to being a one-line method using Symbol#to_proc. So we're simplifying, reducing the number of lines of code, in 150 places in Rails code that we all use. Some of this is Active Support code, right? So it very likely could be in a tight loop. It could be code that you're using. And Rails invented this whole concept of Symbol#to_proc, but they weren't using it, for performance reasons that were true back in the day. It's no longer true, but it's only getting updated now. This pull request has not been merged yet, so maybe Rails 4.3 or 5.0 will finally get these advantages.
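The two spellings look like this; this is my reconstruction of the kind of example on the slide, a small range mapped to strings, and the results are identical either way:

```ruby
numbers = (1..100).to_a

# Explicit block form, what Rails rewrote everything into years ago.
with_block = numbers.map { |n| n.to_s }

# Symbol#to_proc shorthand, shorter to type and now faster in MRI.
with_to_proc = numbers.map(&:to_s)

with_block == with_to_proc  # => true
```

The `&:to_s` form expands to a proc that sends `to_s` to each element, so any argument-less method call in a one-line block can be rewritten this way.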
So this one is, again, pretty surprising to me. You think: okay, obviously I should use flat_map, right? Why would I do a map and then flatten(1)? That's exactly what flat_map does. That's why flat_map exists, right? For those who don't know, flat_map is a method that does a map, whatever you do in the middle, and then flattens the result. This is a very useful method. In functional programming it's one of the core methods people use, and Ruby has it as well. And there's actually an optimization here, right? When you do a map and then a flatten, you're doing two different passes through the array, or whatever the enumerable is. First you go through and map, and then, once you finish mapping, you go through the result and flatten it. flat_map is basically an optimization where you don't have to do it in two passes. You can both map and flatten in one step. Using flat_map is four and a half times faster than two passes. And again, this is just a simple example, and it depends a lot on what you're doing. But again, in Rails. So this is like, why wouldn't everyone just use flat_map? It's in the language. Everybody knows about flat_map. This is the Rails source code. And there were five instances of map followed by flatten(1) being used instead of flat_map. For no reason: this code is not harder to read. If anything, I think it's easier to read, it's more concise, it's more expressive, and it's also four and a half times faster. Oh, and if you submit these pull requests, you also get the benefit of José Valim giving you five heart comments, which also makes me happy. Okay. So this got me thinking. I was curious to see the actual flat_map implementation, right? You're only doing one pass. It's more efficient.
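The equivalence is easy to check; the word-splitting data here is a hypothetical stand-in, not the slide's exact code, but the shape is the same:

```ruby
words = ["a b", "c d", "e f"]

# Two passes: walk the array once to map, then again to flatten.
two_passes = words.map { |w| w.split }.flatten(1)

# One pass: flat_map maps and flattens in a single traversal.
one_pass = words.flat_map { |w| w.split }

two_passes == one_pass  # => true
```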
So what does that one pass look like? I opened up the Rubinius source code to see how this optimization actually works, how flat_map works under the hood, and to my surprise, the implementation of flat_map in Rubinius is: first you map, and then you flatten(1). And I bring this up, calling out these specific projects, Rails and Rubinius, not to shame these people publicly, although I guess I'm doing that. Sorry, Brian. Great talk. I do it because I really believe Brian knows more about Ruby, and Ruby optimization specifically, than almost anyone in this room. And he wrote this code. He made this mistake. And I suspect that if you go through your own code looking for examples like this, you will find many similar ones, places where you're doing two passes when you could be doing one, or something like that. This is actually a design optimization, but I mention it here as a follow-on to the source code optimization because I think it's relevant and interesting. And here, yeah, I'm actually changing the algorithm. I'm doing one pass instead of two. And the code actually isn't shorter, right? You could argue that maybe it's harder to read, but it's low-level code. It's code that's inside Rubinius. So with these kinds of design optimizations, maybe you don't achieve the goal of code that's easier to read at the end, but hopefully it's low-level code and it's not going to change very much. So this is the first of a bunch of examples related to mutability. In Ruby we have these methods that end in an exclamation point, bang methods, as they're sometimes called. There are two different versions of a method: one that will basically dup the object, copy it, and perform the mutation on the copy, versus the exclamation point version, which performs the mutation on the actual object you call the method on.
And in this case, every time you go through this enumerable, maybe an array with 100 elements, then 100 times you're duping h, whatever h is. And because you're merging things into h, h keeps growing and growing with each iteration. So every single time, you're making h bigger and then making a copy of it, over and over and over again. Think of all the allocations you're doing there, right? If n is 100 here, if the size of the enum is 100 elements, you're doing 100 hash allocations, and then you're just returning the result. You don't need those copies, or at least some of the time you don't. Some of the time it's okay to modify the actual object. The object is just temporary in this case, right? It's not like h existed before the block; it's a block variable. So here we're just mutating the existing memory that we already have. You don't do this in functional languages, but it's much faster. One of the benefits we have in Ruby is that we can actually overwrite objects. Things are mutable: arrays are mutable, hashes are mutable, strings are mutable, and we can use that to our advantage, right? We can use it for a performance optimization. So using merge!, merge with the exclamation point, here is significantly faster. Three times, actually. And again, this is going to vary based on the size of your enum; with a bigger enum, the copying version gets much worse. But in this example, in my benchmark, I had an enum of size 100 or something reasonable. It wasn't crazy, and just with a reasonable-size enum you get 3x performance. It might have even been 10. And so you think, okay, that's good, now I'm always going to use merge! in this situation. It turns out you can actually do better than that.
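Here's a minimal sketch of the shape being discussed, assuming an inject-style reduction (the slide's exact code may differ): the non-bang merge copies the ever-growing accumulator hash on every iteration, while merge! mutates the same hash in place.

```ruby
enum = (1..100).to_a

# merge allocates a fresh copy of h on each of the 100 iterations.
copying = enum.inject({}) { |h, e| h.merge(e => e * 2) }

# merge! mutates h in place; it returns h, so the accumulator
# threads through the reduction unchanged.
mutating = enum.inject({}) { |h, e| h.merge!(e => e * 2) }

copying == mutating  # => true
```

Mutation is safe here because h is block-local and never escapes mid-reduction; nothing else holds a reference to it.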
So in this case, okay, the slow example here was the fast example on the previous slide. What's the optimization we're doing? Well, before we do the merge, we're allocating a new hash, right? And then we're immediately merging that hash into the existing hash, into h. And we don't need to do that. We can just set a specific value for a specific key, without creating a hash, without allocating another object. And if you do that, you get another two-times speedup. So this implementation is actually six times faster, six and a half times faster in fact, than the original implementation using merge without the exclamation point. So again, it's pretty significant. Okay: fetching a value out of a hash. This is something we do all the time, and most of the time we use the square brackets. But in some cases we use the fetch method, and one of the nice things about fetch is that you can provide a default value. It takes a second argument that lets you say: if the thing is not there, then return this default. And you can pass that either as a second argument to the method, or as a block. And you might think, oh, maybe the block implementation will be slower, right? Because you're making a block, maybe instantiating a proc or something. But the block implementation, in most cases, specifically in the case where the key is found, will be significantly faster, because that block is only called when the key is not found. So that (0..9).to_a, allocating an array of ten Fixnums from 0 to 9, only happens if the key :bar is not found. And in this case, the key :bar is found. So when you pass the default as a block instead of as the second argument, it's lazy, right? The block will only get called if the element :bar is not found.
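A sketch of the two fetch forms; the :bar key and the (0..9).to_a default follow the example being described:

```ruby
hash = { bar: :found }

# Argument form: the default array is allocated on every call,
# even when the key is present and the default is thrown away.
eager = hash.fetch(:bar, (0..9).to_a)

# Block form: the block only runs on a miss, so nothing is
# allocated here, because :bar is found.
lazy = hash.fetch(:bar) { (0..9).to_a }

# The block does run when the key is missing.
missing = hash.fetch(:baz) { (0..9).to_a }
```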
If the element is found, you don't need to allocate that memory. You don't need to make that array, right? But if you pass it as an argument, then the array gets allocated in advance and passed into the fetch method. So again, it's a 2x performance improvement, and again, it depends very heavily on what's in that block or what the second argument is. But if you have something that's computationally expensive, you should be aware of this, right? You can debate which one is easier to read; I think it's six of one, half a dozen of the other. But if you have something very expensive as your second argument here, you shouldn't pay for it every single time you fetch. You should only pay for it when you need to. So, I see this one all the time. This is actually one of the most common ones I see. For some reason, Ruby programmers love to put the letter g before the sub method. I don't know why that is. And a lot of times you only need to do one substitution. In this example, and there are many like it, you're substituting https for http. And typically you only need to do it once, right? The protocol only appears once in the URI, and it appears at the very beginning; it should be the first seven characters of the string. So once you find it and replace it, you don't need to keep scanning the string, looking to globally replace more instances of http://. You're done. You replaced it. Move on with your life. And in this case, it's also shorter, right? It's nice. You're saying what you do. Maybe http appears somewhere else in the URL, right? Maybe there's a parameter that has a URL in it. You don't want to replace it there. You just want to rewrite the base URL. So yeah, in this case, sub is better in every way. And this depends heavily on the length of your string. Here I used baruco.org because it fits on the slide in a nice big font.
And because it's the website of this lovely conference. But it's a relatively short string. You wouldn't think it would make that big of a difference. Maybe if the string were 100 characters, you'd be paying a big penalty for scanning through the rest of it. But even with a really short string like baruco.org, it's 50% faster, right? And that's only going to go up as the string gets longer, as the URL gets longer. Okay, so I just told you about the virtues of sub over gsub. There's also a method called tr. And I often see Ruby programmers using gsub when tr would work just as well, if not better. It's a shorter method, and it doesn't do the exact same thing, but in some cases you can use it as a replacement. tr, short for translate, basically takes characters on the left side and replaces them with characters on the right. In this case, maybe you'd see this code in a method that takes the title of a blog post or something like that and turns it into a slug, and we want to replace the spaces with underscores. Maybe downcase and do some other things as well. Here, if you use tr instead of gsub, it's dramatically faster: five times faster. Why not use it? It's shorter, it's faster, it's less typing, it's more elegant, it's more expressive of what you're doing. And it's five times faster. Parallel versus sequential assignment. So the code on the left is shorter, right? It's fewer lines of code. And it's a very short example; I'm just assigning a and b to 1 and 2. But I personally find parallel assignment very hard to read, especially when the variable names are long and the values on the right being assigned are also long and complex. You have these two different expressions with a comma, and an equals sign in the middle, and two other things, and you have to figure out what maps to what.
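The string examples above can be sketched like this; the query-string URL and the slug example are hypothetical reconstructions of the slides:

```ruby
url = "http://baruco.org/?next=http://example.com"

# sub stops after the first match; gsub keeps scanning and would
# also rewrite the URL embedded in the query string.
url.sub("http://", "https://")   # => "https://baruco.org/?next=http://example.com"
url.gsub("http://", "https://")  # => "https://baruco.org/?next=https://example.com"

# tr translates characters one-for-one with no regex machinery.
title = "writing fast ruby"
title.tr(" ", "_")               # => "writing_fast_ruby"
```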
And I think the one place where using parallel assignment is justifiable in Ruby is when you're swapping values. If you want to say a, b = b, a, fine. That's great. It saves you from creating a temp variable, and because of that, it actually negates the performance penalty. But if you care about performance and readability, I would say don't use parallel assignment, except in this one case. And you get a 40% speedup. Next: I think this is widely recognized as a Ruby best practice, not using exceptions for control flow. There are a lot of reasons why it's a bad thing to do. In fact, Avdi Grimm wrote a book about it called Exceptional Ruby, which I'd encourage you to read, not just for this, but for many other things. He makes this point: you should not use exceptions for control flow. One of the many reasons is that it's much slower, right? You're raising an exception, with everything that happens internally in Ruby when you do that, and then you have to rescue it, and it's terrible. With this code, you can just check whether the method exists first. And again, you can't always do this. Some exceptions you need to rescue, or there's no other way around them. But in cases like this, where you're expecting a NoMethodError, where you're rescuing a NoMethodError, we have something in Ruby to handle that. It's called respond_to?. It tells you in advance whether the object responds; check for that, and do your handling that way. There are many other cases like this, and it's over 10 times faster. So, a really significant performance improvement. I see things like this all the time in Ruby: a long line that ends with something like rescue nil, or rescue an empty array, or something like that. And if you're hitting that exception and that rescue frequently, you're paying a huge cost for it.
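A minimal sketch of the two styles, using a hypothetical name lookup rather than the slide's exact code:

```ruby
obj = Object.new

# Exception-based control flow: every miss pays for raise plus rescue.
def fetch_name_with_rescue(obj)
  obj.name
rescue NoMethodError
  "anonymous"
end

# Checking first skips the exception machinery entirely.
def fetch_name_with_check(obj)
  obj.respond_to?(:name) ? obj.name : "anonymous"
end

fetch_name_with_rescue(obj)  # => "anonymous"
fetch_name_with_check(obj)   # => "anonymous"
```

Both return the same value either way; only the miss path differs, and the respond_to? version avoids building an exception object and unwinding the stack on every miss.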
Another example: arrays with each versus while loops. If you call each_with_index on some array, it's actually significantly slower than doing a while loop. And you can debate which one's easier to read. But in cases where you really care about performance and you really want to optimize, it's 80% faster to use a while loop than each_with_index. Okay. So we've gone through a bunch of examples of how to optimize performance in Ruby. These are all basically very simple things you can do that not only make the code significantly faster, more than 12%, more than Knuth's threshold, they also make your code significantly simpler, easier to read, and, I would argue, better. So what does this mean? Where does this leave us? I hope I've given you some ideas for how to benchmark and optimize your code. But not all optimizations come in this form. There are some optimizations that actually make your code uglier and harder to read and arguably worse, but they make it faster, right? And I think those optimizations, source code optimizations specifically, could be done by a Ruby ahead-of-time bytecode compiler, similar to the one we have in mruby, similar to the one we have in JRuby, similar to the one we have in Rubinius. I think it would be great if we had one for MRI. And if we had that, it would let us focus less on these low-level source code optimizations and more on solving higher-level, more interesting design problems and architecture problems. So my wish for the future would be that Ruby gets an ahead-of-time compiler, so we don't have to think about this stuff and it just does it for us. Oh, there are some credits. Should I roll the credits? Okay, so, tada. These slides were beautifully illustrated by Rebecca Green. I think it would have been a much less interesting presentation without her.
So many thanks to her. Also Don Knuth, the grandfather of algorithm analysis, who obviously contributed to this talk. And of course Matz, who is played today by Aaron Patterson's cat, Gorby Puff. If you want, Matz can take a photo. Special thanks to Aaron Patterson for letting me use his cat. Also to the Ruby Rogues Parley mailing list. I submitted this as an idea for a talk months ago, submitted all these examples, and got amazing feedback from members of the community there. The one person I would call out, who gave really great feedback, who does amazing work in performance optimization, and some of whose ideas I stole for this talk, he gave a similar talk at Golden Gate Ruby Conference last year, is Sam Saffron. So yeah, he's doing an amazing job improving performance in the Ruby community. Thanks to him. He has a great blog as well. Aman Gupta has also done some great work on optimizing recent versions of Ruby, specifically 2.1 and 2.2, which will be released soon. Yeah, Don Knuth, Matz, thank you, Koichi-san, and the organizers of this conference, for giving me the opportunity to speak here today. Thanks. There are also a couple of outtakes: some illustrations that I couldn't fit in but wanted to include anyway. So this is one of them. Great. Okay, that's it. Thanks.

Great presentation, and great job by Rebecca. I'd love to see the slides again. Where you talk about Hash#fetch, isn't it even faster if you just use double pipe, ||? Did you think about that, or did you do benchmarks on that? The question was about Hash#fetch. Could we have the slides back up, please? In the case that you said is faster, you still have to create the block, whereas if you use double pipe, that's not the case. Thanks. Yeah. Yeah, that's a good idea. What's that? It doesn't work for nil and false. That's right.
But in cases where you're confident you won't have nil or false, maybe because you're sure there will be a null object or something instead, then yeah, you could do that, and I suspect it would be faster. But I shouldn't trust my intuition; I should benchmark. Thanks. I will do that afterwards. There's another question down here, and then this one over here afterwards. Lovely presentation. Do you know if there have been any experiments with adding annotations for inlining, so that you can have shorter methods and say "inline these if possible"? Are any of the Ruby implementations doing that? I'm pretty sure I've heard Charlie and Tom on the JRuby team talk about doing this, and about specific optimizations for the JVM. I'm not sure; Matz, maybe you could answer whether MRI does any sort of runtime inlining or anything like that. Not yet, but they're looking into it, it sounds like. Yeah, but this would be great. The real point I want to drive home from this talk is that we really shouldn't have to be worrying about these things. Ideally, a sufficiently smart compiler would be able to make all these optimizations for us, and we could write the code however we want. In these cases, the code happens to be simpler and more beautiful, and so, yeah, it's also faster. Let's just write it that way; you have no excuse not to. But there are some cases where the inlined or unwound code is actually not as readable, not as concise, not as beautiful, and because we're Rubyists, we care about this sort of stuff. I care about this sort of stuff, right? So there are trade-offs, and where there are trade-offs, yeah, it would be great if the compiler could optimize it for us. I think I'm here. Hi, can you open the slide with the Hash#merge? Which one? There were a couple. Yeah, before, I think. Yeah, oh, yeah. No, not this one. All right. Yeah, so basically, I was thinking about the previous one. Anyway, there are these two operations. One is mutating the object.
The second one isn't. Yeah, let me go back. So basically, the faster version, which mutates the object, is less safe because it can cause some nasty bugs, right? So maybe... No, I would actually argue that it can't, because if you look at where the hash is getting created, it's local to the block. It's block-local. Oh, yeah. Okay, well, in this case, yeah, but generally, comparing these two methods... Yeah, I mean, if you want me to say that mutability is potentially more dangerous: all right, yeah. So, anyway, maybe you've heard about the gem called Hamster? I haven't. Tell me about it. This gem provides immutable, persistent data structures for Ruby, inspired by, for example, Clojure's. And this can be a solution in some cases to have quite well-performing... You know, safe and well-performing, yeah. Yeah, I think it's worth benchmarking too. I'll do that. Christian, it's not prepared. On the each_with_index and while loop: is there anything inherent that means each_with_index has to be slower? Because I really like the each_with_index syntax much better in most cases. Yeah, I do, too, actually. I almost didn't... Now I'm paying the price for all those fire animations. Lesson learned: fire, not such a good superpower after all. Okay. Yeah, I agree. I mean, clearly it's fewer lines of code and nicer. So yeah, this is sort of my point. I included that sample, it was the last example, and I included it at the end basically as a case in point that optimizations aren't always simpler. They're not always more concise; it's not always less code. And I would like to have an ahead-of-time compiler for Ruby, or even a just-in-time compiler for Ruby, that could make these optimizations for me so that I didn't have to. Then these two would be equal, and I could write it any way I want. Any more questions? There's one up there. Probably time for one more if anyone else has one after this.
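The merge versus merge! exchange can be made concrete. This is my own sketch of the pattern being discussed, not the actual slide: the variable names are assumptions, but it shows why mutation is safe when the hash is created inside the reduce and never escapes.

```ruby
pairs = [[:a, 1], [:b, 2], [:c, 3]]

# Non-mutating: Hash#merge allocates a brand-new hash on
# every iteration, which is slower but never aliases state.
slow = pairs.inject({}) do |hash, (key, value)|
  hash.merge(key => value)
end

# Mutating: Hash#merge! updates the accumulator in place.
# Safe here because the hash is block-local to the inject:
# nothing outside can observe the intermediate mutations.
fast = pairs.inject({}) do |hash, (key, value)|
  hash.merge!(key => value)
end
```

Both produce the same hash; the trade-off only becomes dangerous when the mutated object is shared outside the block.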
Now, this question is about control flow. Do you have any opinions, or even data, about using throw and catch for control flow in Ruby? You know that you can throw a symbol and catch it somewhere else. I don't have opinions about that, but I would be happy to benchmark it and tell you what the results are. Yeah, that would be cool. Thank you. Do you know anything about that, or is there a reason why you're asking? No, I don't. I think it's kind of a weird and not-so-often-used feature in Ruby, but I think it's an interesting one, nevertheless. I coded in Ruby a long time before I realized it was there. Yep, it's a cool feature. Worth benchmarking. I suspect it's not fast, but it's worth benchmarking. One more question? Is that a hand at the back, or is someone stretching? I can't quite tell. I'm going to say it's a hand. He's got to ask a question anyway, though. It was him stretching. Brilliant. Great. Does anyone actually have a question, or is everyone just going to stretch and troll us? Great. All right, let's take a break. We're back in half an hour. If you spread yourselves out between the top and bottom again, let's give Eric one last round of applause.
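For readers unfamiliar with the throw/catch feature the questioner mentions, here is a minimal sketch of my own (the method and symbol names are illustrative). Kernel#throw unwinds the stack to the matching Kernel#catch without raising an exception, which makes it handy for escaping deeply nested loops; its performance would still need benchmarking, as discussed.

```ruby
# Find the first negative value in a nested array, bailing out
# of both loops at once via throw/catch.
def first_negative(matrix)
  catch(:found) do
    matrix.each do |row|
      row.each do |value|
        # throw's second argument becomes catch's return value
        throw(:found, value) if value.negative?
      end
    end
    nil # reached only if nothing was thrown
  end
end

first_negative([[1, 2], [3, -4], [5, 6]]) # => -4
first_negative([[1, 2], [3, 4]])          # => nil
```

Unlike raise/rescue, no exception object is built, so throw/catch is purely a control-flow jump, though whether it beats plain loop-with-break in practice is exactly the kind of thing the speaker suggests benchmarking.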