Happy Thursday, everybody. Welcome to RubyConf 10. I'm honored to be here, and I'm honored to be yet again the first person to welcome you to RubyConf. It's great to open this. I love that it's RubyConf X, because I think of it as the extreme RubyConf. My name is Aaron Patterson. I work for a company called AT&T Interactive, and I am paid to work on open source software every day. We use a lot of Rails at work, so mainly I work on Rails, because if I improve Rails then hopefully the applications within our company will improve. That is what I do. My Twitter handle is tenderlove, my email address is this, and if you have a phone that can read QR codes, you can get my vCard here. I also want to mention that I'm using a Wiimote to control my machine. I took this trick from Yugui at RubyKaigi, so I encourage all of you to attend RubyKaigi next year, as I hear it's the last one. Is that right? Yes, the last RubyKaigi. So please go. I'm a Ruby committer, and I'm also a Rails committer, and people ask me: how did I become a committer to both projects? I want to tell you how, so that you all can do it too, because I think more participation is better. For Ruby, the process was very simple. Basically, I went and saw Matz, I got in real close, I said hello, and I went straight in for the kiss. The same approach can also work with Rails committers. I put this slide in because every time I give a talk I'm actually insanely nervous. I'm very nervous up here, and a friend of mine told me: you know, when you're on stage, just think, what would Freddie Mercury do? So I put this up here to remind myself to think about that and just calm down. So, who's doing the RubyConf 5K tomorrow? A few people. Okay, awesome.
I signed up for it, but I didn't know what it meant, so I just read it as, you know, RubyConf 5,000. For this horrible mistake I immediately started a rigorous diet of these. These look like corn dogs, but rather than corn dogs it's actually a Jimmy Dean sausage wrapped in a pancake. I just started eating those, and I trained very, very hard, and I took a video of the training that I want to share with you. So I'm going to dim the lights up here so we can see a little bit better. Oh my god. We're going to talk about performance today, and I want to do that in story form, using the story as our motivation through this process. We're going to look at some tips and tricks for performance in your Ruby code, and I want to motivate it along the way with real-world examples of things that I actually used inside of Arel. So we'll talk about theory and tools, but we'll also apply them. And a little bit about Arel: what is it? When I encountered the project, it was a relational algebra library, and I didn't know what that meant. So I looked it up and read about relational algebra. I studied it, and even after I read all that stuff, I knew what relational algebra was, but I didn't understand how it related to Ruby, necessarily. What I found was that the main purpose of this library was SQL generation.
So I rewrote it, and I'm going to describe to you what the library is today, and as we go through this presentation we'll examine what it used to be and how it got to the point where it is now. Today, all it does is AST manipulation. What that means is that it contains a tree data structure, and it knows how to manipulate that tree data structure. That's all it does. There's one other component: AST translation. It takes that tree data structure and can translate it into something else; it walks the AST and turns it into something else. The main thing it accomplishes now is turning this AST into a SQL statement, but it's not limited to SQL statements. Right now the only translators we have generate SQL statements, but you can traverse this tree and emit anything you want, really. We're going to talk about its relationship with Rails, and I apologize for mentioning Rails so much at a Ruby conference, but I work on this a lot, so we'll talk about it. The way it's related to Rails is this: when you make calls into ActiveRecord, you may make several calls, like where, select, whatever. You call down into ActiveRecord, and eventually, at some point, you actually need records from the database, so you tell ActiveRecord you need them. ActiveRecord then goes down to Arel and says, hey, the user wanted these particular things. Arel builds the SQL AST for the things you want, generates the SQL statement, and then hands the SQL statement back to ActiveRecord. ActiveRecord then queries the database and returns the results that you need. So how did I get started on this project?
I told you that I work for AT&T and I'm paid to work on open source, and I wanted to work on Rails for the benefit of our company. There's a feature that I've wanted to add to Rails for a very long time, and that is prepared statement support. We'll actually have that feature in Rails 3.1, and if you come up and talk to me about it later, I'll tell you about the feature, but that's not this talk. In order to add this to ActiveRecord, a deeper understanding of ActiveRecord was required. So I started diving in: fixing bugs, going through the Lighthouse ticket tracker and fixing bugs in ActiveRecord. And I ran across one that said ActiveRecord is five times slower than in Rails 2.3.5. This was before Rails 3 was released, and you can go read up on the ticket here. I thought to myself: five times slower? Really? How is that possible? And it is possible; it really was five times slower. So I figured, okay, I'll look into this and try to figure out what's wrong. I mean, what could possibly go wrong, right? So, motivation: why do we care about speed? We all know that Ruby can't scale and Rails can't scale, and yet we're all Rubyists, right? We use Ruby anyway. So why do we even care? As a tangent,
I want to show you something: I've discovered the technique for scaling Ruby. It goes like this. Very simple. Look at that: it scales. It's very beautiful. Now, the difference between Ruby and, say, Java is that when you zoom in and scale Java, it doesn't pixelate like this. That's the main difference. You know, I'm asking you all why you want to make your code faster, and really I'm just trolling you. Ruby isn't super fast. We can write faster code in C or whatever, but usually slow code is linked to poor code. So if we identify the bits that are slow, we can find bad code in our system and get rid of it. When should I make my code fast? The easy answer: when it isn't fast enough. But then the question is, what is fast enough? Whenever I think about this, I think: do people notice it, and what are you comparing it to? In my mind, fast enough means that it finishes in a reasonable amount of time, and the important part here is that a reasonable amount of time is subjective. You shouldn't spend your days focusing on speeding up some method that nobody uses, right? So what code do you improve? Only the code that matters. And I'm telling you all these things, but really I don't want you to believe me. I really don't. I want you to think critically, and go out and look at this stuff and analyze it for yourself. So: we're looking at Arel. It's too slow. How do we figure out what's too slow? Well, we don't even know that we're looking at Arel yet. We just know ActiveRecord is five times slower. How do we figure out what the problem is? We need to find this bad code; we don't even know what to measure. So let's take a look at our call stack. We know that our call stack looks something like this: somebody says Post.find(1), we go down into Arel, and we're not sure what that code is.
We can ignore it for now; it's outside of our scope. We're only thinking about Rails, so we're looking at the Rails source code. I told you Arel feeds the SQL into find_by_sql, then it goes down into execute, then down into log. The log is the very, well, I guess I should invert this: log is actually the top of our stack. We know there's not really much code between find and Arel, but we need to narrow down our problem. So what I did is look at these three methods: find_by_sql, execute, and log. The ticket complained about work per unit of time: they were trying to perform some amount of work, and it took too long; performance had degraded between 2.3.5 and the current Rails 3.0 code. So we need to figure out what had degraded, and we need to benchmark. When we're dealing with performance in our code, we have two enemies: time and space. For performance, we need to reduce certain things: we need to reduce method calls, branching and looping, and we need to reduce objects. All of these things help our time and space requirements. What I think is interesting is that for clean code, the things to reduce are exactly the same as the things we need to reduce for performance. Therefore, clean code equals performing code. Another thing that is very important when we go into this process of discovery is that measurement is paramount. If we don't measure this stuff,
we don't know how much we've improved. Recently we had Google Summer of Code, and one of the students rewrote some of our libraries in C. I asked, well, are there benchmarks for it? And the student said no, it should be faster because it's written in C. We all know that it's going to be faster when it's written in C, but who cares, unless we know how much faster it is, right? The way we can find this stuff out is through a couple of tools. One of them is called benchmark, and it comes with Ruby. You can use it like this: we have a Fibonacci function, we report a benchmark on it, and this is what the output looks like. The numbers break down like this: we spend zero time in system, so we don't spend any time making system calls; we're spending all of our time in user land doing computations. But this benchmark isn't very helpful. We just know that it took some amount of time to run this Fibonacci function some number of times. It doesn't give us much information about how the Fibonacci sequence was generated; we don't know much about it. So what we need to do is benchmark this over increasing iterations, so that we can better understand the algorithm behind the implementation. We need to increase the number of times that we call the Fibonacci function and then plot the results. We get numbers that come out like this, and when we plot them it looks like this: we can see that the time is linear, so we're increasing at a linear rate. But writing this code by hand is kind of a pain, so my new favorite tool is minitest/benchmark. Is this released yet, Ryan? Yes, it's released. There may be a beta gem, but you can use it. It's very easy to use. Here's the same benchmark written using minitest/benchmark.
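As a sketch of what that looks like, written here against today's minitest API (the talk predates minitest 5, so the class names differed at the time), with an iterative fib as a stand-in for the slide's code:

```ruby
require 'minitest/benchmark'

# Iterative Fibonacci: runs in time linear in n.
def fib(n)
  a, b = 0, 1
  n.times { a, b = b, a + b }
  a
end

# minitest runs each bench_* method over an increasing range of n
# and fits the timings to a curve.
class BenchFib < Minitest::Benchmark
  def bench_fib
    # Fails if the fit to a line has correlation below 0.99,
    # e.g. if someone swaps in an exponential implementation.
    assert_performance_linear 0.99 do |n|
      fib(n)
    end
  end
end
```

Adding `require 'minitest/autorun'` at the top makes the benchmark run when the file is executed; the output is tab-delimited, so it pastes straight into a spreadsheet.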
So we have this Fibonacci function, and the difference is this: back when we were using benchmark from Ruby's standard library, it looked like this, but now we can just say assert_performance_linear, and it does the iterations for us. What's even better is that it asserts that the growth was linear. So if somebody comes in and changes your Fibonacci implementation such that it grows at n squared or whatever, your test will fail. It's not allowed. This isn't to say that your function can't get slower, because it can get slower and remain linear, but this does keep the algorithm from going to some sort of exponential crazy thing. The output looks like this, and it looks kind of weird, but the reason is that it's tab-delimited. Ryan and I worked together on this, because I was doing all these benchmark things and it was a real pain to get the numbers into a form I could graph. So he made this benchmarking system so that basically you just take the output, copy it, and paste it into your spreadsheet program, and you can get output like this. So we have some tools for benchmarking, and now we need to write our benchmarks. We benchmark find_by_sql, we benchmark execute, and we also benchmark log, and we get the results from ActiveRecord's Rails 3 beta.
The results look like this: the purple line is the log benchmark, blue is execute, and the upper one is find_by_sql. In Rails 2.3.x it looks like this. I wanted to lay these on top of each other, but there would be six lines, which is kind of hard to read, so I just laid the two log lines on top of each other. We can see that the upper blue line is from Rails 3.0 and the lower yellow line is from 2.3. What's interesting is the delta, the change between these two lines. The delta in find_by_sql, the delta in execute, and the delta in log are all equal. And since these methods are all dependent on each other, the delta of execute minus the delta of log is zero, which means that the changes were in the log statement. We know the performance degradation happened in the log statement, so we can go analyze that more closely. To do this, we need method call analysis. What I used was perftools.rb, and if you were in Aman's talk you learned a little bit about this, so I'm not going to belabor the topic, but you run it like this, and the important part is that we get a CPU profile out here. For the Rails 3.0 beta, we get a graph that looks like this. I know it's totally unreadable; we're going to zoom in. The largest boxes look like this, and we don't know too much about what's going on. We know we're spending a lot of time in log and a lot of time in benchmark, but nothing's really popping out. The text output looks like this, and still nothing pops out at us. For 2.3-stable, we see a performance graph that looks like this, and we zoom in and see about the same thing. Now, Aman mentioned that perftools.rb is a sampling profiler, and what that means is that during your method execution, it samples to see which method is being called right now. What's interesting about that is what happens if you run some method a thousand times.
Perftools won't tell you that it was run a thousand times. It just tells you the percentage of samples that landed inside that method. Okay, so that's why we see these percentages here, and why the percentages in perftools are so important. But if we want to see the actual number of method calls, I use ruby-prof. The way I use it is like this: you put the code that you want to profile inside this block, and then you just print out a report. I ran this for n equals 1,000. Rails 3.0 beta came out like this, and 2.3-stable came out like this. Things changed so much between Rails 2.3 and Rails 3.0 that there were many different methods between the two profiles, so what I did was take a look at the methods that were in common. In most of the methods that differed, we weren't spending much time. The methods that we had in common were Time.now and Time.allocate. What was interesting is that in 3.0 beta, we were making 4,000 calls to Time.now for every 1,000 iterations, while in 2.3-stable we were making 2,000. So we had double the number of calls to Time.now in 3.0. I fixed that: I refactored it so that Time.now was only called twice, the same number of times as in 2.3-stable, and I thought, wow, it's all fixed. And then, a few hours later: it's better, but still two times slower. So I crossed these three methods off the stack. We knew that Post.find hadn't changed much between the two versions, so the only thing really left in this equation was Arel. And on a side note, this is the time when Ryan told me I needed to rewrite Arel.
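As an aside: if you just want an exact call count for one suspect method and don't have ruby-prof handy, the same idea can be sketched with nothing but the standard library's TracePoint. This little counter is my own illustration of the idea, not ruby-prof's API:

```ruby
# Count calls to one method on one receiver while a block runs,
# using only the standard library's TracePoint.
def count_calls(receiver, name)
  count  = 0
  tracer = TracePoint.new(:call, :c_call) do |tp|
    count += 1 if tp.self.equal?(receiver) && tp.method_id == name
  end
  tracer.enable { yield }   # trace only while the block runs
  count
end

puts count_calls(Time, :now) { 3.times { Time.now } }
# prints: 3
```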
I had been complaining to Ryan about this stuff every day, and this was when he said, you should rewrite it. But I didn't. I chose to make superficial improvements, and I want to talk about superficial improvements to your system. Superficial improvements to your Ruby system are what you make when you have limited domain or system knowledge. They usually involve VM tricks, and you get to see results quickly, but I believe these results taper off over time. At first, you can look at your code and rewrite things in different forms to take advantage of the virtual machine that you're running on. That can result in faster code, but this sort of low-hanging fruit tapers off over time, so you can't get as much benefit for the amount of time that you put into it. So I want to look at some of these superficial improvements, discuss them, and say why they're faster. The first one is the attr_accessor. In this code we have a def some_attribute method that returns the attribute, and functionally equivalent attr_reader code. Some of you may be surprised to know that the attr_reader is actually much faster than the method form. The reason behind this is that as Ruby executes your code, it needs to set up certain things for each call. If we look at the C code for executing an attr_reader, we don't actually do that much: we hit the attr_reader and basically just do a hash lookup to pull the instance variable's value. But when we do a method call, the C code looks like this, and I'm not even going to show you all of it. We do a bit more work, and we finally walk into this function called vm_setup_method, and vm_setup_method actually does a lot of work.
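The comparison being described is between these two forms; here is a quick sketch using the standard benchmark library (the class name and iteration count are mine, and the absolute numbers depend on your interpreter):

```ruby
require 'benchmark'

class Widget
  def initialize
    @name = 'w'
  end

  # Method form: a full Ruby method call.
  def name_via_method
    @name
  end

  # attr_reader form: an interpreter-optimized ivar fetch.
  attr_reader :name
end

w = Widget.new
n = 1_000_000
Benchmark.bm(14) do |x|
  x.report('def method:')  { n.times { w.name_via_method } }
  x.report('attr_reader:') { n.times { w.name } }
end
```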
It checks for stack overflows, it pushes a stack frame, it copies your arguments. So we're doing a lot more work in a method call than the attr_reader does. One thing that's important to note is that this particular optimization exists on all Ruby implementations, so it's better for you to write an attr_reader than the def version. A lot of times I see code like this, where we define some_attribute? as a method, and what I'd like to do instead is just change that to an alias. The alias doesn't actually copy anything, so we get the same speed benefits as the attr_reader, but we still get our predicate method. The next thing I want to look at is Hash versus inject, and I'm looking at this because I don't like inject; I see it abused a lot. We see this pattern very often. How many of you have written this? Yes, shame on you. Shame on all of you. You're all fired. We can rewrite that as this: Hash has a method called Hash[], and it'll do exactly the same thing; we can rewrite this as a map and a Hash[]. So I benchmarked these two, inject versus a Hash[] with a map, and it turns out doing the inject is slower than the Hash[] with the map. And the reason is because you're doing it wrong. No, actually, to be quite honest, there are a few possible reasons. We're actually doing a lot of work in the inject form: we're doing a hash store on every iteration, we're returning the hash, and inject needs to look at the return value of the block and pass it on to the next iteration. In the more functional style, we're creating a bunch of array objects and then passing them into Hash[]. Now, just for full disclosure, and I'm pretty sure Ryan will hate me for this:
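Side by side, the two forms look like this (the keys and values are made up; the shapes of the code are what matter):

```ruby
pairs = { 'a' => 1, 'b' => 2 }

# The inject-with-a-hash pattern: a hash store on every step, and
# the block must hand the accumulator back to the next iteration.
injected = pairs.inject({}) do |hash, (key, value)|
  hash[key.upcase] = value
  hash
end

# The map-then-Hash[] form: build key/value pairs, hand them over.
mapped = Hash[pairs.map { |key, value| [key.upcase, value] }]

injected == mapped # => true
```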
I noticed some strangeness when I was performing these benchmarks, and it was this: I benchmarked a bare Array iteration and a bare inject iteration. These do approximately the same amount of work, but when I plotted them, inject was far slower, and I don't know why. I really don't know why. I looked at the C code, and I can't tell you why one is so much slower than the other. Maybe Matz can, or maybe it's a bug; I don't know. Anyway, the reason I bring this up is that I want you all to investigate this stuff and do research on your own. So, a tangent, so that I can correct all of you who are doing it wrong: when should you use inject? You should use inject when one calculation depends on the previous one. In the earlier example, the next iteration doesn't care about the calculation that you did in the block; you're just passing a hash along. What's the point? Here is a better usage: we need to do a constant lookup. We have a string that represents a constant we need to look up, and each iteration through inject depends on the calculation of the previous iteration. This is when you should use it. So, next up is proc activation: a lambda versus just a method call. The results: calling a lambda takes much longer than a method call. And why is that? The reason is that the lambda needs to remember its context. Ruby needs to store off the variables that were available to that lambda, and as soon as you call that lambda, it needs to recall the environment within which it was created. A method doesn't have that type of overhead. So, a lot of times I see code that asks, is this a proc? And then we call call on it. But really, what I wonder is: do we care that it's a lambda? Do you really care that that code you're writing is a lambda, or do you just care that it responds to call?
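If all you care about is #call, a tiny class does the job in place of a lambda, and a class can be subclassed, mixed into, and unit-tested on its own. A minimal sketch (Tax is my made-up example):

```ruby
# Anything with a #call method can stand in for a lambda.
class Tax
  def initialize(rate)
    @rate = rate
  end

  def call(amount)
    amount + (amount * @rate)
  end
end

add_tax = Tax.new(0.1)
add_tax.call(100) # => 110.0
```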
If you just care that it responds to call, you can rewrite your lambdas as a class, like this. And that means you can actually reuse the code, use inheritance, or use modules and mixins and whatnot. It's even easier to test, in my opinion, but we can talk about that later. So, define_method. Comparing define_method against a class_eval version and a regular method, we see that the class_eval version is about the same speed as a normal method, and define_method takes much longer. The reason is that define_method uses a block, so we're paying that proc activation fee that we talked about in the last few slides. Explicit block parameters, which may surprise you: of these two methods, which one is going to be slower? The explicit one. Yes, insanely slower. And the reason is that we're actually creating a Proc object, so we're paying the cost of creating a Proc object and garbage collecting that Proc object. Now, sometimes we need a Proc object, and there's a way to get around this cost when we only conditionally need one. I hate that I have to show you this code, but it's possible, and I want to tell you about it: Proc.new. How many of you know what Proc.new does without a block? Yeah. So, when you call Proc.new without a block, it uses the block that was passed into the enclosing method. If you call this second version without the block_given? part, you'll actually get an ArgumentError, because it wants to use the block that was never passed. This code will output "hi" in the first form down here, and in the second form it will do nothing; but if we didn't do that block_given? check, we'd get an ArgumentError. The next one I want to bring up is Symbol#to_proc. I'm sure everybody here uses Symbol#to_proc. Symbol#to_proc is much slower than just using a block. Now, the interesting thing is what happens if we look at this in 1.9.
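For reference, the two forms being compared are just these:

```ruby
words = %w[foo bar baz]

# Symbol#to_proc: the &:upcase is converted into a block for you.
a = words.map(&:upcase)

# The explicit block form, doing the same thing.
b = words.map { |word| word.upcase }

a == b # => true
```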
In 1.9, that's not true. So you need to know your audience when you're making these superficial performance improvements; you need to know who you're targeting. Now, when I'm writing library code, I'm more apt to use the block form, and the reason is that I know many people will be using Ruby 1.8. Did I mention that Symbol#to_proc is actually faster in 1.9 than the block form? But the thing is, the delta between them in 1.9 is very tiny, while if you look at the delta in 1.8, it's huge. So I tend to use the block form. Knowing your audience is very important. Return value caching: I see a lot of methods like this, and I don't really have anything against the pattern so much as I wonder, how many times is this method called? Because every time that method is called, we pay that ||= price: we check to see if that instance variable has been set. I just look at this method and wonder, can the caller cache the return value? Do we really need to call this method over and over again? So: we've gone through Arel, we've taken all the superficial performance improvements and applied them, and we're feeling better, feeling great. We plot the values: the yellow line is before, the blue line is after we've made these superficial improvements, and we're getting much better. But the purple line, the purple line is where we need to be. The purple line is Rails 2.3. So what do we do? We have to go deeper. This is my best, what's his name, the guy from Inception. I'm sorry, I'm still nervous. What would Freddie Mercury do?
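Back to the return-value-caching point for a second: the pattern in question is the classic ||= memoization idiom (Report here is a made-up example):

```ruby
class Report
  # Every call to #total pays the "is @total set yet?" check
  # before returning the cached value.
  def total
    @total ||= (1..1_000).reduce(:+)   # stand-in for expensive work
  end
end

# A caller that needs the value many times can capture it once
# instead of paying the check on every call:
report = Report.new
total  = report.total
total # => 500500
```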
So what I did is I started examining the source code of Arel, and I found that we had many classes that included this module called Relation, and we had twelve classes that defined a method called bind. When I say to you the word "relation," or when I say to you the word "bind," if I ask five different engineers what these two words mean, I'm going to get five different answers. The reason is that we don't have much context; it's difficult to understand what these words mean. It was infuriating to me to go through this code and find out that everything is a relation: if it doesn't include Relation, it inherits from a class that includes Relation. Everything responds to bind, and everything has a relation. Everything had a relation, everything responded to bind, and everything was a relation, and because of Ruby's dynamic typing, I didn't know what was going on. Anyway, I understood from the code that bind was being recursively called on relations, but I just kept staring at it and asking, how does it work? So I took a look at what was going on with the tools we've seen. I started benchmarking this, and I found that we were getting lots of calls to Class.new, and we were spending a lot of time in the garbage collector, which tells me that we're creating lots of objects and throwing them away. These objects were getting created, and the garbage collector was coming along and cleaning them up. What we really needed to do was data structure analysis. I didn't understand how these data structures worked, and I needed to understand how they work. So I turned to one of my favorite tools: Graphviz. If you don't know what this tool is, you should go to graphviz.org and download it. It lets you make directed graphs from text files that look like this. This is a valid graph file, and it'll actually output a graph like this. So it's a very handy tool for visualizing your problems.
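A Graphviz input file really is just text; a trivial sketch of the format (the node names here are made up), which `dot -Tpng graph.dot -o graph.png` turns into a picture:

```dot
digraph relations {
  // each line declares an edge; Graphviz does the layout
  Where1 -> Where2;
  Where2 -> Where3;
  Where3 -> Relation;
}
```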
So the next step was that I went to my Gang of Four book and pulled out the visitor pattern. The reason I wanted to use this is that I needed to examine these data structures from the outside; I wanted to graph those data structures so I could understand how they work. The way we implement the visitor pattern in Ruby is like this: we implement an accept method, and what it does is look up the class name and dispatch to a different method based on the class of the object that you passed in. So let's say we feed in an Arel::Alias object: this will dispatch to a method named visit_Arel_Alias, and inside visit_Arel_Alias we look at the class definition of Arel::Alias, we figure out which of its methods we can call, we call into one, we call accept on the return value, and we keep walking through these relationships. As I was doing this, I would visit one class and then get an exception, but I knew the type, so I would implement the method for that type, then go look at the source code and figure out how to walk that type. Eventually, from this class, I was able to produce a dot visitor that produced Graphviz files for the data structures used in Arel, and this is what came out: spaghetti. So I tried to figure out which data structures actually mattered, cut out some of the cruft at the bottom, and came up with this. Once I looked at this data structure,
I really understood what the algorithms were doing. When I thought about producing SQL, I thought in the compiler sense: we produce an AST, and we take that AST and turn it into SQL. So I assumed that someone would implement it that way; I assumed I was looking at an AST of some sort. Actually, it wasn't an AST at all. It was a linked list. Well, we can argue semantics, but it is a linked list, and the way it works is that we walk this linked list, and each item in the list contains the data for the call that created it. So if we say Post.where(one).where(two).where(three), then we have nodes for each of those calls, and each node stores the parameters. So we have a linked list that looks like this, and it just stores all the values along it. But the way it worked is that it recursively called back along this relation: it called back, calling bind, and bind would build the objects, so we'd end up with objects like this. This is what our graph would look like, and if your linked list continued forever, this would continue forever. To make this more concrete: here we have where one equals one, where two equals two, et cetera, and part of our linked list will eventually look like this. Okay.
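To make both ideas concrete, the class-name dispatch of the visitor and the where-chain linked list it walks, here is a compact sketch. None of these class names are Arel's real ones:

```ruby
# A where-chain stored as a linked list.
class Where
  attr_reader :condition, :tail

  def initialize(condition, tail = nil)
    @condition = condition
    @tail      = tail
  end

  # Each call allocates one more node pointing back at the chain.
  def where(condition)
    Where.new(condition, self)
  end
end

# A visitor that dispatches on each node's class name, like the
# visit_Arel_Alias example from the talk.
class ToSqlVisitor
  def accept(node)
    send("visit_#{node.class.name}", node)
  end

  def visit_Where(node)
    parts = [node.condition]
    parts.unshift(*accept(node.tail)) if node.tail
    parts
  end
end

chain = Where.new('a = 1').where('b = 2').where('c = 3')
puts 'SELECT * FROM posts WHERE ' +
     ToSqlVisitor.new.accept(chain).join(' AND ')
# prints: SELECT * FROM posts WHERE a = 1 AND b = 2 AND c = 3
```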
Now, big O. O stands for OMG, and when your big O gets big enough, you use a big Z, which stands for zOMG. Big O is a mathematical representation of the algorithm that you're using, and really it can represent anything: we can use this mathematical representation to measure the amount of memory that we're using, or the amount of time we're consuming, or the number of function calls that we're making. Really, anything we want to describe about our program, we can describe in terms of big O. I want to look at a few big O functions, and then talk about the big O of Arel. Here we have a constant-time function: no matter what the inputs are, it takes a constant amount of time to produce an output. Slightly worse, we have log n; this would be, say, doing a binary search. When we give it a certain number of inputs, we get a certain number of outputs, and when we plot those, it looks like a log curve. Here we have linear growth, proportional to the input. Here we have n log n growth, which would be something like heapsort, or maybe quicksort in the best case. And here we have squared growth. These are the ones that you usually run into. In order to find the big O, all we have to do is take a known input, measure the output, and plot it. That's all we have to do. And once we've plotted enough points, we look at what mathematical function would produce those same points. So, Arel's big O: we start with one node and we get one object; with two we get, what is that, three; I can't remember, but it goes on like this. As we increase the nodes in that list, our graph of objects in memory looks like this. And what does this look like? This looks like n squared. If you said n squared, you're pretty much on; it's exactly n times n plus one, over two. As we increase n to infinity, we find that the one-half drops away, and the value of n squared rises much more quickly than the value of n.
So really we can just call this big O of n squared. So now we understand: we take the number of links that are in this list, we square that, and that's the number of objects that are going to be created in our system. So when somebody reports a bug that says Active Record and ARel take over two minutes to generate a pseudo-complex SQL query, we know why. We know exactly why. And unfortunately, no amount of small improvements like the one we were looking at earlier will fix this. We have to do deep improvements.

The thing I don't like about deep improvements: I think the system impact looks a little bit like this. Our knowledge grows, but we can't really make many deep impacts to the system until we have more knowledge. It's more expensive for us to make these changes. But in ARel's case, we know what the right solution is. We know that this should be an AST and a visitor. These are known technologies; generating SQL is a solved problem. We know that it can run in O(n) time.

So we wonder: should I rewrite? We have a clear solution. We have many tests for this code. The public API is limited. So, in my opinion: yes, we should rewrite this code.

Six weeks later: ARel today. It took six weeks to rewrite, and it's now two times faster for the Post.find(1) case. The adapter-specific code is DRY. If you run this, the Post.find(1) simple case is actually slightly faster than Rails 2.3. Our flog scores before were about 2,500 total; now it's about 1,800. The flay score before was 684, and flay complained about twelve times.
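The AST-and-visitor shape the rewrite moved to can be sketched like this. The node and visitor names here are made up for illustration (ARel's real classes differ); the key property is that each node is built once and the visitor touches each node exactly once, so generating SQL is O(n) in the number of nodes:

```ruby
# Toy AST nodes for a SELECT with WHERE conditions.
SelectStatement = Struct.new(:table, :wheres)
Equality        = Struct.new(:left, :right)

# A visitor that walks the tree once and emits a SQL string.
class ToSqlVisitor
  def visit(node)
    case node
    when SelectStatement
      sql = "SELECT * FROM #{node.table}"
      sql << " WHERE " << node.wheres.map { |w| visit(w) }.join(" AND ") unless node.wheres.empty?
      sql
    when Equality
      "#{node.left} = #{visit(node.right)}"
    when String  then "'#{node}'"
    when Numeric then node.to_s
    end
  end
end

ast = SelectStatement.new("posts", [Equality.new("id", 1), Equality.new("title", "hi")])
puts ToSqlVisitor.new.visit(ast)
# SELECT * FROM posts WHERE id = 1 AND title = 'hi'
```

Because the visitor carries the target-specific logic, each database adapter can subclass or swap the visitor while the AST stays the same, which is what makes the adapter code DRY.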
Today it's 420, with seven complaints. And what's even cooler is we have new features. Now we can do this: when we make a query in Active Record, you can actually see the SQL parse tree. The output from this will look like this. So if you want to see how complex your SQL statement is, or how the parse tree looks, it's right there for you.

So, ARel tomorrow. Right now we just have SQL compilers, but ARel just stores an AST. What that means is we can write anything to translate that AST; it doesn't have to be SQL. We already have people who are working on this, on integrating with Mongo: you can walk this AST and, rather than produce SQL statements, go out and get data from Mongo. We can even write optimizers if we want to. Any type of fun compiler tricks, we can do that with ARel.

So, conclusion, a.k.a. the things I've learned. System impact looks like this, and right there in the middle is a very depressing time. Our superficial improvements grow logarithmically, so when we reach the top, it seems like we're not doing much. But when we want to do these deep improvements, we can't really do much there either, because our knowledge is too limited. So it feels like we're stuck in a rut. But if you keep going and learning more, you can actually apply these deep improvements to your system and make things even better.

I learned: when should I rewrite? This is the rewrite timeline. This bar is the timeline: the left side is the earliest you should rewrite, and the right side is the latest you should rewrite. And I see it like this: the earliest you should rewrite is when Ryan says so; the latest you should rewrite is when I say so. So you should probably pick a time in between. If you need to know, just ask Ryan, then ask me, and then pick a time in between.

We emphasize the art of code; we should not forget the science. I want you all to learn the specific, but embrace the generic. Here are the photo credits.
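To illustrate the "anything can translate the AST" idea, here is a toy visitor that walks a tree of hypothetical nodes and emits a Mongo-style selector hash instead of a SQL string. None of this is a real driver API or ARel's actual node set; it is just the same visitor pattern pointed at a different output:

```ruby
# Toy AST nodes (hypothetical names, distinct from any real ARel classes).
Predicate = Struct.new(:field, :value)         # e.g. id = 1
Query     = Struct.new(:collection, :predicates)

# A visitor that walks the tree once and emits a selector hash you might
# hand to a document-store driver, rather than a SQL string.
class ToMongoVisitor
  def visit(node)
    case node
    when Query     then [node.collection, node.predicates.map { |p| visit(p) }.reduce({}, :merge)]
    when Predicate then { node.field => node.value }
    end
  end
end

collection, selector = ToMongoVisitor.new.visit(
  Query.new("posts", [Predicate.new("id", 1), Predicate.new("title", "hi")])
)
p [collection, selector] # ["posts", {"id"=>1, "title"=>"hi"}]
```

An optimizer fits the same shape: a visitor that walks the AST and returns a rewritten AST before any compiler runs.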
I need to say thanks to [inaudible]. Thank all of you. And one more thing: this is RubyConf, RubyConf 10. Give Ryan a kiss. When you're rating the talks, it's rated by number of slides; I had like 259, so... also spandex and kisses.

Yes, in the back. So the question is: when you're making these superficial improvements, you're not changing the O(n), but you're making it faster; how do you ensure that somebody else doesn't come along and make it slower so that it's prettier? I don't know. I can't give you a good answer for that. But what I would suggest is: if you start using minitest, or minitest/benchmark, and start benchmarking your code, not necessarily failing or passing based on the benchmark, but keeping track of the speed over time, you can plot that and then find where the errors are. And that's actually something you can go back and do later. John Barnette was working on a gem called Castigate, I think, that lets you go back in time on your git repo. I don't think the project is maintained now, but that might be a good idea: you can move back in time across your git repo, plot statistics, and find those errors.

Yes? The question was, I assume, whether benchmark works in minitest's spec mode, and the answer is: it does. Okay. And it is now released.
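The minitest/benchmark suggestion from the answer above can be sketched like this: instead of asserting a wall-clock number, you assert the growth curve, so a change that quietly bends linear code into something worse fails the suite. The class name, benched code, and correlation threshold here are just for illustration:

```ruby
require "minitest/autorun"
require "minitest/benchmark"

class BenchSummation < Minitest::Benchmark
  # Runs the block for n = 1, 10, 100, 1_000, 10_000 (the default range),
  # fits the timings to a line, and fails if the fit is poor.
  def bench_summation_is_linear
    assert_performance_linear 0.95 do |n|
      n.times.reduce(0) { |sum, i| sum + i }
    end
  end
end
```

There are also assert_performance_constant, assert_performance_logarithmic, and assert_performance_exponential variants, so you can pin whichever curve your code is supposed to stay on.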