 I knew this was a pirate-themed conference, but I didn't have any pirate outfits at home, so I brought the closest I have, which is like a Gilligan-type thing here. I had a pipe. I don't actually know how to use a pipe. I just, this is a prop. I was hoping to get one of those pipes. I wanted to get one of those pipes that like blows bubbles and stuff, but they look too much like toys, so I bought a real one. I need to figure out how to retrofit this one. Anyway, so I saw that Twilio was one of the sponsors. I have a friend who works at Twilio, and we were out the other day, and he just seemed like super hungover, and he wasn't really engaging with me very well. I thought he was just phoning it in. Yes! 40 minutes of this. Sorry, I just thought of that with an elephant. Anyway, so I'm here to talk to you about cat care and maintenance, and I want you to know that what we're gonna discuss, these are just best practices, okay? They're just best practices. So first off, I'm gonna talk a little bit about feeding cats. It's very important you need to feed them, otherwise they might die or run away or something. You need to do this at least once a month, at least. Very minimum once a month. Otherwise they get really upset and they look like this. They just get really upset. And then, actually, this is my other cat here. She got really upset. I don't know why they do this. They sit in the same bowl together. I'm not sure why. The other important thing you need to do to keep cats alive is make sure that you give them plenty of hugs, which I do very frequently. This is one of my hugs that we do. Like, we do this much more frequently than Fridays. This is an everyday thing. Much more frequently than feeding them, for sure. Also, you need to take care of their cat boxes. So I'm just gonna show you a little bit how you do that. Like, this is a cat box. You need to make sure to clean that out. But I have to, so I have to tell you something. This is not actually a cat box. 
This is actually a cake. Yeah, I'm totally serious. I'm very serious. There's actually a thing around this, like, Google "cat box cakes." You'll find, like, people do this all the time. It's hilarious. So I want you to know that all of that might be wrong. I'm not sure. These are just best practices. Again, your mileage may vary with these. So anyway, with that, let's continue to not talk about cat care. This is a Ruby conference. Let's talk about something else. So, hello. Hello. Hello. I have to admit I was hoping that no one would come this morning, because I'm scared. I thought maybe we could just go out and have a coffee and then pretend that none of this ever happened. I guess not. Anyway, my name is Aaron Patterson. I have come from the United States to bring you freedom. Yeah, freedom. Yeah, America. Yes. So I work at a company called Red Hat, and I'm on a team at Red Hat, the ManageIQ team. We develop a product for managing virtual machines. So any type of virtual machine that you might have, if you need to manage many virtual machines, you should use our product. It is open source. You can go there and get it. I'm on the Ruby core team and the Rails core team. This does not mean I know what I'm talking about. It just means I'm terrible at saying no. You can find me on Twitter as Tenderlove, GitHub as Tenderlove, Instagram as Tenderlove. And you can also find me on Yo as Tenderlove. So if you want to Yo me, you can use that name. I'm sure my phone is gonna start buzzing soon here. I am the number one contributor to Rails. You can see that I have many, many internet points. Many points. Yeah, I'm actually thinking about trading these points in for a flight somewhere, you know, go on vacation or something. Anyway, I want to give you all the secret to becoming the number one committer. There's actually a secret to doing this. You know, I don't actually know anything special except for this one secret.
This one secret, other Rails committers hate me for it. There's this one secret that I don't want you to share with anybody else, but the secret is that revert commits count too. So the more mistakes you make, the more points you get. So, you know, you all can be number one as well. Just make a lot of mistakes. That's how I do it. So, as you saw earlier, I have two cats. This one, his name is Gorbachev Puff Puff Thunder Horse. We just call him Gorby. And then my other cat, her name is SeaTac Airport YouTube Facebook. We just call her Choo Choo. Her natural habitat is on top of my laptop. This is actually where she grows. We set her there, water her a little bit. I told you we would talk about cat care and maintenance, right? You probably can't see it, but she's just totally mashing the keyboard there. Anyway, I'm so glad I use Git, wow. So anyway, recently I've been studying Node.js a lot. And the reason I've been doing this is I wanna get a lot closer to the metal, right? Like, I'm trying to get real close to the metal, and I know Node.js is a way to do it. But recently I've actually done it. I'm very close to the metal now, I will show you. I'm extremely close there. Look how close that is, it's amazing, very amazing. Anyway, so Node.js, yeah, good stuff, so close to the metal. Oh, we're in Belgium. Did any of you realize this? This is amazing, we're in Belgium. And one thing that I really like about Belgium is that there's a lot of great beers here. One of the best beers is actually imported from the United States, it's called Bud Light. I don't know if you've ever heard of it. It's Bud Light, you should give it a try, it's very good. But I mean, I didn't just drink that last night, I also had, like, traditional Belgian beers like Stella. It's a very, very, very good beer.
And I mean, like, don't worry about it, like I'm getting all the Belgian culture, like I had Belgian fries last night, went to this traditional fry place, it was super good, so really, really good. And last night I was telling some people, I gotta tell a story. So recently, recently my parents found out my name. So I wanna tell a story of, like I tell my parents what I do, right? I tell them, well, I'm a programmer and many people know who I am apparently and I like to program a lot and I guess I'm okay at it, but I never tell them my name. They don't know, they don't know that people know me by tender love. So there was a conference in my hometown and I decided to myself, you know, it was like I would really like my parents to see what I do someday, like it should be great, like at least see me give a talk once. And the organizer said, hey, we had a person cancel, would you like to come speak at our conference? And I said, yeah, sure, but only if you give me two extra seats, like I want two extra tickets for my parents because I'd like them to come see. And the organizer was like, yeah, sure, absolutely. So, you know, we go to the conference, arrive there in the morning, I meet the organizers, you know, my parents were there too, we meet the organizers and like, he's like, oh great, you're all here and he's smiling and stuff and I'm like, yeah, that's great. And he's like, we've reserved three seats for you down at the front row, like here I'll take you down there. So we go down to the front row and there's three seats there and they have a sign on each of the seats. And the first sign says tender love, the second sign says tender mom and the third sign says tender dad. And I'm just like, no, no, not now, not now. So I'm like, okay, okay, something you need to know about me before I give a talk here is that people on the internet know me by this name, tender love, just don't worry about it, just be cool, people are gonna ask you, like just don't worry about it. 
Really, so they're like, okay, okay. I could tell they had more questions for me, but fortunately my talk was, like, right then. So I'm like, I gotta go. So yeah, we haven't talked about it since then. I don't know exactly what they think about it. So yeah, it was kind of weird. I was forced to give that up. Anyway, so let us continue on to the actual topic. I wanna talk a little bit about improvements to Ruby. Like, I was thinking about improvements to Ruby lately. I think it's very interesting. So I wanna go through some of the improvements that we've had to Ruby over the past 10 years. So if we think about Ruby, like, what was Ruby 10 years ago? 10 years ago, Ruby was an AST interpreter. It was an interpreted language that would just interpret an AST, and what this means is that if you had some code that looked like this, it would get parsed and turned into a tree, stored internally as a tree, and the tree would look like this. And the way that this code would get interpreted is that the interpreter would walk this tree and evaluate each thing. So it'd go to the if statement and say, oh, well, we have a conditional, we better test double equals. Well, how do we test that? First we need to evaluate foo, then we need to evaluate bar, then we need to check whether or not they're equal. And if they're equal then we're gonna walk over to the true branch, and if they're not then we're gonna walk over to the false branch, and execute each of those, execute that way. So that's how our entire program worked. And it didn't allow for certain optimizations. We couldn't do some of the optimizations that we can do today on a virtual machine, like peephole optimization and other various VM optimizations. So this is the way it was 10 years ago. 10 years ago we had a stop-the-world mark-and-sweep garbage collector, where the entire world would just stop.
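Here's a toy sketch of what "walking the tree and evaluating each thing" means. This is my own illustration, not MRI's actual node structure: a hypothetical `Node` struct and a recursive `evaluate` that handles the `if foo == bar` example from the talk.

```ruby
# Toy AST-walking interpreter (hypothetical node format, not MRI internals):
# each node is evaluated recursively, exactly the "walk the tree" strategy
# described above.
Node = Struct.new(:type, :children)

def evaluate(node, env)
  case node.type
  when :lit then node.children[0]           # literal value
  when :var then env[node.children[0]]      # variable lookup
  when :eq  # evaluate both sides, then compare
    evaluate(node.children[0], env) == evaluate(node.children[1], env)
  when :if  # evaluate the condition, then walk the chosen branch
    cond, true_branch, false_branch = node.children
    evaluate(cond, env) ? evaluate(true_branch, env) : evaluate(false_branch, env)
  end
end

# `if foo == bar then 1 else 2 end` as a tree:
tree = Node.new(:if, [
  Node.new(:eq, [Node.new(:var, [:foo]), Node.new(:var, [:bar])]),
  Node.new(:lit, [1]),
  Node.new(:lit, [2]),
])

puts evaluate(tree, foo: 10, bar: 10) # => 1
```

Every node visit here is a Ruby method call with a `case` dispatch, which is exactly why a bytecode VM with peephole optimization can beat this approach.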
Like, it's just GC time, it's like, okay, hold on a sec, everybody hold on, program, one sec. It would go through, mark all the objects, sweep all the objects, and then say okay, I'm done, now go ahead and continue. So you'd see these weird jitters in your program. You still see them today, but not nearly as bad as 10 years ago. Today, if you look at the Ruby of today, we have a virtual machine. We actually have a virtual machine built into MRI. This came out in Ruby 1.9, so it's actually pretty old, but I mean, 10 years ago we didn't have this at all. In 1.9.3 we started doing lazy sweeping in the garbage collector, which basically incrementally sweeps objects away, so it decreases GC pause time by reducing the average time we spend in GC. In Ruby 2.0 we had bitmap marking, and what this is is we keep a table that maps objects to whether or not they've been marked. Before, during the mark phase of the garbage collector, we would go through and mark each particular object, and what happened was each object would be modified in memory, and the reason this was bad is because it wasn't copy-on-write friendly. So if we forked off a process, like we were talking about yesterday, forking off processes, as soon as an object got marked it would have to be copied into the child processes. So what bitmap marking did is just said, okay, we're gonna keep a smaller table that has all of our mark bits in it, so only that smaller table needs to get copied among child processes. So this helped out a lot with memory utilization in the garbage collector. The next thing, now in Ruby 2.1, we have a generational garbage collector. This is actually pretty amazing. It's a restricted generational garbage collector: it separates objects allocated in C from objects allocated in Ruby, and it also has a write barrier for those objects, so that we can actually have multiple generations for objects allocated in Ruby.
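If you want to see the 1.9 VM for yourself, MRI exposes its compiled bytecode through `RubyVM::InstructionSequence`. This is a small illustration of my own, not something from the talk:

```ruby
# The VM that shipped in Ruby 1.9 compiles source down to bytecode instead
# of walking an AST at runtime. `compile` only compiles (it never runs the
# code, so the undefined `foo`/`bar` calls are fine), and `disasm` shows
# the resulting instructions.
iseq = RubyVM::InstructionSequence.compile("if foo == bar then 1 else 2 end")
puts iseq.disasm # the conditional shows up as a branch instruction
```

The exact instruction names vary between MRI versions, but the conditional always compiles to a branch over a linear instruction stream rather than a tree walk.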
So then, thinking about tomorrow, like, just coming up soon here: this year we're gonna have Ruby 2.2, which is gonna introduce symbol garbage collection. We're actually gonna have symbols be garbage collected, and if you watch any of the security releases for Rails, you'll know that this is actually a huge deal. This is a big deal because a lot of our security vulnerabilities are due to denial of service attacks where symbols are not garbage collected, so we may allocate a symbol inside of Rails that'll never get garbage collected, and if people keep allocating symbols over and over again, it'll use up all the memory and crash the process. So in 2.2 we're gonna have symbol garbage collection. There are some caveats to this, and we can talk about it in the hallway later or during Q and A, but yeah, we're gonna have symbol garbage collection. This is huge: we're actually gonna have incremental garbage collection in Ruby 2.2 as well. So previously we talked about the incremental sweep phase, the lazy sweep; in 2.2 we're gonna have an incremental mark phase as well, so even further improvements to the garbage collector. And actually, a few weeks ago I was at RubyKaigi, and I saw some presentations there. One of the presentations I saw was actually a JIT for MRI. A true JIT: this thing would just-in-time compile Ruby code down to machine language, and in the demonstrations the presenter gave, it was between two and ten times faster, two to ten times faster code, from using machine instructions. So you can go check out the project. It had some bugs, but I mean, it's amazing to see this stuff, which is why I put question marks there: we have no idea when this will come in. But the point is, these are unimaginable improvements. These are amazing. 10 years ago we would not have imagined this. 10 years ago we would have said, oh, garbage collection, like, a generational garbage collector? No way. Impossible, impossible.
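To make the symbol GC point concrete, here's a small sketch of my own (exact counts depend on your Ruby version; on anything 2.2 or later the behavior below holds):

```ruby
# Before Ruby 2.2, every String#to_sym pinned a symbol in memory forever,
# which is what made symbol-flooding denial-of-service attacks possible.
# With symbol GC, dynamically created symbols with no remaining references
# are reclaimed on a full collection.
before = Symbol.all_symbols.size
100_000.times { |i| "dynamic_sym_#{i}".to_sym } # create short-lived symbols
GC.start                                        # force a full GC
after = Symbol.all_symbols.size
# `after` stays far below before + 100_000, because the temporary
# symbols were garbage collected.
puts after - before
```

On a pre-2.2 Ruby the same loop would permanently grow the symbol table by 100,000 entries, which is exactly the crash scenario described above.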
But even today, if you look at the benchmarks game, you'll see we're actually starting to beat Python in benchmarks, which I think is interesting, because for the longest time Python people have trolled us about how fast Python is versus Ruby, but we're actually starting to catch up, which I think is really, really awesome. So this graph shows Ruby time divided by Python time, which means that the lower the bar is, the better we are in that benchmark. But what really annoys me is that people still complain. People still complain, they're like, oh, Ruby is slow, or Ruby this or that, the garbage collector sucks, all this stuff. But if you look at these slides, you'll see it's not true. Ruby is getting really, really good. There's just a lot of this FUD that's out there. So what I've decided to do is invent a new language, a new programming language, and I'm calling this language Poo Lang. This is the language. This language is specifically engineered to be the worst language ever, so even worse than PHP. So the point of this language is that if anybody ever says to you, if you think Ruby is bad, have you seen Poo Lang? Come on, I mean, that is the worst thing ever. Side benefits of this language include many, many jokes. For example, in Poo Lang, everything is a code smell. We're not class-based, we're pile-based. I've decided that I'm going to implement this entire language in Excel, for speed, I guess, and to see if I can do it. It'll be amazing. This will be the best language for working with Excel ever, because we do that all the time. Anyway, Poo Lang will be the worst language ever, and you can go check out our website, PooLang.org. The website has some known issues. As soon as I posted this website... like, this is what it looks like in Safari, this is what it looks like in Chrome.
So I get this, but, like, I put this up there, and this is literally the entire site, right? I put this up there, and immediately somebody files an issue: it doesn't work on Windows. You see this box. Somebody else files an issue: it doesn't work on Chrome. But I need you to know: I don't care. I don't care, the language is called Poo Lang, I mean, come on, seriously. Anyway, so yes, go ahead and check it out, we'll be releasing some Excel code shortly. Anyway, today let's move on to some serious topics, very serious topics, especially for 9:30 AM. This will be amazing. We're going to talk about some GC tools, garbage collection tools, we're going to talk about memory profiling, we're going to talk about speeding up helpers in Rails, and we're also going to talk about speeding up output from Rails as well. This is just some of the things that I've been working on in Rails recently, and what I want to share with you today are the tools that I've been using for doing profiling against Rails, so hopefully you can use these tools with your application as well. And I'm going to focus mostly on memory profiling and that type of performance. So the first thing I want to talk about is some GC tools. Now, most of these GC tools come built in with Ruby, and I'll point out when it's a gem versus when it comes built in. The first ones I want to talk about are ObjectSpace.dump and ObjectSpace.dump_all. These ship with Ruby; I believe it's in the standard library. The way that you use it is you use ObjectSpace.dump with one particular object: you give it a Ruby object and it gives you information about that object. So you use it like this: here I'm finding a model from Active Record and I'm dumping that, and the dump output is JSON. This is the JSON output from it, and you'll see you get the address of the object, the memory address of the object. Oh, by the way, these APIs are extremely specific to MRI. Most of the stuff I'm presenting today is very specific, so don't expect this to work on other virtual machines.
So it tells you the type, tells you the class, instance variables; it also gives you references and stuff, and you'll see down there that wb_protected: that's actually the flag for whether or not this object has write barrier protection. So the next one is ObjectSpace.dump_all, and what this does is it dumps your entire heap to a JSON file. So if you execute this code, you'll see the return value is just this file, like a file handle, and that's a JSON file that's full of your entire heap; it's your whole heap dumped out as JSON. What's cool about this is I can say, okay, I'm gonna take that previous object, the object ID, or the memory address that I had from that Active Record object, and I wanna actually visualize the memory around that object, so I can see what that object is and all of its references. So if I take those two pieces of information, I can parse the JSON file, reconstruct all the references, turn it into a graph as a dot file, and you get something that looks like this. So you can see at the very top there is the Active Record object, and those are all the references that it holds, going down the object tree, or object graph. One thing that I think would be cool, and this doesn't exist today, is it'd be really cool if we actually had a dump server built into the Ruby process itself, so we could actually connect to the process and say, hey, dump all this stuff out, dump your heap out as a JSON file and give it back to me. We could probably do this pretty easily today in Ruby, except that doing it in Ruby would impact our heap dump, so that might defeat the purpose. Anyway, the next tool I really like to use is GC.stat. This is also built in. You can call GC.stat two ways: you can call it with no arguments and it will return a hash, or you can call it with an argument, and the argument is a key for that hash. So the hash is just a bunch of statistics about the garbage collector, and I haven't listed all of them, but there's like a bunch of different
keys. Now, I prefer to use the second call, which is GC.stat with a parameter. The reason I typically do that is because calling GC.stat with no arguments actually allocates a new hash, which means that monitoring your garbage collector also impacts the garbage collector, and we don't want any cats to die or anything like that, so it's better to use that bottom one. So here are all the keys for it. I know it's very small, but I need you to read them all and memorize them, because there's gonna be a quiz at the end. I'm just kidding. So the main one that I like to use is total_allocated_objects, and what this one does is it returns the number of objects that have been allocated in your system, ever. Ever. So every time an object is allocated, this counter gets bumped up by one, so you know how many objects have been allocated throughout your entire system, ever. And you can see how this works. Here's an example of it: we'll say, give me the number of objects allocated, then we'll allocate 10 objects, then we'll ask again, and you'll see the return values here. Those two numbers on the left and right are exactly 10 apart from each other, because we allocated 10 objects. So we can use this to figure out, well, how much does it cost to call find on Active Record? So we'll say, all right, give me the total number of allocated objects, find 10 records, then count the total number of objects again, and then divide by the number of records that we found. Then we know how many objects we had to allocate in order to find one Active Record object, or find one model. So what I did is I took this particular test and I ran it across a bunch of different branches, and these are the branches I ran it across. Along the x-axis is the branch that I tested against, and the y-axis is the number of objects allocated per Active Record model. So you can see, like, 3.0 stable, we
went up, and then 3.2 stable is the worst, and we're going down with 4.0, 4.1, and finally we get down there to master, and master is very good compared to all of these. I don't have the 2.3 numbers up here, but 2.3 was actually down around master. And I kind of think this is sad, because, like, 3.2... that's a lot. That's huge. The thing to take away from this graph is: please, please upgrade. Please upgrade. But you know, we shouldn't have gotten up to 3.2 stable levels, and I think this is interesting, because we don't actually have a good way in Rails right now to make sure that we don't regress on things like this. We have a very large test suite, so we make sure that there aren't any bugs introduced, but as far as performance, you know, runtime performance or memory usage, we don't have any way to measure that over time very well right now. So we're just starting to do stuff like this, and these are, like, the first examples of it, and now that we're actually looking at it, we can say, oh wow, how did it get so big? Let's fix it. So I also want to show, like, this is a demonstration of testing a request. I wanted to study how many objects we allocate for just making a request through the system, and what this does is it constructs a Rack environment and pushes a request through the system, and then measures the number of objects allocated for that. So what this is testing is just books/new; this is just a normal scaffold page for books/new. The first thing we do is set up the request. This sets up the caches, basically heats up our application, and then we actually run the test down here and figure out, well, how many objects have we allocated through the system? And this is a graph of the results. You'll see along the x-axis, those are our branches. So master is looking extremely good, like
we're down there, very, very low. But you need to know that the very bottom number is 2000. So this is not just a talk about performance or cat care, this is a talk about how to lie with graphs. If we make that very bottom number zero, it looks like this, which is very sad. It makes me very sad. So this graph doesn't look very good, but you need to know this is actually a 19% reduction in object allocation since 4.0 stable. That sounds very good. A 14% reduction since 4.1 stable. And we're gonna get even better: I have ideas for improving this even more, so I think we're gonna see an even greater reduction of object allocations in the future. Oh, thank you, I will use this chance for a water break. Some fine vintage water here, 2014. So the next thing I wanna talk about is this gem, allocation_tracer. This gem was written by Koichi Sasada; he's been working on all of these awesome GC features that I showed you earlier. allocation_tracer gives us a really amazing view into the garbage collector and the memory consumption that we have in our processes, so definitely check out this gem. I'm gonna walk through some of the features of the gem, not all of the features, because it actually has a ton of stuff in there. So one thing I like to do with this is look at total object allocations for an Active Record object. This is how you use AllocationTracer: you just have trace, you give it a block, and inside the block you run your test code. This will tell me the total number of object allocations, and it does it by file and line, right? So if I run this and sort it, down at the bottom there I'm sorting it by the highest number of object allocations, you'll come out with a result that looks like this. These are just the top, I don't know, top four, I suppose, top four locations for object allocations when running that particular test code. And the top here, well, I guess it's the very bottom here, the worst offender is Hash#except, and what Hash#except is
is... it's an Active Support thingy. It's like, hash dot except: you give it some keys, and it's like, I'll give you a new hash that doesn't have those keys in it, right? And this is what the code looks like. So this is what it looked like inside of Active Record: we'd say, okay, types.except, give it a bunch of keys, and then iterate over this hash and do some stuff with it. And you'll notice that this actually allocates a new hash, like I said; it allocates an array because it accesses the keys; and calling with splat args allocates another array. So you can see where object allocations are starting to build up from this. And essentially, if you look at this code, what we wanted to do here is iterate over one hash but just skip keys in the other one, right? We just wanted to skip those keys. That was the point of this code. So I refactored it to just say, okay, let's use each_key and go to the next one if that key is contained in the other hash, right? So what this did is: I didn't use keys.each, you'll notice I used each_key, and the reason I did this is because calling .keys will actually allocate an array. We would allocate an array and then iterate, whereas you can call each_key on the hash and it won't allocate an array, it'll just yield each key in the hash, so you can avoid an array allocation there. So then here I said, well, we're just gonna do next if we have that key, and the reason we do this is because it avoids allocating a whole new hash that we're just gonna throw away; it also avoids allocating an array, which is what that splat args does. So we can say, well, just skip if we have that key. Now, if we apply this patch in both places where we're using Hash#except (I'm not showing the other example of Hash#except; it's slightly different, mostly the same but slightly different), we'll actually see that the allocations go down even further. So right there we have master, which is what I
showed you earlier, and then master plus one, which is this commit. What's also interesting is that we can get allocations by type. So that previous output that I showed you was total allocations per line, and that doesn't really tell us what was being allocated. We know, us together in this room this morning, we know by looking at that code that we are allocating hashes and allocating arrays, but maybe you don't know that. Like, you look at the code, and maybe you know that because I told you, or you know that because you have experience with that code, but maybe you're not sure. So you can use AllocationTracer to find what types are being allocated, and you do it like this: you say, give me the allocated_count_table, and that will output all the total allocated objects by type. So you get a hash back that looks like this. This isn't the entire hash, this is just a sample of it, but you can see, you know, we have 68 strings allocated, 63 arrays allocated, et cetera, et cetera. So we know what types were allocated, and here's a graph of it over time as well. Along the x-axis there is the type, and the y-axis is the count, and the different colors represent the branches. And you can see, like, T_DATA, object, hash, node: those don't change very much. It's really the array and string that we want to look at; those are our big movers and shakers there. And you can see, like, on master we reduce strings greatly, and also arrays we reduce a lot. And if we break this down by just master plus that one commit, if we just want to compare those two, master versus master plus the except patch that we applied, this is the change that we see. We see, okay, well, we reduced hashes, and we also reduced array allocations. We didn't touch strings, but we all knew that. Looking at the code, we knew that before, but maybe you didn't know that previously, and you can use this tool to find out exactly what impact you're having on your code base. So the lessons from this are: one, avoid
Active Support unless performance doesn't matter. This isn't necessarily true; I shouldn't bash on this completely. What I'm saying is, if you have some bit of code that you know is a hotspot, you probably don't want to be using Active Support in that particular case. Active Support is probably doing extra work that you could just be doing with regular Ruby code. So, you know, measure your code, look for those hotspots, and probably avoid Active Support in those particular cases. Also, allocation_tracer is amazing. Really, get this gem, check it out, try it out on your code. It has way more stuff, but I did not cover all of it. So the next thing I want to talk about is speeding up helpers. I'm going to use these tools that we were talking about to speed up helpers in Rails. If we take a look at profiling a request and response, like we saw in that benchmark previously, if you look at the output from that request-response benchmark, you'll see output that looks like this. This is the percentage of time, like, where we're spending time, and you'll see that the very top line there, where we're spending the most time, is in ActiveSupport::SafeBuffer#initialize. This object is used for HTML sanitization in Rails. So if we rerun this benchmark and find where those objects are being allocated, like, we use AllocationTracer to figure out where these things are being allocated inside of Rails, you'll see that they come from this particular method called tag_options. What this method is for is outputting the options in your tags. So, for example, in your form tag you'll have the action attribute, or in your A tag you'll have the href attribute. That's what this method is for, outputting those. And it all comes from this ERB::Util.h. So it comes from this; this is the thing that actually does escaping on the value. So to understand what this method does, let's talk about HTML sanitization in Rails. In Rails, ordinary strings are considered to
be dangerous. So if you say, give me a string, we say x is equal to a string, we check it, it's a String class, you ask if it's HTML safe, and it says no, it's not. And what this means is that when we output that string, we're gonna escape it, right? When we go to write that data out to the client, we're going to escape it. Now, let's say you have a string that you consider to be safe. You can call .html_safe on it, and that'll return to you an ActiveSupport::SafeBuffer, and if you ask html_safe? on that, it'll return true. Now, the important thing to note about this is that this is just tagging the string, right? This is just tagging it as HTML safe. You could actually have some dangerous data in here, but what it means when you tag it as HTML safe is that Rails will not touch it. We keep our hands off of it, and we write it out to the client as is. So you need to make sure that it's escaped before you tag it as HTML safe like this. So the ERB::Util.h method, what that does is both of these things. The important thing is that ERB::Util.h actually does the escaping and the tagging. So we have two separate processes here: escaping and tagging, right? Now, if we look at this method, we'll see, well, it actually generates a string using gsub. So gsub, this does the escaping, escapes it into a new string, and then we actually call html_safe on that, which allocates another object. The SafeBuffer in Rails is actually a subclass of String, so we're allocating two objects here, two strings. So we allocate two objects. Now, if we go back and look at the caller, we'll see, okay, we call ERB::Util.h on value, we assign that over to value, and then we immediately interpolate value into another string. Like, it's immediately interpolated into another string, and this is a real string, so now this return value is considered to be not HTML safe, even though that value was HTML safe, right? Okay, so if we're thinking about this, this is a total of three object allocations. So we had ERB::Util.h allocate two,
now we're allocating one more with the string at the bottom. We allocated a string at first, that was our escaping; we allocated a SafeBuffer when we called html_safe; and then we allocated another string when we returned at the very bottom. So if you think about it, we're taking that SafeBuffer and throwing it away. It gets interpolated into that string and just thrown away. And what is the point of the SafeBuffer if it's put back into a regular Ruby string? There's no point. So my idea was: we'll just remove the SafeBuffer. We don't need it, we don't need to do that test.

To fix this, I extracted a new method called unwrapped_html_escape, and all this thing does is the gsub. That's all it does. It returns an escaped string, but it does not tag it as safe; it just returns the escaped version. Then I refactored the original method to call the unwrapped version and then call html_safe on the result, so it's completely backwards compatible. Then I updated the callers to call unwrapped_html_escape, assign that to the value, and that value gets interpolated into the string. So now we only have two object allocations: the first escaped string and then that second interpolated one. Now, we only eliminated one object, but what was interesting is that this decreased allocations by 200 per request for that one particular scaffold. If you think about it, that seems like a lot of objects to reduce, but your mileage, yes, I realize we're not in the freedom country here, we measure by freedom units, mileage. Anyway, your mileage may vary on this, because it's really dependent on how many tags you're outputting in your HTML. If you're outputting a whole bunch of tags, then this optimization is going to be a huge win for you.

So the next thing I want to talk about is speeding up output using the Law of Demeter, and I think it's interesting that
people call it the Law of Demeter, because you can't get arrested for violating it, right? Nobody's going to come up to you and be like, oh, you violated the Law of Demeter, you're going to jail. Well, except in the US, that happens there. Anyway, the Law of Demeter basically says, I'm not sure exactly what the whole definition is, but the way I interpret it: it's not about the number of dots in your method, it's about the number of types that your method handles. So we're going to talk about how to use that to speed up output from Rails.

Let's take a look at an ERB template. This is an ERB template; we compile it down to some Ruby code, then evaluate that and cache it. This is what the compiled template looks like. Please read it carefully. Again, test at the end, please memorize. I'm just kidding. If we zoom in on part of this, we'll see what a very small chunk of that compiled ERB template looks like. We call safe_append= there with a string. This is an HTML literal, the literal that was in your ERB template, and we call safe_append= with that. Now if we go look at the implementation of output_buffer.safe_append=, this is what the method looks like. What's very interesting is that the ERB compiler guarantees that this method will never be called with a nil. It's always called with a string, always. So why are we doing a nil check? The ERB compiler guarantees we have a string; who cares about nils? We don't need to handle nils. So I just said, all right, we're not going to handle nils anymore. Goodbye. If you called this method with a nil, don't do that, call it with a string. So I removed that line, and we ended up with this. So then it's like, okay, we're calling super with value.to_s, but we know that the ERB, again,
earlier I said the ERB compiler guarantees we're being called with a string. So what's the point of calling to_s on this? We know it's a string. We know it's a string, so we remove that part. Now we're just passing the value straight through to the superclass, and we know that if we call super without any parameters, it passes the arguments along automatically for us, so we remove that too. But now we're just calling straight super, we're not doing anything in this method, which means we can completely remove the method. Now it just goes away, and this is our extremely high-tech code. Very, very high-tech.

So I'm not sure if this is a Law of Demeter thing or not. Before, that method used to handle two types: it handled nil and it handled strings, and maybe something else, anything that responded to to_s. But that method really only needs to handle strings; our ERB compiler guaranteed that we would only be getting strings. So I'm not sure if it was a Law of Demeter violation or not. I think it was probably defensive programming. I think somebody was writing that method and said, well, what if somebody passes us a nil, what if somebody passes us this or that? But it's very powerful for us to say: we know what this method is going to get, we know it will be this type, and we'll only code to that. If anybody starts passing in a nil, at that point we're probably going to say, hey, don't do that; or if there's an extremely good reason for them to pass a nil, then maybe we'll start handling it. But the first thing we should default to is: stop it, make sure you pass us a string. It's very powerful to say, I only handle strings. So if we go back and take the allocation_tracer gem
and measure the results, let's take a look at what they look like. This is a graph of the different types; each color represents a branch, and you can see we've actually dropped tons of strings, tons of arrays, and tons of hashes as we go down the side there. Again, this is essentially a rehash of our previous graph, which only gave us a total allocation count, but now we know what has actually been reduced per branch. This may not look impressive, but it's a 19% reduction since 4.0-stable and a 14% reduction since 4.1-stable.

So to conclude, and I think I'm hitting my time: the best thing you can do in your system is to eliminate objects. If you can eliminate object allocations, it'll greatly help out your performance. The next thing, I don't know who said this, but: no code is faster than no code. Really. I didn't even bother measuring the performance of that output buffer change, because, well, it's deleted. We're not even executing that code anymore, so what's the point? Obviously it's faster. First we were doing something; now we are doing nothing. That's weird to say, now we're doing nothing. Anyway, limit the types that your method handles. If you can limit the types, you'll end up with less code; the fewer types you have in your method, the less code you'll have, and we saw that in the previous example. Less code means faster code. But the important thing is to measure all this stuff. Measure, measure, measure everything. You can't know if you've improved unless you measure. So after you've done all this stuff, one more time, measure again, please.

So thank you so much for having me. I appreciate being here, this is an amazing conference, thank you. If we have time for questions, please; otherwise, thank you very much.
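As a small coda, the stdlib objspace extension (which the allocation_tracer gem builds on) is enough to sketch both ideas from the talk: the escape-without-tagging split, and "find where your objects are allocated." `SafeString`, `html_escape`, and `unwrapped_html_escape` here are hypothetical stand-ins, not the Rails source:

```ruby
require "cgi"
require "objspace"

# Hypothetical stand-in for ActiveSupport::SafeBuffer.
class SafeString < String; end

# Escape + tag, like ERB::Util.h: two allocations per call.
def html_escape(value)
  SafeString.new(CGI.escapeHTML(value.to_s))
end

# Escape only, like the extracted unwrapped_html_escape: one
# allocation, for callers that immediately interpolate the result
# into a plain string anyway.
def unwrapped_html_escape(value)
  CGI.escapeHTML(value.to_s)
end

# objspace records where each traced object was born -- the same
# question allocation_tracer answers, per call site.
ObjectSpace.trace_object_allocations do
  buf = html_escape("<b>")
  puts "allocated at #{ObjectSpace.allocation_sourcefile(buf)}:" \
       "#{ObjectSpace.allocation_sourceline(buf)}"
end
```

Both methods return the same escaped text; only the class (the tag) differs, which is exactly why the SafeBuffer can be skipped when the result is interpolated away.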