Can you hear me okay? Good? Rock on. I think we'll go ahead and get started. So yes, the hitchhiker's guide to Ruby GC. I, like Justin, speak very quickly. So if I get going, I get excited, because Ruby's exciting and garbage collection is exciting. If I start talking way too fast, you guys could just kind of like, I don't know, give me some kind of signal. Yeah, yeah, exactly, please. That would be awesome. So unlike Justin, however, I don't have any 10x insights for you. I do make silly jokes like Aaron does, but unlike Aaron, I am not gonna offer you life-changing insights into the Ruby virtual machine. My talk is, in fact, literally garbage, or about garbage. Speaking of, did you guys hear about this controversy about the show Dirty Jobs? Maybe it was just me. Basically, I guess that has something to do with the huge number of microservices that they're employing. You can, yeah, thank you. Somebody laughed at it. All right, good, good, good. Yeah, I also don't enjoy my puns nearly as much as Aaron does. So I'm gonna just put a line through that. I'm not gonna do that anymore. Anyway, all right, cool. So yes, I should say I'm learning a lot. That was a terrible joke. I'm learning a lot at this conference. I'm learning a lot at RubyConf. And I'm trying to apply it immediately. So Gary had a great talk about ideology and belief, and it's one of those things that I'm sort of trying to internalize. But it's hard, right? Like we don't know what we don't know. And even when we know it, sometimes we are unable to disarm ourselves. So this is my first time giving this talk. This is actually my second talk ever. I spoke earlier this year at RailsConf. So I'm going to try to confront my own imposterdom and also channel Aaron and not die. Also, you'll see that my nervousness manifests in obnoxious slide transitions. So there's that. Anyway, my name's Eric Weinstein. I work at a company called Conde Nast.
I don't know if you guys have heard of Conde. You're probably familiar with the various brands. So there's Wired, The New Yorker, Vogue, GQ, et cetera. I really like writing Ruby. I write JavaScript a lot at work, so it's nice to be able to write Ruby for some of those projects and on my own time as well. The Conde Nast Entertainment folks actually are all using Ruby on Rails. So we are hiring. I feel obligated to say that we're hiring. You might be intrigued by this strange hash, and why is it a hash and not an object, like a Person.new. And that is foreshadowing, so you will see. There's a lot of literary devices in this talk. Also, obligatory self-promotion: I wrote the Ruby curriculum on Codecademy a couple of years ago, and I also wrote this book called Ruby Wizardry. It's for kids ages eight to 12. There's a really great Birds of a Feather organized by Jay McGavren, who actually is doing a talk on method lookup later in this very room. So I highly encourage you to go to that talk as well. And if you're interested in learning Ruby, teaching Ruby, kids, Ruby, nouns, please come see me after the show. No Starch has been delightful; they're offering 40% off. So if anyone does want the book, use the code RubyConf 2015 and it'll be 40% off, I think for the week. Garbage collection. So there's a lot of mythology around GC and GC tuning and sort of how it all works. It's actually not as bad as it seems. And for those not familiar, "don't panic" is emblazoned in large friendly letters on the cover of the Hitchhiker's Guide to the Galaxy. This is as much for my benefit as yours. So, part zero, because this is a computer type conference and we should start at zero: Ruby is not slow. Okay, yes, sort of, sometimes, depending, it can be. But not for the reasons that you think. So people will say, okay, well, my Ruby program is slow.
It must be a database thing, or there's some superlinear, crazy, like four-deep nested loop and I'm doing something bananas in there that I shouldn't be doing. Or Ruby's an interpreted language and can't possibly be fast and we should all be using Go or Java or something. And really, what I found is that when Ruby programs slow down, these aren't always, or even often, the culprits. The object space and GC are actually an extremely rich part of the language, and not surprisingly, when you have a lot of richness, there's a lot of nuance, and performance bugs can hide in there. These things up on the screen are true: they do make some operations slower than Java or a compiled language. But the reason that we have all this, for better and worse, is because everything is an object. Well, not everything (blocks are not objects), but you get the idea. So let's take some time and talk about the history of garbage collection in Ruby. First, we're talking about MRI, C Ruby. We're not talking about Rubinius or JRuby, which have different garbage collectors. I'd wanted to include them in this talk, but unfortunately I only have so much time, and they ended up on the cutting room floor. I also don't know nearly as much about them, but if you are interested in Rubinius or JRuby or have some cool stuff to share, please come find me. I may make comparisons to these alternate implementations if the comparisons make sense, so they might kind of make guest appearances, but by and large the talk is about MRI. So let's talk about Ruby 1.8.7. 1.8.7 uses tracing, as opposed to, say, reference counting GC. Tracing means you're looking for reachable objects in the object graph. So if by traversing objects I can get to one, that object is reachable, and great, it should not be collected; there are still references to it, as demonstrated in the graph. And if an object is not reachable, it is eligible for collection.
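To make reachability concrete, here is a small sketch using the standard library's WeakRef. A caveat: CRuby's conservative stack scanning means the object may survive any particular GC run, so this shows "probably collected", not a guarantee.

```ruby
require "weakref"

# Hold an object only through a weak reference, so the collector is
# free to reclaim it once no strong references remain.
ref = WeakRef.new(Object.new)

GC.start # trigger a full mark and sweep

# If the object was reclaimed, the weak reference is dead.
# (Conservative stack scanning may keep it alive a bit longer,
# so treat this as "probably collected", not "guaranteed".)
puts ref.weakref_alive? ? "still reachable" : "collected"
```

Dereferencing a dead WeakRef raises WeakRef::RefError, which is a handy way to see that the object really is gone.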
1.8.7 used very simple mark and sweep. Mark and sweep was invented by John McCarthy, that's right, not Alexander Graham Link, for Lisp in 1958 or '59. And it's astounding that Ruby had gone so far with just simple mark and sweep garbage collection. Garbage collection works this way in mark and sweep: Ruby will allocate some amount of memory (we'll talk about the details in a little bit), and when there's no more free memory, Ruby says, okay, great, I'm gonna go through and look for all of the active objects that I can find. I'm gonna mark them as active. And then anything that's inactive, I can sweep onto a new list, and that's where I'll go when I need more stuff. So this is what Ruby is doing. This was a fun animation, I like the arrows moving. Anyway, the important thing to realize, when you're marking and sweeping and marking and sweeping and having a great time, is that everything stops. Because reachability in the object graph can change between when you are marking and when you are sweeping. It can change while the program is executing, and it does change while the program is executing. So in order to mark and sweep, the collector has to stop the world. So if you hear people talking about stop-the-world garbage collection, major garbage collection, this is what they're talking about. Everything stops, Ruby marks, sweeps, everything's great, and then execution continues. So in 1.9.3 we got some improvements. We got lazy mark and sweep. And this is an improvement because it reduces the pause time by sweeping in phases. It doesn't really do anything to the overall amount of time you spend collecting garbage, but if garbage collection takes half a second and you do it ten times, instead of taking one second and doing it five times, you don't get these unacceptable hang times while garbage collection is happening.
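The mark and sweep just described can be sketched as a toy in plain Ruby. To be clear, this is an illustration of the idea with made-up names, not MRI's implementation:

```ruby
# Toy mark-and-sweep over a heap modeled as { object id => referenced ids }.
# `roots` are the ids reachable from "the stack"; everything the trace
# cannot reach is garbage.
def mark_and_sweep(heap, roots)
  marked = {}
  work = roots.dup
  until work.empty?                    # mark phase: trace from the roots
    id = work.pop
    next if marked[id]
    marked[id] = true
    work.concat(heap.fetch(id, []))
  end
  heap.keys.reject { |id| marked[id] } # sweep phase: unmarked ids are free
end

heap  = { a: [:b], b: [], c: [:d], d: [:c] } # c and d reference each other
swept = mark_and_sweep(heap, [:a])
p swept  # => [:c, :d]
```

Note that :c and :d form a cycle, and tracing collects them anyway; that is one advantage tracing has over simple reference counting.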
And if you do have a highly eventful or IO-driven application, you have a web server, you have something with a GUI, and you can't just sit there for a couple seconds while you're collecting garbage, this is a big win. Unfortunately, 1.8.7 and 1.9.3 both subvert native copy-on-write, which we will also talk about. It will be on the quiz. That's a huge lie, there's no quiz. Just more information. Anyway, so this doesn't reduce the kind of overall pain of stopping the world, but it sort of amortizes that over more sweeps. Then in 2.0, we got bitmap marking. So we are no longer marking objects directly; rather, we have a bitmap that represents the state of objects and their eligibility for collection. This will be extremely important later, because this is what sort of allows us to use copy-on-write and have a great time. I'll be covering all these in depth pretty soon. I just kind of wanna show you where we're going on the journey that is this talk. So yes, if, like Aaron, I don't die during this presentation, I'm gonna be turning 30 in March, which is kind of like when I become old, I guess. I don't know, I guess each decade is a collection. I'm talking broadly about Ruby 2.1. So we have generational garbage collection, two generations, a young one and an old one. And if you survive three collections in 2.1, you become old. And this is based on what's called the weak generational hypothesis: objects die young. You have a lot of objects that appear and do something and then are gone. And so based on the fact that objects tend to die young, it makes sense to do kind of fast minor GC frequently and the slower stop-the-world collections less frequently. And if you're interested in the RGenGC algorithm, Koichi has a talk, I think from EuRuKo a few years ago, which is excellent, and there's a link at the end of this presentation. So that was sort of the history of Ruby 1.8.7, 1.9.3, and then 2.0.
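You can watch the generational split from Ruby itself via GC.stat. A sketch, assuming Ruby 2.1 or later (key names have drifted a bit between versions, so this guards for that):

```ruby
GC.start
stat = GC.stat

# In a typical process, minor (young-generation) collections vastly
# outnumber the stop-the-world major ones.
minor = stat[:minor_gc_count]
major = stat[:major_gc_count]
puts "GC runs: #{stat[:count]} (#{minor} minor, #{major} major)"

# Objects that survive enough minor GCs are promoted to the old
# generation and only examined again during major GC.
puts "old objects: #{stat[:old_objects] || stat[:old_object]}"
```

The total count is just the sum of the minor and major counts, which is a quick sanity check that your Ruby is generational.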
And 2.1. Now we're gonna talk a little bit more in depth about 2.2. And we'll talk a little bit more about copy-on-write, bitmap marking, all the stuff I mentioned. But here I'm just gonna focus quickly on two things, incremental and symbol GC, which we have in 2.2. And we'll also talk a little bit about garbage collection tuning, which is scary and amazing. So, symbol GC. Schneems has an excellent write-up on symbol GC. And if you've read that, or if you've been writing large Rails or Ruby applications, you're familiar with this notion of symbol denial of service, right? You have a bunch of symbols, and they never get collected; they're alive forever. And if you were to do something strange but not on the face of it silly, which is to, I don't know, let users generate something that will create a symbol, and someone creates thousands and thousands and thousands of symbols, you will eventually run out of memory. That, it turns out, is bad. And so what we have now is the ability to reclaim some symbols, not all of them, and allow those to be collected when there are no more references, when they're no longer needed. And the reason that I say some and not all is because Ruby internally will generate symbols for, say, method names, and you could easily DoS yourself again if you did something like, I guess, dynamically generate a lot of methods that each get their own symbol. That's unusual. Anyway, so we have that, which is nice. But the really cool thing is we have incremental major GC. Again, there's another algorithm here, and there are excellent blog posts and papers on it that I encourage you to look up. The cool thing here is the tri-color marking. Many of you are probably familiar with this. But the idea is that we have three types of objects.
We have white, which are unmarked; gray, which are marked and may refer to some white objects; and then black, which are marked but do not refer to any white objects. And here's how the algorithm broadly works. You say, okay, all of my objects are white. Everything that's obviously alive, like things on the stack, those are gray. And now I'm gonna pick one gray object, and I'm gonna keep doing this for all of them: I'm gonna visit every single reference and color it gray. And then when I'm done with this, I'm gonna go back to that first gray object and say, okay, this is now black, since it does not refer to any white objects. I'm gonna keep doing this and keep doing this until I only have black and white objects. And this tells me, okay, great, I have black objects, which are obviously alive, and white, which are obviously available for collection. So I'm gonna go through and sweep, and that's sort of how the algorithm works. This third part, the part where we go through and we do all the color changing and we change the original node to black, this is sort of what's going on in the incremental part of the algorithm. Like Ruby will do some execution, and then it will do some marking and sweeping, and then some execution, and so on. But there is a bug, and I learned this from Aaron. Emoji, so yes, I'm learning a lot at this conference. I was gonna make it dance around, but yeah, that was a bad idea. So anyway, what's the bug? The problem is, what if a white object appears? So we create a new object, and there are no gray objects with references to it, because there are only black and white objects. We are going to inadvertently reclaim this live object by mistake. It will be white when the algorithm runs; the next time Ruby finishes executing some code, the collector says, oh, this is a white object, it is available for me to collect, and our live object that we wanted is gone. This is a bad thing.
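That tri-color pass can also be sketched as a toy in Ruby (again an illustration with made-up names, not MRI's code): white is unvisited, gray is marked but not yet scanned, black is fully scanned.

```ruby
# heap maps object id => ids it references; roots start out gray.
def tricolor_mark(heap, roots)
  color = Hash.new(:white)
  gray  = roots.dup
  gray.each { |id| color[id] = :gray }
  until gray.empty?
    id = gray.pop
    heap.fetch(id, []).each do |ref|  # shade every referent gray...
      if color[ref] == :white
        color[ref] = :gray
        gray << ref
      end
    end
    color[id] = :black                # ...then this object is black
  end
  heap.keys.select { |id| color[id] == :white } # white = collectible
end

heap = { a: [:b], b: [:a], c: [] }
p tricolor_mark(heap, [:a])  # => [:c]
```

In this toy, nothing allocates while the pass runs, which is exactly the assumption the incremental collector cannot make; that gap is the bug the talk describes next.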
So what do we do? There's this cool thing called write barriers. They are super effective; you know it because it says so on the screen. Basically, for a lot of complicated reasons, there are insufficient write barriers in C Ruby, and that is the thing that you should ask Koichi about, or somebody who knows much more about write barriers than I do; I am not, unfortunately, a write barrier scientist. But basically, to sort of circumvent this, we now have write barrier protected and write barrier unprotected objects. The plan here is: because the pause time is relative to the number of living write barrier unprotected objects, and most objects, like user-defined ones, strings, arrays, hashes, things like that, are write barrier protected, the pause time is not gonna be that bad. So what is the actual fix? We're like, okay, this is not gonna be that tricky, we can apply this to kind of fix the problem. Essentially, after all the black and white objects are identified, right, but before we collect the white ones, we go and say, okay, I'm gonna scan from all of the unprotected black objects (we can actually guarantee that the ones that are protected are managed) and just kind of do another check. Like, has anything basically changed? Have new white objects appeared since I last checked? And by doing this, we kind of say, okay, great, now I'm not gonna have this bug where I inadvertently collect objects that I shouldn't. We're not gonna have a problem with losing things that I really didn't want to allow to be collected. So that is sort of a brief history of GC in Ruby from 1.8.7 to 2.2. We can now talk about GC tuning, and if there's one TLDR, if there's one thing you take away from this talk, it should be this slide. Do not do it. Or rather: for experts only, don't do it yet. And this is a paraphrasing of Michael Jackson, not the Michael Jackson, Michael A. Jackson, but I guess it's also A.
Michael Jackson. But basically he said: in terms of program optimization, any kind of optimization, don't do it. And if you really know what you're doing, okay, fine, but don't do it yet. So what happens if we decided we're just going to try this? Here are a few variables that you can modify in Ruby to affect how garbage collection is performed. These first two, the heap growth factor and heap growth max slots: lowering either of these will trigger more frequent young-object garbage collection. You're essentially saying, hey, when I get new heaps, which we'll talk about in a second. I think the default growth factor in 1.9 or 2.0, it may still be 1.8, but at one time it was 1.8. So you have 10,000 slots, then another 10,000, and then okay, it's gonna be 1.8 times that, I'm gonna get 18,000. And this says, as you need more memory, great, here's larger and larger chunks of memory to use. So you can lower this, and then you run out of memory faster and you are forced to collect; same thing with the growth max slots. The latter three, the malloc limit, the malloc limit max, and the malloc limit growth factor: lowering any of these three will tell Ruby that it's not allowed to allocate as much off-heap memory before running minor GC, which also triggers more frequent collections. And you can sort of do this with the old malloc limit, which is, I think, the same thing for the old generation, for major GC, but this is probably a bad idea, because as we've seen, if you start triggering a lot of major garbage collection, you're going to be sitting there collecting garbage instead of sending emails to your users or something of that nature. The important thing to realize is this is not a silver bullet. There are no silver bullets. There are a lot of werewolves. There are no silver bullets. It's a bad time. So this is also not as helpful as it may seem.
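For reference, these knobs are set as environment variables before the Ruby process boots (names as of Ruby 2.1+; the values below are purely illustrative, not recommendations):

```shell
# Heap growth: how many new slots to add, and how fast that grows.
export RUBY_GC_HEAP_GROWTH_FACTOR=1.25
export RUBY_GC_HEAP_GROWTH_MAX_SLOTS=300000

# Off-heap (malloc'd) memory allowed before a minor GC is triggered...
export RUBY_GC_MALLOC_LIMIT=16000000
export RUBY_GC_MALLOC_LIMIT_MAX=32000000
export RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR=1.4

# ...and the old-generation equivalent, which triggers major GC.
export RUBY_GC_OLDMALLOC_LIMIT=16000000

# Check that a limit took effect from inside the new process:
ruby -e 'puts GC.stat[:malloc_increase_bytes_limit]'
```

Ruby reads these once at startup, so they belong in the environment of whatever launches your server, not in the application code.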
There will be times where you say, yes, there's a lot of objects, there's a lot of stuff going on, we're churning, and garbage collection is a problem; we've identified this as kind of a thing that we should address. You can inadvertently make things worse for yourself by modifying these variables and not doing a lot of measuring and being very careful, right? This is also one of those myths. It's very attractive. It's the source of those weird old tricks and tips about XYZ. There's a lot of mythology around things like bit twiddling and garbage collection. And I advise you, if you do decide, yes, I'm gonna tune the garbage collector: this is something you just wanna measure a lot, beforehand and after, and make sure that what you're measuring is actually what you think you're measuring, because oftentimes that is not the case. Just kind of proceed with caution. So that said, we're gonna move on to, I think, the bulk of the talk, which is this case study, part three. A couple of years ago, I was working at a company and we had a Ruby application. It was interesting insofar as it was not a Rails application. It was a suite of seven Sinatra applications that had been sort of smashed together in interesting ways. And the way that this worked is, users would browse to the site, they would go to the web layer, and the Ruby application would make requests to a number of Java services. The Java services all spoke JSON. So the Java service would say, hey, I have some JSON for you, and Ruby would say, awesome, that's great. I love JSON. And it would kind of inflate it into hashes and then pass them around. So you would just have these huge hashes floating around, and people would just start picking stuff out of them or mutating them, because that's a thing you can do. And it was a bad time. And so someone got this really crazy idea and said, why don't we take some objects and orient our code around them?
And I suspect this is a fad, but we were like, all right, we'll try it, because this hash thing is not working out. And now you see why earlier there was a hash and not a Person that got newed up, because hashes are a thing. So anyway, we were talking about doing this, and we started doing it, and performance tanked. We were constantly running out of memory. We were sluggish. We were having all sorts of issues in New Relic. Everyone was having a hard time. The business was yelling at us. We were yelling at each other. And we couldn't figure out why, when we decided to write object-oriented Ruby, we shot ourselves in the foot. So let's talk about memory. And again, this is all in the context of 1.9.3. At the time we were running Ruby 1.9.3, and we were sort of trying to figure out what was going on, and this is sort of how the switch to 2.0 happened. So the memory model looks something like this. Ruby objects are 40-byte RVALUE structures, which Ruby allocates into heaps of 16 kilobytes each. This is for a 64-bit architecture; I suppose for 32-bit it's gonna be somewhere around 20 or 24 bytes, but we'll say for argument's sake that we're all using 64-bit machines. So you're gonna get somewhere on the order of 400 Ruby objects per heap, because you have about 16,000 bytes divided into 40-byte RVALUE structures. And Ruby will give you about 150 heaps to start, which is awesome. And you can kind of test this yourself. Here we're inside the Sinatra app, and you can see the object space: there are about 408 objects per heap, so in that 400 ballpark. And it turned out we were making a huge, this is a little bit hard to read, I apologize, a huge number of objects. So at the top you see bxr console, which is just my alias for bundle exec rake; the console task starts IRB with the application loaded. We say, okay, GC.start, great, and then GC.stat. And the interesting things in here are: we see the number of total GC runs.
We see the number of heaps with at least one used slot. The heap length is the total number of heaps, and the heap increment is how many more heaps to ask for, which I think in this version of Ruby was 1.8 times the previous number. But the most interesting thing is this, the heap live number: there were half a million objects just to start. We weren't even doing anything. We just started the web server and said, hey, how many objects are there? It's half a million. For comparison, an average Rails app would be somewhere closer to 400,000, but it's sort of a nonsense statistic, right? There's no average Rails app. We all have very different business needs. We have different versions of Rails. We're running different versions of Ruby, or different Ruby implementations, different web servers like Puma or Unicorn or Thin. But this seemed completely wrong for this Sinatra app, or this smashed-together Frankenstein's monster Sinatra app. And we weren't even using ActiveRecord. We were just talking to the services and getting responses. So let's talk a little bit about these RVALUEs. For small values, Ruby stores them directly in the object. And you might have heard via Pat Shaughnessy, who has an excellent book, Ruby Under a Microscope, that, and this is very interesting, it's again one of those weird old tips, right? It's like, you don't want strings that are over 23 characters. This is crazy. It doesn't matter. If you actually really do care about that level of performance and optimization, please come find me after the talk, because I have a tremendous amount to learn from you. That would be awesome. But essentially, what happens is: up to 23 characters, you have the value stored directly; at 24 and larger, you have a pointer to some other location in memory, and this is somewhat slower. So for large values, whether it's a string or an array or a hash, whatever, the value is actually a pointer.
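You can poke at this embedded-versus-pointer distinction with the objspace extension. One caveat for modern readers: Ruby 3.2's variable-width allocation relaxed the 23-character cutoff, so treat the exact boundary as version-dependent.

```ruby
require "objspace"

short = "a" * 23    # small enough to live inside the object's slot
long  = "a" * 5_000 # far too big: the slot holds a pointer to a malloc'd buffer

# memsize_of reports the slot plus any external buffer.
puts ObjectSpace.memsize_of(short) # 40 on the 1.9-2.x slot layout
puts ObjectSpace.memsize_of(long)  # thousands of bytes
```

The gap between the two numbers is the malloc'd buffer, which is exactly the off-heap memory the malloc limit tunables from earlier are counting.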
So the RVALUE itself has a flags field, which contains FL_MARK, which we'll talk about; the object contents, which is either an actual value or a pointer to one; and next, which is a pointer to the next RVALUE. And there's RString for strings, RHash for hashes, RObject for custom objects in user space, and these are all the same size. So we talked about heaps. In 1.9.3, we were seeing we had 10,000 slots, which was one heap. We got a second heap, which was another 10,000 slots. And every time we needed another heap, we were multiplying by 1.8. So the third time Ruby says, hey, I need more memory, great, we get another 18,000. So now we have 38,000 after the initial 20,000. And then if we have to ask again, we go from 38,000 to 106,400, because we were almost doubling with this 1.8 factor. And this can be tricky, and this is a case for tuning, and this is one thing that we did do. Because what if you say, I really, really need like 50,000 slots, and so I ask for more, and now I have 106,400, and I'm never actually going to use most of these? And you realize Ruby is not going to give this back to the operating system until the process exits. So asking for more memory than you need can be tricky, and there's a case to be made for doing the kind of tuning I warned you about in that regard. So let's talk more about mark and sweep. Ruby heaps comprise linked lists of RVALUEs; linked lists were in fact invented by Alexander Graham Link in 1836. I realize that if you didn't see Aaron's talk, this just sounds like crazy stuff, so I'm sorry about that. So when there are no more free RVALUEs, Ruby 1.9 will set FL_MARK on all active Ruby objects. This is the marking phase. And then it relinks all the inactive objects, kind of sweeps them into a single linked list, and this is called the free list. Here's the free list. You can kind of imagine it in your mind.
You have a linked list, some objects are marked, some are not marked, and the ones that are not marked are available to be reclaimed. So they get swept and used again. I just love that animation. Anyway, copy-on-write. We talked about this a little bit ago. The idea with copy-on-write is that when your production process, or any process, calls fork, the new child process shares all memory with the parent, and copies are only made when a write is forced, which is cool, this is great. You don't actually have to copy anything unless there's something different. And so the forked process and the parent, as long as you're not mutating stuff, can share memory; you have this sort of persistent data structure and it's really cool. The problem is, if you mark an object directly, writing to it to say, you are marked, then there's a child process somewhere where that object is not marked. So you have marked and not marked: you have these objects that proliferate that only differ in their eligibility for collection, and this is bad. You get a proliferation of objects, this kind of subversion of native copy-on-write. Now, it's not necessarily true in all places that copy-on-write is available, but Linux boxes in production and your Linux machine or your OS X machine do leverage copy-on-write; we'll assume that it's true everywhere, though there are some instances where it's not. But basically, now we have all these objects floating around, and for a web server like Unicorn that does concurrency via forking, the more Unicorns you have, the worse your problem gets, because you're constantly writing new objects. So this is the erudite portion of the presentation. You can tell your friends and colleagues and coworkers that you learned something from my talk, which is a quote from Shakespeare. This is from The Tempest, the idea being, okay, we don't need to do all these writes, we don't need to have all this, so let's not.
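The fork-sharing semantics are easy to see from Ruby itself. Measuring the actual page-level copy-on-write behavior requires OS tooling, so this sketch only shows the share-then-diverge part (it assumes a platform with fork, i.e. not Windows):

```ruby
# Parent builds some state; the child gets it "for free" via fork.
data = Array.new(100_000) { |i| i }

reader, writer = IO.pipe
pid = fork do
  reader.close
  data[0] = :mutated          # the write diverges the child's copy...
  writer.puts data[0]
  writer.close
  exit!(0)                    # skip at_exit handlers in the child
end

writer.close
from_child = reader.read.strip
Process.wait(pid)

puts from_child  # => "mutated"
puts data[0]     # => 0 -- ...while the parent's copy is untouched
```

Until that write happens, the kernel is serving both processes from the same physical pages, which is exactly what 1.9.3's in-object mark bits were defeating.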
And so this leads us to memory and bitmap marking, as mentioned before, in Ruby 2.0. Every heap now has a header that points to a bitmap, and, it's a little hard to see, but these ones and zeros correspond to marked or not marked in the actual heap slots. So you no longer have to mark an object itself as marked. You can just update the bitmap, and that bitmap keeps track of who is available for collection and who is not. In this bitmap, one is marked and zero is unmarked. And so Ruby 2.0, with this header, just kind of simplifies this and allows us to actually leverage copy-on-write: you see, marking no longer modifies objects, and we only have one copy of each object. If it doesn't change in terms of its eligibility for collection, we don't have to copy it among a bunch of different processes. We just have the one. And as mentioned, because Unicorn manages everything by forking, the more Unicorns you had, the worse the problem got. Now, this is not a bad thing. Unicorn is a popular Ruby web server. Forking is not intrinsically bad. Unicorn is not intrinsically bad. Ruby is certainly not intrinsically bad. Ruby 1.9.3's garbage collection algorithm was inadvertently doing a bad thing, which has been fixed, which is great. So, as mentioned, as N increases, the problem gets worse. So here are some numbers; hopefully they're somewhat illuminating. Loading the application in 1.9.3 invoked 122 GC runs and took about 4.4 seconds. And then, changing nothing else, simply loading the app in 2.0 invoked 66 GC runs and took about three seconds. We were looking at the GC stats and kind of instrumenting and carefully checking, and there was some variation in terms of gem versions and stuff like that; we tried to get all that jitter out, and it came to about 47% more time simply collecting garbage in this application on 1.9.3. So given this information, what did we do? I'm going much faster than I expected. Number one is we upgraded to Ruby 2.
So we could leverage copy-on-write. It turns out require is also faster; it's hard to see the line there. This was new at the time. It's not so new now, but there were cool features like Module#prepend, lazy enumerators, refinements, and there's a talk about refinements at this conference, which is awesome. Unfortunately, you are missing it, but there will be a recording, so that's good. And it was just about time: Ruby 1.9.3 reached end of life in February, and we were talking about this, I think, the previous December. So we had really a couple of months to get wrapped up. And yeah, that was the first thing that we did, just make sure we were on Ruby 2. So another takeaway is: update Ruby. This Christmas you're getting Ruby 2.3, and it will be awesome. Number two is we profiled. As I mentioned, we tried to find and eliminate sources of load. We spent a lot of time doing native GC profiling. These are the Ruby 2.0 docs; there are also docs for 2.1 and 2.2. I encourage you to take a look at GC and the methods that are available there. There's a lot of information about what Ruby is doing, much like how Aaron showed in his talk how you can pull apart and see all the instructions that YARV is executing. You can also go through and see what objects are being allocated, how long they're alive, how many slots you have, things of that nature, and it's really very cool. We also used the ruby-prof gem, which helped us enormously; I don't have enough good things to say about it, it was awesome. And the third thing, yes, I know I said don't do it, but we were reasonably sure we knew what we were doing, and it turned out, well, so far so good. Basically, we tuned the GC. So the three variables that we touched: there was the malloc limit, which controls when you perform a full GC run. The default was eight megabytes, which I think was chosen in 1995, and made a ton of sense for 1995, but didn't make sense for us.
So we tuned number one to get more memory and sort of punt on full stop-the-world GC, because we could do more stuff before we had to actually do that. Number two, the heap min slots, which controls the slots per heap and defaults to 10,000: we tuned it a little bit to get more objects per heap. I wanna say we only raised it to about 12,000, something of that nature. And then number three, the heap slots growth factor: as mentioned, the growth factor was 1.8, so we would ask for more and more and more. We looked at the graph and realized that we were kind of plateauing a little bit and we didn't really need 1.8; I think we selected 1.1 or 1.2. So again, lowering that in Ruby 2 will give you more frequent GC runs for the young generation, for the minor GC; you don't have the full stop-the-world stuff. So, some credits and some further reading. I'm deeply, deeply indebted to all these people. To Pat Shaughnessy, whose excellent book, Ruby Under a Microscope, as I said, was very helpful in pulling Ruby apart and seeing what was happening. Sam Saffron has written a great post, Demystifying the Ruby GC; it's a couple of years old, but still a great read. Alexander Dymo has a book called Ruby Performance Optimization, which goes much more in depth into the stuff I talked about at the beginning, where a lot of the issues that we see in our Ruby applications have to do with the richness of the object space and garbage collection, and are less related necessarily to, you know, bad algorithms or bad database queries, although those things do exist. And then again, Koichi's talk is just astounding; I encourage you to look that up as well. Blog posts on the Heroku Engineering blog, videos on YouTube, they're all excellent. So really, thanks so much for your time. I really appreciate your taking time from your RubyConf to come listen to me talk about stuff.
And again, I guess, obligatory shameless self-promotion: this is me, this is who I work for, that's the book that I wrote. And yeah, thanks so much.

I'm sorry, I don't understand the question. Oh, out-of-band GC, that's a great question. So that was something we were trying to figure out for this application. The question is: out-of-band GC, what do I think about it? The idea is you essentially tell the garbage collector, you're not in charge of when you run; I will tell you when you're allowed to run. This is useful if you have something like a web application where there are requests coming in, and someone could be in the middle of a very complicated request, and Ruby will say, hey, time out, I'm out of memory, I need to collect, I'll be right back, and then just do a major garbage collection right in the middle. You don't want that to happen. So the idea is you say, I'm only going to allow garbage collection in between requests. Maybe every 10 requests, every 100 requests, every 300 requests, that's when I'm going to do some garbage collection, and then back to the show.

For Unicorn, at the time, the tools available were Unicorn Worker Killer and, I think, another out-of-band GC library we were using that allowed us to keep memory in check by sniping big, bloated old Unicorn workers, because that would happen once we told GC not to run on its own anymore. And it was a lot of overhead doing that memory management: you essentially give up all of the nice automatic memory management that Ruby gives you, and it's a lot to fit in your head. So while I think there are certainly times where it makes a ton of sense and can be very, very useful, we found it was too hard for us to get right, which does not mean it's too hard for you and your team to get right.

How long should a major garbage collection take? That is an excellent question.
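The out-of-band idea described above can be sketched as a Rack middleware. `GC.disable`, `GC.enable`, and `GC.start` are real Ruby methods, but this `OutOfBandGC` class and its every-N-requests policy are my own illustration, not the specific library we used:

```ruby
# A rough sketch of out-of-band GC as a Rack middleware (illustrative,
# not the actual library from the talk).
class OutOfBandGC
  def initialize(app, interval: 10)
    @app = app
    @interval = interval   # collect once every N requests
    @requests = 0
  end

  def call(env)
    GC.disable                       # don't let GC interrupt the request
    response = @app.call(env)
    @requests += 1
    if (@requests % @interval).zero? # between requests, do the collection
      GC.enable
      GC.start
      GC.disable
    end
    response
  end
end
```

Note what the sketch makes obvious: GC spends most of its time disabled, so memory grows unchecked between collections, which is exactly why something like Unicorn Worker Killer ends up paired with this pattern to snipe workers that balloon anyway.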
I feel like I don't know the answer. For a major garbage collection, certainly you don't want to be spending several seconds collecting garbage. As Gary mentioned in his talk yesterday, this was sort of why he was not convinced that garbage collection was necessary: garbage collection in 2002 was taking several seconds, and that was unacceptable. With the newer stuff in 2.2 and the ability to do it incrementally, on the applications I've worked on it was less than a second; at more than a second we were kind of like, oh, this is taking a long time, what's actually going on? But I think it's going to depend on your application and your needs, yeah.

Where do you, oh, the book, yes. Like I said, the flashing buy-now button. You can get the book from your local bookseller. It's published by No Starch Press. It's available on Amazon; it's available at Barnes & Noble. I wish I had copies with me, but they're extremely heavy. It's something like 340 pages; it's surprisingly big. There are illustrations, so yeah, it's pretty cool. I'd be happy to show anybody a PDF of some sample pages on my machine later today. Like I said, I wish I had a copy, but I'm not good at marketing, it turns out.

Correct, yeah, so this is, yes, to be clear, that's exactly right. That's a good question. So the question is whether this code is just for No Starch, and that's correct; this is not something you can use with any bookseller. If you go to nostarch.com and you say, I want to buy Ruby Wizardry, there's a little box that says, do you have a promo code? And if you put in RubyConf 2015, it will be 40% off. Like I said, it's for kids ages eight to 12. Certainly as young as six or seven, if they're really motivated and their parents are willing to help, because it can be kind of tricky. And I suppose up to high school, but it's sort of like The Phantom Tollbooth.
There's a time to read it, and that time is maybe not when you're 16. But I've had adults tell me that they liked it, so if you want to learn Ruby, maybe it's helpful for you too. Rock on. Well, again, thanks so much.