Okay, so, my name is Evan Phoenix, and I am the Rubinius lead developer. I work for Engine Yard, working on Rubinius, and we're going to talk about cats, no, Rubinius today. But just so you know, just so that we're all on the same footing here, I am a chronic procrastinator, so I think I added about 20 slides just in the last two hours or so. But that's okay. What I want to happen here, and what I like in every talk, is as we progress into topics, if you have questions, raise your hand, and when I get to a stopping point we'll address those questions in the moment rather than waiting towards the end, so that hopefully we can get into a discussion throughout the presentation. Okay, so hopefully I will be tickled pink by this question. But who is unfamiliar with, or has never heard of, Rubinius before? All right, so two jerks and, all right, okay, that's awesome. We had one person raise their hand. That makes me very happy. So that means that I have two seconds to do the following slide, then. I was calibrating per person, right? So let's see, actually, let's see if this works. So what is Rubinius for you, sir? It's a re-implementation of Ruby 1.8. We strive to do as much in Ruby as we can, and we make extensive use of testing throughout the project. Okay, we're done with that, sweet. We are metacircular-ish, and currently the project is slow but getting faster. And I will explain what I mean by metacircular-ish as we go on. Obviously a big huge thanks to Engine Yard. Without them, the project would not be where it is. They have very graciously allowed me to essentially work on this full time, and, yeah, this would still be a hobby project otherwise. I probably would have abandoned it for, you know, a lolcats clone by now.
So now that that's done, we're all on the same footing: we all know what Rubinius is, and all of us but this gentleman are familiar with where it's been. So we're going to go over the state of things, where things are. I've given a lot of presentations about Rubinius before, about what it is and its features. Today, what I'm really going to be talking about is the thing that we've been working on for the last few months, which is the new virtual machine that we've been writing; I think we've been working on it for the last six months or so. We just recently switched over and made it our main branch, so that's the branch that everybody is using now, which is a big step, because it means we've gotten it back really close to the level of completeness that we were at before. And if you followed me, or you followed the discussion about this new branch, you know that one decision we made was to write it in C++. And I've gotten flack for that. That's fine, I would have given myself flack, because it is a sharp tool. But hopefully over the next 20 slides or so I'm going to talk about that decision, about why we made it and why I think it was actually a good decision. The three big things that it gives us that we didn't have before, partially because we're starting from scratch and taking what we had learned before and moving it forward, are these three things: type safety, organization, and architecture. The next few slides are about type safety, and then the remainder are about the last two topics. So what is type safety? Well, that's when it actually is a duck. It doesn't just sound like a duck or quack like a duck; you test that DNA and it's a duck. So what we're going to do for these next few slides is go over what we had before.
So the previous VM was implemented in C. We called it shotgun, just because, I don't know, I gave it a stupid name. And the following code is close to, but not exactly, what we would have had before. So this is an example of some code that you would see in the old VM. I'm going to let you read this over. Essentially, the idea was: given some string object, you just want to print out how big that string is. Now, this code is a minefield. It is rife with the opportunity to spoil an otherwise wonderful day. The first place is right here. What happens in this case if our self argument, which is passed in, was not a string and we asked it to get its size? Well, all of a sudden, string_get_size is expecting a string, and it's just going to segfault on you right there. Right out of the gate, spoiled your day. And oh no, there's another one in here too, crap. And so this is the scenario that we had in the old VM, where we didn't really have the ability to express the types of things. Everything was an OBJECT, and you just sort of hoped and prayed that the person knew what they were doing when they passed this in. It was programming by prayer instead of programming by contract. And there were ways to solve this, and we did solve it in many specific places. This isn't exactly how we solved it, but this is a very quick example of: okay, well, fine, just put guards in there, right? All right, we'll make sure that it's a string, and then you get the size back, and make sure that that size thing is a Fixnum, and then do the operation. Well, if you're going to have to go through the entire code base and add these guards, you're going to have more guards than actual code in the code base at that point, just because the ability to express what was going on is completely lost.
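The slides themselves aren't reproduced here, but the guard pattern looked roughly like this. This is a sketch only: the tag layout, OBJECT typedef, and function names are illustrative stand-ins, not shotgun's actual API.

```cpp
#include <cstdio>

// Minimal stand-ins for shotgun-style tagged objects. These names and
// the tag layout are illustrative, not shotgun's actual definitions.
enum ObjType { TYPE_STRING, TYPE_FIXNUM };

struct Obj {
  ObjType type;
  long value;  // for a fixnum, the integer; for a string, its length
};
typedef Obj* OBJECT;

// The unsafe accessor: it simply assumes obj really is a string.
static long string_get_size(OBJECT obj) { return obj->value; }

// The guarded version: check the tag before every access, or risk a
// segfault. Multiply this by every operation in the VM and the guards
// start to outnumber the actual code.
static bool print_string_size(OBJECT self) {
  if (self->type != TYPE_STRING) return false;  // the guard
  long size = string_get_size(self);
  std::printf("size: %ld\n", size);
  return true;
}
```

The point of the sketch is how little of the function body is the actual operation, and how nothing stops a caller from skipping the guard and calling string_get_size directly.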
And this was a really big problem, because it made the code incredibly hard to read, to really get a feel for what was going on and where you'd potentially have problems. So let's do the same thing, but in the new system. So this is C++ code, and it's considerably more straightforward, especially coming from an object-oriented background. Now we can see, with the tiniest primer of C++, that String is a subclass of Object, and it's got an instance variable basically called size. And there's an instance method for String called size that goes through and calls a to_int method on the size. Already we can see it's much more straightforward, with the ability to organize that code. We're not cluttering up the code with guards, and we're calling things as methods rather than as functions with long prefixed names. So already the code feels a lot more like something that we're used to. In addition, we get things like the ability to declare actual types. So we see here that we've actually said that size is going to be a Fixnum, and we can bank on that to let the compiler at the front end do a lot more of our work of making sure that things are correct. Now, there are times, though, where you're gonna have to take, say, an Object. You're gonna have to bring it in, and you're not gonna know what it is. In C, you would just do a cast. You'd say, this thing, I know that it needs to be a string, I'm just gonna cast it as such. So what we did is we added the ability to do safe casts to the system. So we can do something like this, where this get_an_object returns just Object. Again, these class names have the same hierarchy as they do in Ruby. And so what we did is we added these functions, as, try_as, and kind_of, that work just the way that you'd expect them to. As you can see here, with as I'm asking object to be a String.
And if it doesn't work out, it's gonna raise an exception. And try_as, if it doesn't work out, is gonna return null, so that you can actually use it as a conditional. And kind_of works the same way. So what we've done is we've taken this model where we had no types before, where everything was Object, and instead we've annotated everything with the ability to say: no, I need a String here, I need an Array here. And then, once we have that, we've allowed you to escape the type system with casts that are safe. So you want a string? Okay, great, say that you want something as a String. And if it doesn't work out, great, no problem, it'll just raise an exception. I should note, I didn't put this in here, that when you use these casts, like if you use as, it raises a C++ exception called TypeError. That actually gets translated into a Ruby exception, a Ruby TypeError. So if you're going along, and your VM code happens to do something where, say, that get_an_object returned an Array, and the code didn't really work out, you would get a TypeError back in Ruby land that would say: by the way, the VM couldn't figure out what was going on. In the previous VM, you'd actually get a crash, which would make it significantly harder to debug. So the big thing that hopefully you guys can see here is that now we have the ability to have the actual class hierarchy inside the VM mirror the class hierarchy that's in Ruby. So we've drastically reduced the cognitive dissonance between the VM and the kernel. And that has proved to be a really big benefit, because now, as you're looking at the VM code, it's much easier to understand what's going on, because you're already familiar with how those things would work in Ruby land, and we've just coded them to work the same inside the VM.
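A minimal sketch of how as and try_as behave, using C++ RTTI for brevity. Rubinius checks its own object type tags rather than using dynamic_cast, but the observable behavior is the same: as raises, try_as returns null.

```cpp
#include <stdexcept>

// Sketch of the Object/String/Array hierarchy and the safe casts.
struct Object { virtual ~Object() {} };
struct String : Object {
  int size;
  explicit String(int s) : size(s) {}
};
struct Array : Object {};

// The C++ exception; the VM translates this into a Ruby TypeError.
struct TypeError : std::runtime_error {
  TypeError() : std::runtime_error("type mismatch") {}
};

template <class T>
T* as(Object* obj) {
  T* t = dynamic_cast<T*>(obj);
  if (!t) throw TypeError();  // surfaces in Ruby land as a TypeError
  return t;
}

template <class T>
T* try_as(Object* obj) {
  return dynamic_cast<T*>(obj);  // null on mismatch, usable as a conditional
}
```

So a caller writes as&lt;String&gt;(obj) when a mismatch should be an error, and if (String* s = try_as&lt;String&gt;(obj)) when it wants to branch on the type.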
Obviously not all of the methods, but the methods that we do implement there. So that's type safety, and that has spared us a lot of grief. Early on in the project, there was a lot of debugging, primarily by myself, that had to do with: oh no, I didn't expect this to happen, and it crashes because I'm trying to use, like I said, a number as a string or something like that. And now we have this very nice way of both annotating our expectations for the code and making sure that when those go wrong, we get very graceful failures. Yes? Hmm, it's a good question. So the question was: does this aspect of type safety in the VM indicate it would be nice to have some kind of type safety in the entire language of Ruby? I actually don't think so, mainly because I consider the VM really the heavy-lifting part of the system, and I really like the fact that it is going to do all of those kind of yucky bits having to do with really needing to know the types. Not needing to know the types is a luxury, and that's really why we're trying to build Ruby, why we're trying to build an abstraction and infrastructure to get to that point, so that you don't have to have the type safety parts. So I don't necessarily think so. I mean, obviously the VM needs the type safety, because at some point it needs to not just act like a string, it needs to be a string, and that's kind of where the rubber meets the road. But no, I guess I don't. So, in the back. Doesn't that apply to some future world? You're going to have to speak up. That applies to some future world where you might write the VM in Ruby, to make it work, and then you want to have the type safety for Ruby? Was that a question or was that a statement? I can't tell from here.
So it seems like it goes toward making the VM a small Ruby over time. Right. So the question was: how does this play in with the idea of eventually making more of the VM written in Ruby? And this is why I called it metacircular-ish. Obviously we try to write a lot in Ruby, but there are times, you know, like I'm basically explaining the parts that we didn't write in Ruby right now. And so as we get closer down the road to this idea of: what if we're writing Ruby code that's in some way being translated to, you know, C++ or C or whatever it might be, what would happen then? And my answer would be: who the fuck knows? We're not at that juncture yet, so I'm not going to solve that problem until we can even fathom it. Yes? Sure, if everybody here... so you could possibly translate it? Sure. We're not, believe me, we're not doing anything extra with the type safety system. What we're doing is taking Ruby's type safety and bringing it down to the C++ level. Raising a TypeError is Ruby's type safety: it doesn't segfault, it raises an error. And that's all that we're doing, we're just bringing that to the C++ level. I think that's a better way. Sure. Yeah, so what Arrow was saying, and that's a good way to explain it, is that the idea is to say: well, in Ruby, when you get something wrong, you get a TypeError. So let's extend that same idea to the VM. Ruby already has methods that do not duck type. There are a number of String methods that say, I need the argument to be a String, and if it's not a String, I will raise a TypeError. And in that same way, we've extended that concept all the way through the VM now.
So at any point in the actual VM code where you're manipulating something, if the types don't match up, you now get this TypeError, so it's extended throughout the system. So let's move on. The next thing I want to talk about here is exporting methods. A big part of the VM is not just the ability to run the code; it's the exporting of what we call these primitive operations, primitives. There are sort of two spheres of primitives. The first is things that you just can't do in Ruby. A good example is Fixnum addition. You want to add three to four; you have to at some point drop down to a lower layer to actually perform the integer addition. So that's a place where you have to have a primitive. The other realm of primitives is things we've isolated because we want to keep them fast: typically tight loops, a lot of times copying data. And so we'll implement those as primitives. So there are these two levels where we want the VM to provide us with the ability to run specific code. We kind of consider it named code, and I'll get to why that is in a sec. So we'll go back to size. And this time we're just doing size. This is what it looks like inside the VM now to create a primitive. Before, there was this long complicated process of adding things to lists and updating 12 files. Those days are gone. Now this is all you need, and we have a process in our build that will go through and find all these markers and automatically hook all this up on the VM side. And so now, on the Ruby side, we just do something like this. As you can see, the syntax for it essentially matches directly. So now we've exported this method in the VM under a name, string_size, and then we've attached it inside Ruby.
So now we have created this very simple bridge between the two sides of things. So to pull it back in, what we've also got is type safety on top of that. So we've got size; let's add another primitive, this time string_add. Okay, great, same thing. This time it takes an argument, no big deal. So we hook that up in Ruby, again taking an argument. But one thing, like I said before: what happens when we add this type safety into it? We say, oh, hey, we've got a specific type here. Whereas in the old VM this would have just been OBJECT other, now we've actually said: no, no, no, I need a String here. Don't pass me anything else. Don't pass me a number, don't pass me an array. I need absolutely a String here. And so what happens when you do this? Well, it behaves exactly as you would expect it to, which is you get a nice TypeError. So now the primitives are able to declare their types, and the system will enforce all of that for you. We've eliminated essentially the burden of having to maintain this sort of master list of primitives and do all your setup; before, there was a lot of setup code that had to be done in a primitive, where you had to take things off the stack and make sure of their types. That's all gone. Now that is basically all handled for you directly. So, any questions on that section before we move on to the next one? Josh? So, with this, you write Ruby.primitive and then you call it string_size. And so you flatten out the space of the class hierarchy and the methods on them into just, you know, a bunch of scalars or symbols. Yes.
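What the enforcement amounts to can be sketched like this. The real stubs are generated by Rubinius's build step from the primitive markers; the names here (string_add_stub and friends) are made up for illustration, and RTTI stands in for the VM's own type checks.

```cpp
#include <stdexcept>
#include <string>

// Illustrative object hierarchy.
struct Object { virtual ~Object() {} };

struct String : Object {
  std::string data;
  explicit String(std::string d) : data(d) {}
  // The primitive itself is typed all the way through: it can only
  // ever see String arguments.
  String* add(String* other) { return new String(data + other->data); }
};

struct Fixnum : Object { long n; };

struct TypeError : std::runtime_error {
  TypeError() : std::runtime_error("primitive expected a String") {}
};

// What the VM invokes for the exported primitive: it enforces the
// declared types before the primitive body ever runs.
Object* string_add_stub(Object* self, Object* arg) {
  String* recv  = dynamic_cast<String*>(self);
  String* other = dynamic_cast<String*>(arg);
  if (!recv || !other) throw TypeError();  // becomes a Ruby TypeError
  return recv->add(other);
}
```

The design choice is that the type check lives in one generated stub per primitive, rather than hand-written guards scattered through every primitive body.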
So if you start in one world where you have a hierarchy, which has some pretty structure to it, with a bunch of methods in places, and you're going into another space that also has a hierarchy, but to transition from one to the other it's a completely flat namespace. Did you think about how to retain the hierarchy across that coupling? The flattening, I mean, is that something you desired? Yeah, so I'll repeat the question, it went a little fast. The question was: it seems funny, I've explained a lot about how the VM and Ruby land both share these nice hierarchies, and essentially, to export an operation from the VM to Ruby, we are flattening it into this one namespace with symbols, with names, that kind of thing. And have we thought about how to integrate that? Well, the team and I talked about this a lot. When I was first working on this system of automatically detecting primitives and exporting them, there was initially this discussion of, well, what if instead... I think on maybe day two, I actually had this set up so that you just did Ruby.primitive and you didn't even put a name there. And what it did was generate some code so that when the VM booted, it would actually go through and add a method called add that did X, Y, and Z to actually call this thing. And the reason that we decided not to do that, and I'll explain that first, is that it essentially couples the VM directly to the names of... see, how do I explain this? Okay, the boot process is involved in this decision. When the VM first boots up, we actually have our kernel compiled into .rbc files, which I'll talk about here in a little bit. And they're loaded one at a time, and they're allowed to basically run as normal Ruby code that goes through and opens classes and adds methods and sets up aliases.
There's a restricted subset of things that they can do, because there are not very many methods available, but largely they're just running as code. And if we were to want to add in these things beforehand, there was no real place to do that. Our boot process is essentially: run all these pre-compiled Ruby files, and then you're done. The VM exits the boot process when it's done with that. So it would have broken up the boot process into something that we hadn't done before. So this continued to make sense; having them flattened and exported like this made sense. I think if we were to export the hierarchy, there would be a lot more magic, and the worry was that we might obscure it even more. So I guess that doesn't probably answer the question, but that's the best I can do. So, on to the next thing: method dispatch. Obviously we're going through a lot of low-level stuff here. I've talked about this a couple of times, and the reason I bring it up yet again is because this is the core of a Ruby implementation: given a receiver and a method name, how are you going to figure out what method to run? Well, we've got essentially three mechanisms right now. We've got what we call hierarchy lookup, which is typically how people explain how method lookup works. To use Dave Thomas's analogy, you start at the receiver and you go to the right for the class, and then you go up for the superclasses. So you're going to look through basically a number of hash tables, what we internally call lookup tables, to find a method, an executable object. Well, that's extremely slow.
And, you know, even Alan Kay knew that when he first did Smalltalk; right off the bat they added a global cache to basically accelerate this, because a lot of times you end up calling the same thing. So we added the global cache, and then we've also added an inline cache. I haven't really detailed those here, and I could talk about them at the end if you guys want me to. But they essentially allow us to cache these three pieces of information: the receiver, the method name, and the eventual destination method. So that's the first step in calling a method. Now, number two is going to be, obviously, the execution of it. The way the system is set up now, every method in the system, anything that is capable of being executed by the VM, is a subclass of a class called Executable. And it provides this function pointer that tells you what to do. So the VM, after it's looked it up, is basically just gonna call this thing. And what that lets us do is have every primitive in the system be one of these functions, so that when you go to run a method, all you're gonna do is basically go: okay, I've looked it up, go execute this thing, and right off the bat the destination code that you're gonna be running is one of those primitives, like size, like we saw earlier, or add. So then when you wanna run a Ruby method, we use this sort of specialized executor to actually make it faster. And I can go over this more later too. The idea is that in Ruby, the distribution of methods that take arguments actually falls pretty heavily on the simple side. You're gonna have a lot of methods that take one argument. You're gonna have a lot that take none, take two, take three. You're actually gonna have fewer that, say, use a splat, and fewer that use optionals.
It's not zero, obviously, and with the advent of DSLs and stuff we see that distribution go up, but it's still pretty good. So what we do is we basically accelerate those cases, those really simple cases. We hook them up inside the system so that the code to be executed right off the bat is going to reflect the argument counts of those methods, so that we can make it nice and fast. And obviously we have a slower fallback case for all the more exotic argument patterns. So to look back on this: this is the critical path of the whole VM, because method dispatch is the critical path. We're gonna populate a message, and then we're gonna call a resolver. What I've called a trampoline is just a function in here that happens to be set up so that the compiler, when it compiles out to machine code, can actually do it as a tail call, which makes it even faster. And that resolver is one of those three kinds of resolvers we saw: hierarchy lookup, the global cache, or an inline cache, and it's gonna try and fill in the message as best it can. Well, I take that back: it's gonna fill in the message, period. If it doesn't fill it in, then the system crashes. And then we're gonna call out to the executor. So the idea here is, and I've gone through this path a lot because the performance of these steps is so critical, especially in Rubinius: with most of our methods actually implemented in Ruby, we hit this path vastly more than any other implementation. To pick a good example, say String#%, which is sort of like printf, right? In 1.8 that actually uses a printf, whereas ours uses a really large set of Ruby code to parse what you needed, using normal string methods, and to dispatch to them.
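The dispatch path described above can be sketched as a toy model: per-class lookup tables, a global cache in front of the hierarchy walk, and an Executable whose function pointer is the thing finally invoked. Field names, the cache key, and the use of std::map are simplified stand-ins for Rubinius's internals, not the actual implementation.

```cpp
#include <map>
#include <string>
#include <utility>

struct VM;  // machine state, unused in this sketch
struct Object { virtual ~Object() {} };

struct Executable : Object {
  // Every runnable thing carries its executor; primitives plug straight in,
  // and Ruby methods get a specialized executor per argument count.
  Object* (*execute)(VM*, Executable*, Object* recv);
};

struct Class {
  Class* superclass = nullptr;
  std::map<std::string, Executable*> method_table;  // the "lookup table"
};

// Slow path: walk the receiver's class, then its superclasses.
Executable* hierarchy_lookup(Class* cls, const std::string& name) {
  for (Class* c = cls; c; c = c->superclass) {
    auto it = c->method_table.find(name);
    if (it != c->method_table.end()) return it->second;
  }
  return nullptr;
}

// Fast path: global cache keyed on (class, method name).
std::map<std::pair<Class*, std::string>, Executable*> global_cache;

Executable* resolve(Class* cls, const std::string& name) {
  auto key = std::make_pair(cls, name);
  auto it = global_cache.find(key);
  if (it != global_cache.end()) return it->second;  // cache hit
  Executable* exec = hierarchy_lookup(cls, name);
  if (exec) global_cache[key] = exec;               // fill for next time
  return exec;
}
```

Once resolve returns an Executable, running the method is a single indirect call through its execute pointer, which is why primitives can be dispatched to with no extra ceremony.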
So this is a really critical part, and that's why I'm giving you guys the intro on it. Another critical part is: once you've actually executed that Ruby code, what does the framing, and by framing I mean stack frames, look like? Well, we have these MethodContext objects that store information about methods that are run, and they're chained together with references. So they're what Lisp originally called a spaghetti stack: you create a method context object, and it essentially just points to the one who sent to it. It's not organized like a C stack; it doesn't have anything implied by memory addresses or anything like that. It just basically says: this is my sender, when you're done, go back to him. And this setup is very easy to implement right off the bat, because they're just gonna be garbage-collected objects that the VM knows how to manipulate and read. But obviously, if we're running a lot of Ruby code, the speed at which these things can be created is critical. When we first implemented this, and this is true of all the Smalltalks too, when you first implement a really naive version of this, you find that something like 80% of the objects allocated in the system are just method contexts, because everything you do is gonna be creating one of these objects. So you really need to figure out the fastest possible way of doing this. A huge amount of the research in VM design is actually related to this singular concept: how do I create the space to represent my context information as quickly as possible, and how do I get rid of it as quickly as possible? So I'm gonna explain how we do that in Rubinius. What we do is we put them in, for lack of a better word, a ghetto. There's a special area of memory that only contexts live in. That's the only thing that can live in this very special section of memory.
And if you observe what happens, you find that the execution of Ruby code, even though we've not restricted ourselves to calling ourselves on a stack, actually follows a stack for the most part. You're gonna call a method, it's gonna do something, you're gonna return; you're gonna call a method, do something, it's gonna return. So if you were to model that, you'd see that it just sort of bounces up and down. And so we can exploit that pattern to make allocation and de-allocation very quick. So we get a section of memory that looks something like this. We're just gonna start allocating from the top, and we're gonna go down. And this current is where the next one will go. But Rubinius also has the ability, both explicitly and implicitly, to reference a context. A good example of where it's implied is if you create a block. If you create a block, you're implying that the context that that block was created in needs to be saved, right? You need all the locals that were there; you need to know what the value of self was in that case. So you need to retain that. Even if someone returns, that needs to be remembered, because those blocks will continue to live on. So what we do is we have this context bottom that says: what is the lowest, on this graph, context that's actually been referenced by someone, typically someone who's created a block? And we save all those, and then we continue to kind of fill in from the bottom. So what you get, and this would have been a great animation, is essentially a stack that starts at zero and starts to fill, and then it starts to come back. It starts to fill, and as things get referenced, this bottom starts to move down. The top stays here. So essentially this keeps moving down, and we bounce up and down until it starts to get all the way full.
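The allocation scheme above can be sketched as a bump-pointer region with a referenced "bottom". The talk's diagram grows downward; this sketch grows upward from offset zero, but the idea is the same. Field layout, sizes, and the lack of GC integration are all illustrative simplifications.

```cpp
#include <cstddef>

// Sketch of the context "ghetto": a region only contexts live in, where
// allocation is just bumping a pointer by a fixed offset.
struct MethodContext {
  MethodContext* sender;  // spaghetti-stack link, not a C stack address
};

struct ContextRegion {
  static const std::size_t kSize = 4096;
  char memory[kSize];
  std::size_t current = 0;  // where the next context goes
  std::size_t bottom  = 0;  // contexts below this are referenced and kept

  // Allocation: add a fixed offset to a pointer, like a C stack frame.
  MethodContext* allocate(MethodContext* sender) {
    if (current + sizeof(MethodContext) > kSize) return nullptr;  // GC time
    MethodContext* ctx = reinterpret_cast<MethodContext*>(memory + current);
    ctx->sender = sender;
    current += sizeof(MethodContext);
    return ctx;
  }

  // On method return, pop back down, but never past the referenced bottom.
  void deallocate(MethodContext* ctx) {
    std::size_t off =
        static_cast<std::size_t>(reinterpret_cast<char*>(ctx) - memory);
    if (off >= bottom) current = off;
  }

  // A block captured this context: it and everything under it must survive.
  void reference(MethodContext* ctx) {
    std::size_t end =
        static_cast<std::size_t>(reinterpret_cast<char*>(ctx) - memory) +
        sizeof(MethodContext);
    if (end > bottom) bottom = end;
  }
};
```

In the common case, call and return are just the pointer bump and its inverse; only when a context is referenced does the bottom move and the region slowly fill until a collection is needed.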
And then we basically just go off and do a garbage collection. And this allows us to essentially turn context creation into the same thing that would be used to create a stack frame in C, which is just adding a fixed offset to a pointer. You're just gonna say: okay, I know that this thing needs 112 bytes, or whatever it might be; I'll just add that fixed offset and then go on to the next thing. And so that frees our garbage collector from having to manage probably 90% of the actual method contexts that get created. This is another quick little diagram that kind of shows the chain. These aren't contiguous in memory; they're just sender to sender to sender. What you can see is you get this sort of mixed mode of things. We've got normal method contexts for running a method. These block contexts are created to run a block. Native method contexts are created when you're actually calling out to C code, which we're gonna talk about next. So this ends up being fairly linear, and we exploit that to make it fast. So we've got these native method contexts in there; what are those? Let's talk about those for a second. Early on in the project, the decision was made that, well, we're writing this in C; it would be a real shame if we couldn't run all of those extensions that are out there today. And so we decided we really should try as much as possible to be able to run all those extensions. So again, we're talking about this little native method context here. And the reason is that C, especially for a lot of things, continues to be the standard. You see a lot of really important pieces of functionality in the Ruby community implemented as extensions. There are currently 138 extension gems, and these are a few of the quickies. If you could imagine not having access to these gems, that's what we're talking about. That would suck. And so we've got this great body of work that we wanna make available.
So what we've done is make a process where all you need to do is recompile that extension in the presence of Rubinius, essentially, and then we can use it. Obviously this is not complete, but we're working towards that. In order to make that happen with the way that the rest of the system executes, we have a few little pieces. I didn't talk about this, but the garbage collector inside Rubinius is a generational garbage collector. Koichi talked about this a little bit as well. In order to actually implement one, you need to know every time an object is written to, and you have to allow, well, you don't have to, but it's pretty important that you allow, for the ability to move an object. So if an object was over here, you'd really like the ability to compact memory and move objects around. And C code doesn't really like that. So what we've done is add an abstraction layer on top so the extensions can run. That's where we get this indirection from. So that's a little tricky. We use a stack technique that allows us to run code in a different section of memory, so that we can run the extensions off to the side, by themselves, segregated. They can do whatever work they need, they can contain any kind of context information, they can use normal C locals and that kind of thing, and they can call back into Ruby code, and the Ruby code can return back to them, and it doesn't necessarily even know that things have happened. So that is a bit tricky. And one thing that we have is the ability to recover from a segfault. Obviously you have to be really careful when you recover from a segfault, because you're not really sure what segfaulted, but it's actually pretty cool. This is kind of an older example, but this is an example of what I mean.
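The indirection layer can be sketched as a handle table: the extension holds a stable handle while the collector is free to move the underlying object and update the table. The real Rubinius handles carry more machinery than this; the sketch shows only the core idea, and all names in it are hypothetical.

```cpp
#include <cstddef>
#include <vector>

// Illustrative heap object.
struct Object { int value; };

struct HandleTable {
  std::vector<Object*> slots;

  // What an extension receives instead of a raw pointer: an index that
  // stays valid no matter where the object moves.
  std::size_t make_handle(Object* obj) {
    slots.push_back(obj);
    return slots.size() - 1;
  }

  // Every access from extension code goes through this indirection.
  Object* deref(std::size_t handle) { return slots[handle]; }

  // Called after the GC compacts: only the table needs updating; the
  // handles an extension is holding never change.
  void object_moved(Object* from, Object* to) {
    for (std::size_t i = 0; i < slots.size(); ++i)
      if (slots[i] == from) slots[i] = to;
  }
};
```

The cost is one extra pointer hop on every access from extension code; the payoff is that the generational collector keeps its freedom to compact memory.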
We're able to do stuff like this: the code up at the top is going to try to read from address four, which you're very unlikely to be able to read, and you get this nice memory segmentation error inside Rubyland. When you're debugging your extension and you're not sure what's going on or why the interpreter just crashed, having these errors that say, oh, by the way, your extension test.c crashed here, is actually really nice. One thing that we've thought about eventually providing is the ability to have backtraces inside this code, but that's not there yet. I only have a few minutes left, so I'm going to skip this section and talk a little bit about performance, because it got brought up during Koichi's talk specifically about Rubinius, so thank you, Koichi, for your kind words. This is the reality of the situation. We're doing really well on the micro benchmarks, things like garbage collection and method dispatch. I'm extremely pleased with the way those have been going. We've been spending time specifically on method dispatch; that's the section I talked about before. We're doing okay on the macro benchmarks, sort of hit or miss; it depends on what is being benchmarked. A lot of the macro benchmarks end up calling some string method that we happened to implement in a very naive way, and all of a sudden our performance goes through the floor. So it's a little bit hit or miss. And on the mega benchmarks, if you will, and I would consider a mega benchmark something like running a large application, we're pretty poor. And I'm going to tell you why in my solutions part. So how are we going to make it fast?
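Before moving on to performance: the segfault-recovery trick described a moment ago can be sketched with POSIX signals. This is a minimal, POSIX-only illustration with invented names (`run_guarded`, `on_segv`), not the Rubinius implementation: install a SIGSEGV handler, set a checkpoint before running the extension, and jump back to it on a fault so the VM can report an error instead of dying. (Recovering this way is only safe under careful assumptions about what state the fault may have corrupted, which is exactly the caveat from the talk.)

```cpp
#include <csetjmp>
#include <csignal>

// Checkpoint set just before entering extension code.
static sigjmp_buf checkpoint;

static void on_segv(int) {
  // Unwind straight back to the guarded call site. sigsetjmp saved the
  // signal mask, so SIGSEGV is unblocked again after the jump.
  siglongjmp(checkpoint, 1);
}

// Returns true if fn completed, false if it segfaulted. In the VM, the
// false path would surface as a memory error in Rubyland, naming the
// extension that crashed.
bool run_guarded(void (*fn)()) {
  struct sigaction sa = {};
  sa.sa_handler = on_segv;
  sigaction(SIGSEGV, &sa, nullptr);
  if (sigsetjmp(checkpoint, 1) == 0) {
    fn();         // run the extension body
    return true;  // completed normally
  }
  return false;   // faulted; the handler jumped us back here
}
```

With this in place, an extension that dereferences address four, like the slide's example, produces a catchable failure rather than a dead interpreter.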
As Koichi-san said, it's a slow kind of curve as we move up, and the idea behind the project has always been this: we have put all of this pain on the developers of the project, because in order to make the project even decently fast, we have to make all of the Ruby code that we use as the kernel fast. That means if we can make that code the speed that 1.8 was, then all of a sudden you've accelerated every piece of Ruby code that you might run. And I think that's the big benefit there. So we're exploring a bunch of different options, and this is just a couple. We've been doing a lot of recent research into how we can use LLVM. Again, Koichi talked about this in the context of YARV a little bit. It gives us a high-level abstraction for generating machine code for Ruby methods, which is obviously going to be higher performance than using an interpreter. We're looking into the ability to do that on the fly, so that as methods are added in, we're generating machine code, and also, in recent days, the ability to do this upfront. So perhaps the kernel is all pre-compiled into machine code, and that gives us a temporary leg up while we make the rest of the code fast. The other option I mentioned earlier is this idea of improving algorithmic performance, or efficiency. I think that is actually going to be really big. The project is largely focused on compatibility right now; we're using the RubySpec project to really get us to a level of 1.8 compatibility. Once we're there, we're going to go back and say, okay, we're complete, how can we make this method faster? And work on that as an algorithmic approach: essentially refactoring and figuring out where the things are that we're spending the most time on, and how we can make those fast.
And maybe that's the time where we figure out, okay, we're doing some big-O n-to-the-tenth-power craziness here, and we can reduce it. Or maybe we decide, okay, we really need the ability to copy data really quickly, so let's add that as a primitive. A lot of that is going to come out of continuing to push on the boundaries of what we have and how we've structured that code. And I want to end with a good quote for any project that goes on for a long period of time and doesn't necessarily seem to make a lot of progress, even though you feel like it is.