Okay, I felt like "Don't Stop Believin'" might be a good theme for the talk. So, the title of this talk has gone through a number of changes. Originally I was going to do a skit to start off, but then I heard the time slots were only 45 minutes, so that got scrapped. Anyway, I decided to call it "Rubinius: The Fourth Year," or, if you will, the first senior year. It's pretty amazing to think that this is the fourth year I've given a talk about Rubinius. I can't say I would have ever guessed this is the direction things would go. But like all seniors, let's do a little recap of the glory of the last four years. Year one, freshman year — aw man, you can't see that at all. Bummer. All right, so we were just a little sapling in the forest then, and the whole project was a toy; it was just kind of a fun idea, and that's really the extent of what it was. Year two, sophomore year. Now we really started going. But like all sophomores, we thought we were the top of the heap — that's just how it goes with sophomores, right? If anyone remembers the title of my talk that year, it was the "Rubinius 1.0" talk. So we can see how well that's gone. Now we get to junior year. All right, great, the contrast is good there. Junior year we really hit our stride. We worked a lot on compliance that year, specifically on RubySpec, but we also took some redirections — some steps backwards, some steps forwards. We started redoing a bunch of stuff in C++. Year three ended up being a lot of plotting: we had a good idea of what the problem space was, and we were finally moving on a trajectory we were happy with. So now we're in year four. A big part of year four has been focused on the JIT, to really make running Ruby code fast.
Anybody who was here for the last talk heard all about LLVM, and we're using that tool as well, doing things like hotspot detection and type profiling, which we'll get to in a sec. The big parts here are things like method inlining and block inlining, which is pretty awesome. And in year four we've also done a huge amount of work on RubySpec. We have a 93% pass rate right now. We run Rake and RSpec and Rails. We run C extensions like Syck and Nokogiri, MySQL, Yajl — Yajl's JSON bindings, anyway. But really, what people want to know most when I talk to them is: when is Rubinius going to be done — as if open source software is ever done — or rather, when can I use it? So really we're talking about compliance and we're talking about performance. I sort of skipped the beginning of this talk where I ask if anyone doesn't know what Rubinius is — so if you don't, you can raise your hand now. Like I said, we have about a 93% pass rate right now, and we're passing and untagging more of those specs every single day, so we're never really backsliding on any of these at this point. We do have a few implementation things that we don't do. For example, we have a whole bunch of stuff in ObjectSpace that we don't do yet. We don't do `_id2ref`, mainly because we haven't bothered to implement it yet — and implementing it is significantly harder for us than it is for 1.8 or 1.9; you could talk to Charlie and me about all the trials and tribulations of doing that. We have a limited `each_object`, and we don't currently support continuations. But really, those are the big unknowns in terms of what kind of Ruby code we don't expect to run. Anything else is fair game at this point. If you've got anything out there, I suggest you give it a shot, because there's really nothing else that we expect to fail. If it doesn't work yet, that's something we obviously need to fix.
A big part of this is the C-API, the ability to use extensions that already exist, because this has been something we wanted the project to do since the very beginning. Like I said, internally we have Syck, and it's supporting Nokogiri and a bunch of other things. There are a few gotchas, though. So this is the part of the talk where I talk a little bit about C extension best practices, especially if you want to run them on Rubinius. First: prefer a function over any of the R* macros. Anyone who's written an extension will know what I'm talking about — specifically something like this. We run across this occasionally, and we don't support RBasic at all. That's just one of those gotchas you're going to have to be aware of. It's not a big deal — we find it rarely — but if you've got an extension you want to run on Rubinius, it's something you're probably going to have to change. Can you see, is that green? Okay. Another one is accessing the instance variable table directly. This is not just a Rubinius problem; it's a 1.9 issue too. If you want your extension to run on anything except 1.8, this is something you have to fix — neither Rubinius nor 1.9 implements instance variables the same way 1.8 does. The same is true for basically anything in the re.h header related to regular expressions: Rubinius and 1.9 both have completely different regular expression engines from 1.8, so those are out of bounds in terms of what's available. Lastly, the M header file — a whole bunch of internals are exposed there that we don't have available and have no plans to make available. Anything that uses them needs to be reworked in a completely different way. So the upshot of all this is: try your gem, try your library, try your application on Rubinius, and report your problems.
There's nothing stopping us from fixing whatever is breaking your application. We're happy to fix it — unless it's one of the things I've already talked about. So the one big thing people always want to know about is: where are we on performance? Because we decided to implement so much of the system in Ruby itself, we're obviously behind the eight ball on performance in some ways, so I need to address that a little bit. The first part of this is what we write — what the Ruby code is. Occasionally we run across code like this in the kernel, just because someone wrote it to get past something, and there's a huge amount of obvious inefficiency in it. This flows down to the normal, everyday question of "how do I make my Ruby code faster?" Looking at this, you might realize it's kind of a silly way of getting the numbers 0 to 10, and there's obviously a more direct way to write it. The other part, where we've really spent a huge amount of time, is how we run it. So this is the part of the talk where we get into benchmarking: what can Rubinius do well, what does it do slowly, how can it run better? And with benchmarking there's always expectations versus reality — looking at the results and trying to figure out what they mean — which means there are always lies, damned lies, and stats. So let's look at some benchmarks. Those came out okay. You can't really see the numbers on the side, but that's okay. This is a scatter plot of running a set of benchmarks, and the higher up the dot is, the more times faster it is than 1.8. You see — oh, I could really use a laser pointer — you see that we have an outlier way up at the top, which makes this graph kind of hard to read, so let's get my man with the laser pointer. Let's get rid of that dot. Okay, I've gotten rid of that one; let's compress it down so we can see it.
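The kernel snippet on the slide isn't visible in the transcript, but the shape of the inefficiency he's describing is easy to sketch. This is a hypothetical reconstruction, not the actual code:

```ruby
# Hypothetical reconstruction -- the actual kernel snippet from the
# slide isn't in the transcript; this is just the general shape of
# "a silly way of getting the numbers 0 to 10."

# The roundabout way: a hand-rolled loop driving repeated appends.
numbers = []
i = 0
while i <= 10
  numbers << i
  i += 1
end

# The direct way: let Range build the array in one shot.
numbers = (0..10).to_a
```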
We'll talk about that dot in a second — that dot is actually really important. So here we've got a graph, and you can see that Rubinius has some results that are tons faster. MacRuby is doing really well too; since we're using similar back-end technology, that makes sense. Graphs in slides suck, so here's the summary. And if I were to just present these numbers and close the book, you might go: wow, 12 times faster, that's crazy. That means all my Ruby code will run 12 times faster — my Rails app is 12 times faster! Hold on, hold on, buddy. It's 12 times faster on these specific benchmarks. So really, what I'm going to talk about here is benchmarking for adults: taking your benchmarks, looking at what they actually do, and explaining to the reader how the results should be interpreted. You really need context to interpret benchmark results. So the question is: what did we benchmark, really? This is a summary of those benchmarks: one of them calls a method, one of them is just an empty while loop, one creates a block. These are really what you'd consider syntax elements in Ruby — the lowest common denominator of functionality. And there's always the question: why these things, specifically? Why are we testing them? The corollary is: how do they translate? If I'm looking at how fast I can access an instance variable, how do I translate any result from that? Let me give you a quick analogy. We ran the RubyConf 5K today, and the winner — Yarko — finished somewhere around 16 minutes or something like that. He kicked my ass, in other words. But say there's a race between two people; we'll go with myself here.
Yarko and I are racing — we're racing to a point three miles that direction — and Yarko takes off, and I go get in my car and drive there. I win. Was I dishonest? No: the rules in this particular case did not specify how to get to that point. Now you might say, well, it was implied that this was a running race. But there's context there: we both had our running shoes on, I'm wearing running shorts, I'm eating a PowerBar — and I run to my car. Obviously I understood the context and decided to cheat. But say instead we walk outside, we're both standing next to our cars, and someone says "let's have a race to over there" — and Yarko takes off on foot. I might contextually expect this to be a car race, right? So it's a question of context. When you're doing these activities, you need to know how to interpret the results, because it matters how you got there. And in the Rubinius case, the performance in those benchmarks we just looked at begets core performance. By core performance I mean things like Array, things like String — the core classes. Because we implement them all in Ruby, the performance of those low-level elements translates in some form to how everything else performs — say, how string append works, or how Array#map works — because Array#map is implemented in terms of those lower-level features. So then you ask: for the other implementations, what does their performance on those benchmarks beget? Well, because all of those implementations write most of their core in some language other than Ruby, there's no way to take those results and extrapolate to really anything about how most of the existing core Ruby classes would perform. What we really need is a lot more benchmarks, a lot more data — basically a better, more honest way of interpreting the results.
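The low-level "syntax element" benchmarks he keeps referring to aren't shown in full; a sketch in the same spirit, using Ruby's standard `benchmark` library, might look like this (the method name `m` and the iteration count are invented):

```ruby
require 'benchmark'

# Sketch of "syntax element" micro-benchmarks: a bare method call,
# an empty while loop, and a block, each driven N times. The actual
# benchmark scripts from the talk aren't reproduced here.
def m() nil end

N = 1_000_000

Benchmark.bm(12) do |bm|
  bm.report("method call") { i = 0; while i < N; m;           i += 1; end }
  bm.report("while loop")  { i = 0; while i < N;              i += 1; end }
  bm.report("block")       { i = 0; while i < N; 1.times { }; i += 1; end }
end
```

The point of the section that follows is exactly that numbers from loops like these measure dispatch and loop overhead, not how a real workload will behave.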
So here are a few benchmarks that test, rather than those low-level features, a few specific methods on some core classes. In this one I'm going to append to this array 100 times — and, as you probably can't see at the bottom, I actually ran the benchmark 300,000 times to get these results, because otherwise it finishes in 7 milliseconds. Here are the results — great results, really happy with these. In terms of array append, Rubinius is doing really well; we're all in the mid-grade of performance there, with 1.8 largely being the outlier, because those while loops and that kind of thing become very slow in its interpreter, whereas everyone else can lower them to some kind of lower-level construct. Let's look at another one: hash set. We're going to create a hash and fill it with elements, and we do this 100,000 times. Now we see more of what's going on in the system; we're getting a better picture. In every case here except Rubinius, the hash-set method (`Hash#[]=`) is not implemented in Ruby. So in 1.8 and 1.9 it's not surprising to see almost equal performance — they have almost identical implementations of `Hash#[]=`. MacRuby, having a completely different one, gets whatever the performance of the Core Foundation class is. JRuby, obviously, is using some internal Java hash structure that the JVM has optimized the hell out of. And in the case of Rubinius, it's purely running Ruby code. So in this benchmark, what we've benchmarked is almost entirely C code versus Ruby code. That's fine — and what you can say here is that it's actually pretty good for Rubinius: we're currently only about twice as slow as C code on this benchmark. Let's look at another one: hash access. That's significantly less work than a hash write — a lot less memory used, all that kind of stuff. And again, pretty good results for Rubinius.
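The three core-class benchmarks described above can be sketched like so. The harness details — method names, and any counts beyond the ones he mentions — are stand-ins, not the original benchmark code:

```ruby
require 'benchmark'

# Append to an array 100 times. A single pass finishes in
# milliseconds, so the talk ran it 300,000 times over.
def bench_array_append
  ary = []
  100.times { ary << 1 }
  ary
end

# Create a hash and fill it with elements. In Rubinius, Hash#[]=
# here is pure Ruby; in 1.8/1.9 it's C, in JRuby it's Java.
def bench_hash_write(n = 100)
  h = {}
  n.times { |i| h[i] = i }
  h
end

# Read every key back out -- significantly less work (and far less
# allocation) than the write benchmark.
def bench_hash_read(h)
  h.each_key { |k| h[k] }
end

# Driver, scaled down here so it runs quickly.
puts Benchmark.measure { 10_000.times { bench_array_append } }
```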
Again, 1.8 spends a lot of time because of the way its evaluator works, but everyone else ends up in the same ballpark. So again, this is a comparison of C code — in 1.8, 1.9, and MacRuby — and Java code in JRuby, against pure Ruby code running in Rubinius. If I had less integrity about these results, I could say that Rubinius can run Ruby code faster than MRI can run C code — if I were to just take these results and extrapolate from them. Obviously that's not an honest way of looking at things, because we looked at the previous benchmark and saw what's actually being measured. There's a huge amount of work, and a huge amount of data, that goes into really getting an idea of the performance of these large systems. So the conclusion here is that Rubinius is becoming extremely, extremely good at running Ruby code. If it's Ruby code you want to run, Rubinius can do it better than almost anyone at this point. Really, what we're doing is enhancing all of our systems so that our internal Ruby code, which competes against everyone else's native code, can pass muster — and that means we're really pushing the envelope in our ability to run that code quickly. Like I said, in that one benchmark we're running Ruby code as fast as MRI can run C code, which is just phenomenal to me. So — more Ruby. Having so much of the implementation in Ruby is really a burden of our own devising. I wanted to do this from the get-go, because it's the way I wanted to implement the system, and it let us become compliant much faster, because we can write Ruby code much faster than we can write that C code. But obviously it gives us slightly slower core performance — in some cases radically slower core performance.
But for the most part, we're starting to even out in overall system performance with a lot of those methods, because really what we're looking at, on this side, is the Ruby-versus-C question (or Ruby-versus-Java, in the JRuby case). I've gotten the question before: why bother writing all those core classes in Ruby? If you really need the performance, why not just implement them in C or C++, something that will boost the performance? And I say no, for many reasons. One of them is that there's a huge upside to writing everything in Ruby, and to almost forcing ourselves to endure the performance problems of running all of this Ruby code. Let's go back to that slide from before — and in this case I don't need Eric's help to point out where we're going next. We want to look at this data point, because that data point is really, really interesting. It displays the direction Ruby needs to go and the amazing things we can do. So let's delve into this one: we're going to do a full analysis of this specific benchmark and really get an idea of why that dot is up there. This is the benchmark. Kind of stupid — like I said, a lot of these benchmarks are really silly. But this benchmark was designed to see how fast you can call a method: not doing anything, just minimal argument checking, nothing in there. The idea is, how fast can you do a method dispatch? And this tells you something — it's hard to compare, but obviously Rubinius is doing something phenomenal here. So here's the summary, and in case you can't tell: 114.8 times faster. The first time I ran these and saw that, I thought: bug. For sure, bug. There's no way it was running that fast. I was just like, okay, there's a bug, I'm going to have to work on the JIT for a week, because those bugs suck. And I called my wife.
I was like, I'm going to have to come home late; I'm going to have to really work on this thing. So I was sure it was a bug. Turns out it was not a bug — it was a real performance characteristic of this particular benchmark. So let's look at the benchmark again and I'll show you exactly what happened. Here it is again. Still stupid — hasn't changed from the last slide, still dumb. Let me refresh you on a slide we talked about earlier, under the year-four category: this idea of method inlining. So here we have it. The JIT is going to kick in — I'll walk you through my thinking as I look at this code and figure out what goes on. The JIT kicks in on this run method, looks at it a while, sees all these method sends to `m`, and goes, okay, and it goes and looks at `m`. It sees what `m` does — and, oh wait, you know what? I don't need to explain what it does; I can show you what the inliner is doing in this case. This is what it does: it compiles that run method, and every time it sees that call to `m`, it goes off and fetches it and sticks the body of `m` directly in there, with a guard, every single time. So rather than having one, two, three, four, five, six, seven, eight method sends, you have eight integer comparisons — because after you inline, the method actually looks like this. And then the compiler goes, "wow, that's stupid," and you get this — and all of a sudden all of that method sending has gone away, and really all we're benchmarking is a raw while loop.
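From the description, the benchmark and what the inliner effectively reduces it to can be sketched like this. The names `m` and `run` and the call counts follow the talk's description of the slide, not the original source:

```ruby
# The dispatch benchmark as described: a method that does nothing,
# sent eight times per iteration of a bare while loop.
def m
  nil
end

def run(times)
  i = 0
  while i < times
    m; m; m; m; m; m; m; m
    i += 1
  end
  i
end

# After the JIT inlines m into run, each send collapses to a cheap
# guard plus the inlined body (nil) -- and once the compiler folds
# those away, what's left to benchmark is effectively just this:
def run_inlined(times)
  i = 0
  while i < times
    i += 1
  end
  i
end
```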
And while in this particular benchmarking case it's kind of stupid — you're not going to write an `m` that just returns nil — the ability to inline, to see through all of that logic and make assumptions, makes this particular case 114.8 times faster. And it's that idea, extended to Hash, extended to Array, Enumerable, Range — almost every single core class — that lets them be inlined into all of their callers and expanded upon, with all the redundancies reduced out. So what you see is Ruby code getting inlined and becoming that much more efficient as we remove the dynamic features. So, where is the performance right now? For many programs, Rubinius is much, much faster — if you've got an `m` that returns nil, for instance, we will blow your socks off. But consequently there are many programs that are a little slower, because we have so much more Ruby code to run to do something simple — there's a good example on the next slide. Some programs are quite a bit slower. Something that's a lot slower right now is, say, unpack — something that's very platform dependent. The unpack code in Rubinius is huge: it's all Ruby, because the C version is literally 10 or 15 pages of C to handle all the cases that unpack handles. And we're doing all of that in Ruby now instead of in C. So it takes time. But as you can see, we're really starting to get there, and we're getting really confident in the level of performance and the level of compliance. So let's talk about releases. We did a 0.13 release — last week, Brian? Last week. The big thing about this release was that we started working on it about a month, month and a half ago, and we turned the JIT on by default. Previous to this, the JIT had been an optional feature, because we weren't sure it was solid.
So we turned it on by default, and we have yet to see anybody get a JIT crash. The JIT is on by default today, and that makes a huge improvement: anybody who downloads Rubinius to try it out right now might see, oh my gosh, this code is all of a sudden four times faster, because of that JIT. We did lots of fixes too. So: it's our senior year. We've got to decide what we're doing after high school. Are we going to college? Are we going to vo-tech? Are we going to just drop out? Things a senior has to do. We've made the decision that we've come this far, and we feel confident about it, so we're announcing today that we're doing the 1.0 RC1, and it's coming out next week. Mainly, because we did a release last week, we need to get our ducks in order: we've got a few outstanding fixes related to how you install Rubinius, and a couple of crash bugs we fixed in the last week, that we want to get out before we do an RC1. So let me, again — like I did with the benchmarking — set expectations. It's not a lot of fun, but I'm going to set your expectations for 1.0 RC1. The expectation is that it runs Rails 3 out of the box: you can set up a site and use it right there without having to twiddle things. That's our expectation for RC1. There are huge performance improvements over what we've done in the past, if you've ever played with Rubinius — the project has gone through such undulations in performance and functionality over the last four years that we're at this amazing plateau, right before we reach for the next performance gains. You've seen a lot of performance improvements, and you'll continue to see them throughout the RCs. Which brings me to the other thing: we're going to do an RC every month from now on, until 1.0 is out. And the big question was: when do you feel like you're…
ready to cut the 1.0? And that's kind of up to you guys. If all of a sudden we don't get any more bug reports and it's running all of our stuff, we're going to go ahead and release a 1.0. But if you want to be using Rubinius and it has failures — it doesn't work with your gem — I can't go out and test all the gems. So it's your responsibility to come to me and say: hey, I've got my awesome big library, it does metaprogramming to the twelfth degree, and it doesn't work on Rubinius right now. Get it in now, so we can get the fix into that 1.0 RC, and therefore into 1.0. So, I know it feels like I've been giving the same talk for four years — to me it does too. I'm happy that we're now really at a stage where we can talk about this and really push the state of Ruby performance to the next level. So with that, I will take any questions. Yes. "What's the potential to migrate the Ruby code in Rubinius to other VM implementations?" So the question was: what are the implications of migrating code from Rubinius to other implementations? Could you give me an example of an implementation you're thinking of? "Say, for instance, instead of some of the Java code in JRuby, replace that section with the Ruby code that comes from Rubinius, and have just that single point of maintenance." Okay, so in this case it's: what if I wanted to run some of the Ruby code that's in Rubinius on top of JRuby, instead of a Java class, something like that. The way things are structured in Rubinius is that we try really hard to keep things segregated, so we have a bunch of different directories that contain the actual Ruby code, with certain expectations for each directory. We start off with bootstrap, which we don't expect anyone except Rubinius to run. And then there's common. The idea is that 90% of the code for the entire system is in common and could be used by anyone.
There are certain things that are expected to already exist in order for you to run common, and it's been a while since we went through and actually made the list of "okay, you need to have a method called this that does that" — basically, that list is how to build your bootstrap. And then we have other phases: there's a platform phase in there that's, again, Rubinius-only, and a delta that runs after common, which is sort of fix-ups — if you have an implementation where you want to override an existing method that was in common, you can put it in delta, basically. We did a lot of this work with the GemStone guys, because they're using parts of the Rubinius kernel. We went through and devised most of that directory structure with them — we had some of it in place, but we revised it with them so they could have a nice way of getting that code in and out. It shouldn't be too bad. If you try it and you feel like there's a whole bunch of things that are really convoluted, it's probably just us being lazy and needing to go through and say "this can't be in common, this needs to be over here." That happens. So we have a round of cleanup that we'd need to do, and we're certainly going to do it. Let's go with Josh. "If we want to be testing our stuff on Rubinius, what's the best way to get Rubinius and be able to run our stuff on it?" Very good, thank you. So the question was: what's the best way to get Rubinius and run your code on top of it?
You can obviously just check out the git repo, run configure and then rake, and that will build it, ready to go as bin/rbx. For RC1 we're going to be fixing the installer, so you'll be able to install it, and we'll probably be turning the JIT — compiling in LLVM — on by default, which doesn't happen right now. So you'll be able to either clone the repo or get individual tarballs from the website. We're working really hard to make that process as streamlined as possible; it's been kind of up and down in years past, but we're trying to really get it sorted out so that it's easy. I suspect we'll probably do some kind of binary packages as well for RC1, so it's that much easier for people to download it and try it out. Brian? The directions will be on the website, that kind of thing. Yes, green shirt. "You expect to run Rails 3 — that's your target for RC1. Do you expect to run Rails 2 as well?" Yes. My desk is actually next to Yehuda's now — I actually have a desk, which was a really big deal; Engine Yard was moving offices, and I finally have a desk. But I digress. So I sit next to Yehuda, and we've been working on Rails 3 specifically. We have run Rails 2 in the past; we may have regressed from running it, but I don't see any reason we wouldn't run it now. Most of the stuff related to running Rails has to do with ActiveSupport, and ActiveSupport hasn't changed radically between Rails 2 and Rails 3, so if there are bugs there, we want to fix them. That was really to set expectations: Rails 3 is the path, but if there are bugs and we don't run Rails 2.3, I don't see why we wouldn't fix that as well. Let's go with the polo over there. "Oh yes — all this is based on Ruby 1.8, right?"
Yes, it's all based on Ruby 1.8 — oh, okay, so I didn't cover versions. We pushed forward, and Rubinius is now based on 1.8.7. As for 1.9, we have a plan that we've been talking through for how to actually do it. We had a couple of contributors working hard on adding the 1.9 features to our kernel, just in normal Ruby code. So we have a plan; my guess is that we will not do it for 1.0, but it is definitely on the roadmap. We want to be sure we don't do it in a way that upsets the 1.8 support — we don't want to suddenly regress and have all these failures and problems with 1.8 because we decided to work on 1.9 — so we have to figure out how to do it in a fairly separated way. Charlie? "It's interesting to talk about the JRuby side of the previous question, about reusing the Ruby stuff. We would love to reuse the Ruby stuff, just so we have that much less to maintain. And actually, with the new compiler work, one of the questions our compiler guy asked us was: okay, now I want to start inlining things like `Array#each` — can I have the IR for that code, can I have the Ruby for that code, or whatever? And I'm like, oh, well — that's a good point, we can't give you that. So we have an actual use case, and a compiler guy who's all set to start taking more Ruby code and using it to optimize JRuby's runtime too, and if he's using it in his compiler, we can translate it over and do the same stuff on JRuby as much as possible." As we hit this 1.0 release-cycle stride, I suspect there will be a time when the Rubinius kernel forms some sort of standard expectation for how Ruby code is meant to function. I know there are a number of people out there who have begun using the Rubinius kernel as their documentation for how Ruby methods work, rather than ri — so they can actually go and say, what does this method actually do? Oh, it
takes 5 parameters and does 6 different things with them based on 12 different variables — so let me go look at the Ruby code in Rubinius for it instead of the ri, so I can actually see the logic and build from there. So cleaning up and sanitizing that code is probably high on the list of things we'll get done in the near term. Sorry my talk wasn't so comical this year, but I felt like there were lots of important things to talk about. Paul, yes. "Do any of your benchmarks look at garbage collector performance?" Yes, actually — the question was, do any of the benchmarks look at garbage collector performance. Interestingly enough, the hash-write one I talked about earlier is about 50% garbage collection, because it's run about 10,000 times, making a hash with 100 keys each time. The runtime of that code is so dominated by the garbage collector in that particular case that it's kind of comical, but I leave it in because it's a specific case. So yes, in that particular one. "This is probably a low priority, but what are your thoughts on getting rid of the global interpreter lock?"
Sure. The question was about getting rid of the global interpreter lock. When we switched to running on the C stack with native threads inside Rubinius, just for time's sake we went ahead and used a global interpreter lock — so we're basically using the OS kernel for multiple threads and thread switching, but we're not actually letting them run concurrently. As in previous talks about this particular issue: we actually have it on the to-do list, towards the top, actually. It's not super difficult in Rubinius, because we don't have nearly the amount of code in an unmanaged language that other people do, so it's really a case of going through and auditing most of the code base and figuring out exactly how the locks should be structured, rather than having just one of them. Earlier this year I had the distinct pleasure of talking with John Rose, one of the preeminent JVM engineers — he was on the HotSpot team back when they migrated from green threads to native threads — and I picked his brain about exactly how they did it. The process we'll go through is basically the same: you start with one lock, then you decide, okay, I want two instead — how do I make just two locks over everything? — and you move on from there. It is definitely high up on the list, but at this point it's unlikely it would be in a 1.0 release; my guess is that it would be in the major release that comes right after 1.0. And man, I really hope it does not take as long to get 2.0 out as it took 1.0 to come out — otherwise, yeah, I'm going into gardening. Bill? "What's your ratio of Ruby to unmanaged code, as you call it?" What is the ratio of Ruby to unmanaged code? We can take a look — keyboard time; listen to some Journey again. Okay. So this is sort of the structure —
so there it is — you can't see that, but there are 28,000 lines or so of Ruby code in the kernel, and about 7,000 lines for the bytecode compiler. This is riveting, I know, watching me type. Okay, so it's about 50/50. There are some things in here that we could clearly revert to having a Ruby version of, stuff we haven't really done an extensive audit of. A lot of these are things like how the method tables work — stuff that you unfortunately really can't have in Ruby unless we were to do some sort of ahead-of-time translation from Ruby to C, which is a whole other ball of wax. Anyone else? Yes. "I'm curious what your process is for getting in any of the features that you don't have, like callcc — what your determination is." Yeah, so the question was how we're going to add the features we don't have right now. There are sort of two kinds: there are features we know we don't have, and there are features we don't know we don't have. The features we know we don't have are split into two subgroups. There's stuff we really need to do now — those things are almost always at the top of the queue: we don't have this implemented, we don't have that implemented, so we're going to do those things right now. callcc and the ObjectSpace stuff are lower, simply because they're not used in a significant amount of code and they don't have a big impact on our completeness. Obviously, the things we don't know we don't have are philosophical — we'll get to them as you find them; we have to know about them to do them. Anyone else? Okay — thank you.