The idea that we've got here is each of the implementation teams is going to get a little bit of time to do a quick introduction to their project: status, aims, maybe a little bit about future plans. Kevin, working on Cardinal, is going to get a longer section. John Lam, working on RubyCLR, is going to get a shorter section because he's talking again tomorrow. Evan gets a longer section, and then Charles and Tom, down on the end, get a shorter section because they're also talking tomorrow. Once they get through their presentations, we'll turn the time over to you guys to kind of drive, and ask for questions and get answers from the full panel. So with that, why don't we just work down. Kevin, we'll let you go first.

So for the past two years I've been following Parrot to some degree, actually pretty closely, and I've worked on a couple different projects. The last one I started was Cardinal. Cardinal today can do simple flow control and simple functions, and it's about right there. We haven't gone much further than that, mostly because I've diverted my attention for the past, I guess, three or four months to kind of helping Parrot finish up. Parrot has an object system in place, but it's really a bootstrap object system and we're kind of redoing it. You can imagine implementing Ruby without an object system is a little difficult. That's where Cardinal stands today, but I'm just very interested in virtual machines and compatibility and interop between languages, and so that's why my interest lies with Cardinal and Parrot specifically.

Parrot has some really cool things going for it. It has a JIT and good garbage collection ready to go. They're not fully polished, but those are nice things to have ready for you. It has an interoperable calling convention, so it allows Python, Perl, and Ruby to call back and forth between each other, and that's kind of interesting. It has a portable bytecode, and we right now run on most of the common platforms out there:
Intel x86, x86-64, of course the AMD stuff. We also run on Mac OS X, both Intel and PowerPC, and every once in a while we get some people running FreeBSD and some SPARC stuff. That's kind of where I'll leave it for now, and we'll get back to it.

Hi, I'm John Lam. I talked to some of you guys before, at the last RubyConf, about the work I've been doing with RubyCLR. Unlike some of the other people here, I can't claim to be a language implementer, at least not for another few weeks. My primary interests here are kind of along the lines of understanding the interop problems between dynamic and static languages. One of the interesting things about the RubyCLR bridge project, which I created, is the fact that CLR types and Ruby types can coexist inside of your Ruby programs and look just like each other. I handle all the marshaling, the conversions, and the interop stuff automatically for you, so you get this good feel. Some other kind of wacky things you'll see are things like mangling the names of CLR methods. A lot of libraries use naming conventions in terms of how you name your methods, and CLR library naming conventions are nothing like Ruby naming conventions, yet I allow you to call them and make your apps still look Ruby-ish while calling any of these things. There are a lot of little bits of things that you have to do at the interop level, such as: how do I convert strings? How do I convert a mutable string to an immutable string? Those kinds of issues. Those things are taken care of as well.

What's really interesting is some of the stuff that Kevin here was talking about, the dynamic language interop problem. If you look back at the common language runtime, similar problems kind of exist there as well, because it was envisioned from the get-go to be a static language runtime that would allow multiple static languages to interoperate. You now have problems.
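The kind of name mangling described here can be sketched in a few lines of plain Ruby. This is a hypothetical illustration, not RubyCLR's actual implementation: `ClrProxy` and `FakeClrString` are stand-ins showing how snake_case Ruby calls might be mapped onto PascalCase CLR-style method names.

```ruby
# Hypothetical sketch of CLR method-name mangling: Ruby-style
# snake_case calls are translated to PascalCase before dispatch.
class ClrProxy
  def initialize(target)
    @target = target
  end

  # "index_of" -> "IndexOf", "to_string" -> "ToString"
  def mangle(name)
    name.to_s.split('_').map(&:capitalize).join
  end

  def method_missing(name, *args, &blk)
    clr_name = mangle(name)
    if @target.respond_to?(clr_name)
      @target.public_send(clr_name, *args, &blk)
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(mangle(name)) || super
  end
end

# Stand-in for a CLR object exposing PascalCase methods.
class FakeClrString
  def ToString; "hello"; end
  def IndexOf(ch); "hello".index(ch); end
end

s = ClrProxy.new(FakeClrString.new)
s.to_string     # => "hello"
s.index_of("e") # => 1
```

The point is that the calling code stays Ruby-ish even though the underlying library follows a completely different naming convention.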
Let's say you had a library written in C# that you wanted to call from Eiffel, or a library written in VB that you wanted to call from C#, those kinds of problems as well. There are similar problems when you really want to have this kind of base runtime and base set of infrastructure that all of your other stuff can run on top of. One of the nice things about programming towards a platform like that is the fact that a lot of things just kind of come for free. Generally things like debugging support, which is a big issue, just kind of come for free because the platform supports debugging. As long as you're emitting the right symbol information, you can plug right into Visual Studio or WinDbg or your favorite debugger of choice and get the debugging stuff coming along for free. This is kind of a rambling thing right now, just some thoughts to at least frame the problems I'm really interested in getting comments and feedback about, especially the various interop scenarios. I think those are the really tough problems. If you were to implement a language, say the IronPython guys, part of it is, yeah, sure, getting the language semantics right, but it's the playing well with others. That's actually, I think, an even more challenging problem than getting the core language to run, and to run well.

I'm Evan Phoenix, and I'm sort of the project lead for Rubinius, which is a brand new Ruby virtual machine that we've been building from the ground up. Its aim is to be 1.8.5 compliant, and sort of the big to-do is that it's designed with really Ruby in mind, so as little of it as possible is actually written in C or any foreign language. So, you know, you'll find that basically all of the core library for Ruby, all the methods on Array, all of those kinds of things, are implemented directly in Ruby, and the VM itself is just a slim VM written in C that allows you to basically extend it.
So the idea is to really keep it small and keep it simple and provide a VM with tools to build up the whole Ruby implementation on top of it. Goals-wise, we're going to hit 1.0 by October. Our goal is to have 1.0 out by RubyConf this year, and at that point it will be 100% 1.8.5 compliant and run Rails 1.2. So those are our goals. And, yeah.

Okay, I'm Charlie Nutter from Sun. Anybody here that doesn't know what JRuby is at this point? That's good. So I don't really need to tell you: you know, Ruby on the JVM. We're moving along as far as milestones go and the details of things. We've got our talk tomorrow, the last talk. Now I think it's been shuffled around to be, saving the best for last, right? The finale. The finale for the conference. But, you know, we're looking at a 1.0 release in the next couple months. We want it to be faster than the MRI 1.8.5 release. And the last release we did officially even runs Rails now, and we'll have more numbers and talk about that kind of stuff tomorrow.

I'm Tom Enebo, another developer of JRuby, and what more can I say? We're going to be giving a talk tomorrow in which we're going to detail stuff a lot more, and I'm more interested in what questions people actually have about our implementations, like questions about compatibility or what applications we can run.

All right. Thank you. As far as the questions go, ideally all of the questions are going to be fairly broad, ones that can be addressed by everyone. If you really want to drill into the details about a specific implementation, I'd ask that you try and catch one of the implementers during the break or one of the other breaks during the day. But if it's just a light question, yeah, we can probably handle that. Also, Mike Moore has a wireless mic that we'll try and kind of move around. If you speak up, we'll still try and recap the questions so that we can at least catch it for video and audio. With that, Carl.
Yeah, I just wanted to see if you guys could comment generally on the performance of your implementations.

I'll start. Did anybody see the Ruby shootout that was posted on a blog a couple weeks back, kind of comparing Ruby 1.8.5, Ruby 1.9 with the YARV implementation, what else? Cardinal was in there. JRuby was in there. Rubinius was in there. Ruby.NET was in there as well. I think they left a couple out, but it was a pretty good cross-section. Our performance in general cases, JRuby, is probably about twice as slow as Ruby, but as I'll show tomorrow, I've got a lot of benchmarks now showing individual cases are starting to run faster at this point, even some in interpreted mode that are running faster. When we start compiling things, things generally go quite a bit faster than regular Ruby. So there's more work to do there to get the general cases all running that fast, but all of the work we're doing is generally applicable; we're not focusing on optimizing for any specific benchmarks. We've been simplifying our internals quite a bit for the last six, eight months, and we slowly see a little bit more performance out of each thing we actually remove from the interpreter, so there's still plenty of runway. Yeah, the big challenge for us is trying to get the runtime into a shape that the JVM can take over for us, because if we can get it so that HotSpot or any of the other VMs can optimize it, most people are aware of the fact that it can do pretty well and oftentimes beat C code in a lot of cases.

So I guess to speak to the shootout, I was just happy Rubinius was listed on there; I was just happy that it passed 60% of the tests, let alone was on there at all. I was really happy to make a really big red bar that made everybody else look awesome, too. That being said, up to that point, even now, it is a very young project. Specifically, I hadn't done any performance tuning, zip, nada, up to that point, because in fact I hadn't even told GCC to do any optimizations.
I had basically turned off any possible optimization, because it just makes it harder to debug, to work on. So since that time I've decided that, well, if people are going to start looking, it's going to be one of those public-facing statistics, I should probably put a little time into it. So in the intervening week or so after that, you know, I tuned it, I got GCC doing the right thing for most of the VM, and I was actually beating 1.8.5 on some of the benchmarks. Of course, they're my benchmarks, so I would hope I do awesome on them. That being said, the overall goal is, I should say, for 1.0: if we're as fast as 1.8.5 generally, I'll be happy but a little disappointed. You know, there are a lot of places where the architecture is a lot simpler and a lot more open for optimization than the current architecture, so even if we hit that, and it's just as fast as 1.8.5 for 1.0, the sky's the limit in terms of new optimizations for making the VM faster.

So I can't talk about any specifics of an implementation of Ruby, but I can talk about IronPython, which runs on top of the CLR, a similar dynamic language, and IronPython right now is the fastest Python implementation out there. Depending on whose benchmark you care to believe, it's somewhere in the neighborhood of 1.6 times faster than CPython. And the goal really is just to make it faster, right? Because it's going to be an arms race, ultimately, to continue to look for more interesting optimizations that can happen. But I think it kind of proves out a lot of the features of the CLR as a platform for trying to make stuff go fast, right? One of the most interesting features, which I'll talk a lot more about tomorrow, is this thing called a dynamic method. And dynamic methods are little chunks of code that you can compile, but that are also garbage collectible as well.
So for dynamic languages, right, that's really, really important, because, you know, you will generate a whole bunch of little stubs of code that at some point in time you're going to throw away, things like, you know, how you would go off and cache methods, right? Cached method implementations for future use, so you don't have to go running around looking up method names at runtime. Those are really interesting, ripe opportunities for exploiting that feature to make stuff go fast as well. So, you know, that's just to give you some idea about, at least, you know, in the Python world, where things stand.

Yeah, so it's really early and benchmarks are deceiving, but on some benchmarks where Parrot is JITting and we write static code, Parrot is able to compete with C or beat C runtimes, mostly compete. But I think even the Java guys can say that too: if you JIT and you're doing very static things, there's no reason why we can't compete. On another note, what little Perl 5 to Parrot comparison has been made, we're seeing generally a speedup of at least twice as fast, sometimes as much as five times as fast as Perl, if it runs on Parrot. Now, there are exceptions all over the board, but the note is that JITting makes a difference, and there's no reason why we can't all, I think, just, you know, deliver good performance.

Okay, great. Anyone next? Any hands? Let me try to recap that. So, to repeat the question: just sort of, what language features are the most difficult?

Well, just to jab, because we're on a panel so we're supposed to jab a little bit: I found continuations to be one of the easiest, but that's because the architecture that Rubinius is built on was built in such a way that everything is first class, and therefore continuations took an afternoon. What is hard? What is hard, at least for me, for the most part, what's ended up being hard is, really, let's see, the language features that you're asking about.
I mean, everybody will say dynamic dispatch is a pain, but that's a fundamental language feature, so I'm not sure that counts. One area that's been tough for us in JRuby is emulating Ruby's threading model, which allows you to do a lot of illegal things when you're actually dealing with real threads, like kill: you can't kill real threads safely. Or, I'm thinking, eval is actually a biggie in comparison, because, for those who were around from the 1.6 days to the 1.8 days, the semantics of eval have evolved over the years, and eval has all of these special cases involved where it's supposed to actually manipulate the calling environment, so you have to leave things open so that eval can come back and screw stuff up under the covers, because that's what it's supposed to do. Yeah, eval is an interesting challenge for those of us trying to compile things down to bytecode, because we don't want to compile every single eval necessarily and have a whole bunch of wasted time spent compiling, which is the reason that in JRuby we've kind of decided that we're going to have a mixed-mode interpreter, a mixed-mode engine. So it'll interpret sometimes, it'll JIT to Java bytecode sometimes, and then we get a good balance between what we really want to optimize and what's just transient code. So eval definitely makes it a little more complicated.

And this isn't a specific feature, but a problem we've found: it's really easy to implement a particular function and then realize, like, three months later that there's some really weird esoteric feature that no one's ever used in real software, but then one person did, so now I have to go back and fix it, and there are just so many corner cases. To speak to that, one of those very esoteric features is what can be taken as a block argument. If you go look in the standard library, pretty much everything takes a local variable.
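The eval behavior described here, manipulating the calling environment, is visible in a couple of lines of plain Ruby: eval'ing against a Binding can read and even rewrite the caller's local variables, which is exactly why an implementation has to keep those scopes "open". A minimal sketch:

```ruby
# A Binding captures the caller's scope; eval against it can
# mutate the caller's local variables behind the compiler's back.
def eval_rewrites_locals
  x = 1
  b = binding
  eval("x = x + 41", b)  # rewrites the local x in this method's scope
  x
end

eval_rewrites_locals  # => 42
```

A compiler that had assumed `x` was a plain, privately-held slot would be wrong here, which is why eval forces implementations to either deoptimize or keep scope objects reachable.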
It takes just the name of some local variable to use, but if you go look at the actual parser and the runtime, what can appear there is an enormous number of possibilities that no one has ever used. And actually, to make it good, I know that Charles and I went back and talked to Matz about it. He's like, yeah, that was stupid, we're going to take it out, don't worry about it. So there's give and take already, from the panel back to Matz, to move the language forward. So, I've only seen this once, in a test: how many people have actually explicitly passed nil as a block argument? nil as a block argument? I mean, explicitly passed nil for a block argument. There's the one guy. There's the two guys. Be glad Zed's not here. That was a fix we had to make recently. What the hell is that piece of code? It could be that there's a great use case; it's just that when I've actually looked at, audited, software, I haven't seen anything explicitly do it. That's where we've seen it.

Well, on that, a little bit more on the kind of continuations rathole that almost always shows up in any discussion about implementing this stuff on top of, at least, a static VM. Most of the tricks that people play to make that kind of stuff happen wind up using the exception mechanism to deal with flow control. The other guys on the CLR team just laugh at us any time an idea floats up to try and use exceptions for flow control. But unfortunately, without radical changes, there's no way to make that happen. There have actually been investigations, fairly serious investigations, to see whether or not this could actually happen. For the most part it's been: we can't do that without compromising the performance of existing languages. It's a trade you can't make. You've got languages that run fine on top of your platform, and you can't make them run slower to support this one esoteric feature that some chunk of the community wants. What does C# actually use for that?
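The poll question above, explicitly passing nil as a block argument, refers to a corner of Ruby that is legal but almost never seen in real code. A short illustration:

```ruby
# Explicitly passing &nil is legal Ruby and means "no block":
# block_given? is false and the captured block parameter is nil.
def takes_block(&blk)
  [block_given?, blk.nil?]
end

takes_block(&nil)  # => [false, true]
takes_block { }    # => [true, false]
```

So an implementation has to handle `&nil` at every call site even though, as the panel notes, almost nobody writes it on purpose.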
So to take another poll, because this is good, I can poll and find out whether or not I should even bother doing things: who here has ever used retry inside a block? Not inside an exception handler, inside a block. Who here knows what it does? Who here knows what it does? There's a couple. It reruns the method that called the block, the method where the block was defined. It goes all the way back out and recalls that method and reevaluates its arguments. It does this really crazy, what you would call a continuation of the method that you're already in, and tries to backtrack and do it all over again. I don't think anyone... It's a bad idea. It never works. There's no good use case for it. I've never seen it work. Again, the standard library, which is not necessarily the best body of code but a fairly wide one that everyone's aware of, doesn't use it. I'm not going to do it.

I think something that Charles might have hinted at, but maybe you're not aware of, is that Ruby is an interpreter, and most languages can be deterministically parsed. One thing I think is scheduled to get cleaned up is that there are times, during parse time, when you can't tell whether something is a local variable or a block, that type of thing, until you actually start evaluating, and that can be a real pain for people who are trying to actually compile to bytecode. That's kind of a legacy of the C implementation's parser that doesn't need to be there. We've modified our parser to truly lexically scope those variables, so we don't have that problem anymore. But that's an example of some weird things, like, wow. That's the interesting detail about this: trying to figure out what these hard features are and how to implement them. We don't have any spec for what the behavior is supposed to be, so the easiest way is to read through the way Ruby did it and do it that way. After, what, a year of really hitting the JRuby code hard, we figured out, oh, they meant this.
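One well-known flavor of the parse-time ambiguity mentioned here can be shown in a few lines: whether a bare identifier is a local variable or a method call depends on whether the parser has already seen an assignment to it, even an assignment that never executes. This is a general illustration, not tied to any one implementation:

```ruby
# Once the parser has seen an assignment to `x`, a later bare `x`
# is a local variable, even though the assignment never ran.
def seen_assignment
  x = 1 if false   # never executes, but the parser now treats x as a local
  x                # local variable; evaluates to nil
end

# With no assignment in sight, a bare identifier is parsed as a
# method call and raises NameError at runtime.
def never_seen
  y
rescue NameError
  :undefined
end

seen_assignment  # => nil
never_seen       # => :undefined
```

The decision is made lexically, at parse time, which is exactly the property a bytecode compiler needs in order to allocate variable slots up front.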
So we could do it the right way for our platform, and not have to do it, not necessarily the wrong way, but the way that doesn't work well for what we're doing. But that is an interesting point: you can do things the wrong way, and it's amazing how much Ruby software will continue to work. Yeah, we didn't even have lexically scoped variables for a while, and nothing broke for a long time. Long time, yeah.

All right, next question. I just had a general question, being new to this space, and in a lot of spaces, I guess. What other advantages does using your collective products give you, other than speed? I know we addressed speed a lot. What other advantages does using your individual products give to the Ruby developer, or for deployment?

I'll start. I think interop. Whenever you start a new language, the first thing you have to write after you get your language done is an Apache binding, a MySQL binding, a Postgres binding; half of the wealth of a language is the libraries that come with it, and so I think a lot of us, at least the .NET and Java guys, are trying to exploit that. I can be the black sheep, that's fine. There's one I left out, take your guess. So I think interop is a big thing: we don't want to have to rewrite the world every time we want to do something in Ruby.

I would say broader platform features, like debugging support, profiling, code coverage. It would be interesting to imagine what kind of static analysis tools would actually be useful for languages like Ruby. Those kinds of things come for free. You could also imagine, with the right type inferencing engine, you might even get things like code completion slash IntelliSense support inside of IDEs and things. Again, you don't have to reinvent the giant wheel in order to get that stuff. But for me, I think the big one is getting that stuff so you can actually go off and deal with problems.
Especially if, let's say, you deployed an application out to a customer and that thing breaks out there. What do you do with the dump that you get back from the customer, trying to figure out what the bug was?

But to speak to reinventing the world, if you will, at least in my case: what do you want to do? Because everything is first class, the virtual machine is very open, to the point that if you really feel like it, it's perfectly valid to go in, grab the compiled method object for the method you're running right now, and rewrite the bytecode mid-stream if you want. If you feel like, you know what, I could optimize this right here, I could do it. Or say you want to write a really nice debugger, say something that comes from, maybe, like, the Smalltalk world: everything's first class, so you're able to actually go in and say, print me out the current bytecodes that were being run at the time, print me out x, y, and z other things. So it's really a platform, because my goal is to build a platform for other people to work on, because, I mean, me and, you know, the probably about ten people on the Rubinius team right now simply can't fill the void of things that people need in terms of Ruby tools, and so my goal is to build the tool that they can actually extend.

Well, I guess the answer to this is: how many folks have worked, or currently work, in a place where Java is being used? Yeah, see, that's kind of the big answer: it's really just about everywhere. And I guess the other half of it is just the Java ecosystem, all of the libraries; basically anything you want to do, there are a couple libraries for it, maybe one of them is good. But we've also got one of, I'd say, the two best fully functional VMs in the world.
It's nice of you to say that about me. But, I mean, you know, there have been hundreds of man-months, man-years, whatever you want to measure it as, put into garbage collection and JITting and thread management and everything else on the JVM, so there's a lot more than just performance that we get out of it; we get a really fully functional machine that we can run Ruby on.

Just to speak a little more to Charlie's point: how many people have a hard time getting Ruby deployed in your work environment, and how many of those people have Java? So, okay. Well, I actually expected that most of the people here would be using Ruby at work, so I didn't expect everyone to raise their hand. I can't use Ruby, but I'm flying to a Ruby conference. How many would like Ruby to be faster? There we go.

One interesting point to speak to is the deployment question. As Ruby has started to grow up, there's been some work, and not a lot of work, done in the deployment field for actual Ruby programs. I brought it up this morning: I want to write a Ruby application, how do I get it out to a customer? They don't care, in the same way that you can still write a program in Fortran and give it to somebody and they don't have to know you wrote it in Fortran. How can we get to that point with Ruby? So those deployment questions, I think, for everybody here, are really at the top; it's one of the main things, how do we really address that properly?

Two, maybe three if they're short, questions. I wonder how your implementations deal with Unicode.

We've actually retrofitted JRuby's string to sort of match regular Ruby string behavior, for 1.8 compatibility, but the way that we have decided to support Unicode is that we will have a native version of the Rails multi-byte library by default. So it will be basically the same library, and we won't be forking away from regular Ruby at all, but we'll be able to leverage what Java provides for Unicode.
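The distinction the panel is circling here, a string's byte count versus its character count, with an encoding field deciding how characters are counted, is the same one Ruby 1.9 later made explicit in the core String API. A small illustration using modern encoding-aware strings:

```ruby
# A string's byte count and character count diverge for multi-byte
# text; the encoding tells the runtime how to count characters.
s = "r\u00E9sum\u00E9"   # "résumé" in UTF-8

s.bytesize        # => 8   (each é takes two bytes in UTF-8)
s.length          # => 6   characters
s.encoding.name   # => "UTF-8"
```

This is exactly the pair of extra fields, encoding plus character count, that the Rubinius string layout described below reserves space for.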
Since we can actually pull in Java classes, if you really have some need to go out and call into Java, you can just use Java's String and everything else if you want to at this point. But yeah, we're trying to follow the Rails method of it as a plug-in to Ruby, and then in the future, when it's part of the main String API, we'll just support it directly.

I don't have it, but I want to. Is that an invitation? Is that an invitation for someone to start writing? Absolutely. You know, to that end, I've actually left the door open in the implementation. Not to get too far into details, but in Rubinius a string object actually just has five slots (an object, for the most part, just has slots in Rubinius), and one of those slots is basically a byte array where all the actual data is stored. But there are two slots in there that aren't being used, and those are the encoding and how many characters are actually in that encoding, because we're cheating: for the most part, we just fill those in. The last field is the number of bytes actually in the byte array. So eventually the idea is that we'll actually move towards properly tracking how many characters there are based on what that encoding field says.

I'm not technically implementing it, but Parrot supports Unicode. I haven't really gotten into it in Ruby. I mean, I think we could follow kind of what the Java guys are doing, kind of fold it in on our own until then, but it's not that hard to make work most of the time today.

Next. At RubyConf there was talk of having a sort of a new website developed, for a formal specification for Ruby, and Matz was going to help out with that, and I was just wondering if that ever went anywhere, if there's progress.

It's still running on the box in my basement, and it gets occasional contributions. The biggest one was probably a number of folks that contributed to a binary spec for marshaling, which helped us a heck of
a lot, trying to implement marshaling. And really, we're just trying to get more and more people on it. It's at headius.com, H-E-A-D-I-U-S, and then slash Ruby spec, and it's just a MediaWiki wiki, because that's what I'm used to; go in there and update whatever. It's got a little bit of stuff on basic language things, a few libraries that have a little bit of implementation detail; there's a little bit of what I've gleaned from internals, like how threading works, how marshaling works, stuff like that. But it's trying to sort of be a community spec for Ruby, which is really, I think, the only way we can get to that point right now; nobody's going to volunteer to do it. And I don't know if it's time to be like what the Python folks have done, where they have a foundation which actually owns all the intellectual property. It's much more formal over there in terms of how those guys do things, and the foundation itself accepts and rejects things based on that, and there are members and there's a panel and all this kind of stuff; that's a pretty heavyweight process. But maybe, I don't know if it's the right time, to really talk about those kinds of things, to try and formalize what all that really means and where all the IP lives, because you guys aren't bound by the same kind of rules of engagement that Microsoft is. It's ridiculous, actually, how hard it is to do things that aren't Microsoft things. I blame the lawyers.

I guess the biggest thing is that there's nobody really clamoring for a spec except the five of us on the stage and a few others around the world that are trying to do the implementations, which is a very small part of the Ruby world in general. Most people just consider the Pickaxe the spec, and there's so much that's not in there. And for the most part there is fairly, not constant, communication, but most of us talk on at least a weekly basis, so since we're the main ones that are interested, the discussion amongst the group is where most of the energy goes. If we had more time, I
think we'd be doing it right. Yeah, I'll take some donations to do it, if anybody... We can get a hat, we could pass the hat. All right, it looks like Ed's got a question. Can you repeat the question? What about a common benchmark suite?

Well, I think, given that the benchmarks on the Ruby shootout benchmark blog were YARV's benchmarks, which is why YARV did so awesome, what will probably happen is that that set of benchmarks will evolve over time, because the JRuby guys will have their own benchmarks, and I'll have my own benchmarks, and the CLR will have its own benchmarks, and since we're basically competing for the most part, eventually we'll all want to run each other's benchmarks. So I think it's just a matter of time before the individual benchmarks for an implementation become some sort of main benchmark.

This is actually one thing that we've kind of addressed with just unit testing and feature testing. We are now running Ruby's tests; Rubicon, the old Ruby test stuff that was written to develop the Pickaxe book; the ruby_test suite; BFTS; Rails unit tests; everything that we can find, just to have some sort of comprehensive test suite. And it's not coherent what's actually happening, and stuff's getting retested, and we're testing things multiple times across different sets of libraries, but that is kind of our complete test suite. And as everybody develops their own independent benchmarks, we can start pulling them together into a Ruby Bench project and maybe start getting some commonality there.

There is actually also, another advertisement here, the Ruby test project on RubyForge, which is an attempt to try and pull a lot of those test libraries together. We've contributed a whole bunch of JRuby's tests to it; we're cleaning up some of the old Rubicon stuff; we were going to have BFTS, it's kind of an external link now at this point. We're trying to have this common group of people, like implementers, and like people who really like writing
tests, whoever they are, actually contribute to the same pool of tests. And a Ruby Bench project, it's come up a couple times, and it's probably the right time for it, since performance is foremost on most people's minds now. I'd say, spec or no, it's impossible to actually work on an implementation without having a reasonably large body of tests. I would say, like, perhaps a conformance test suite would be more useful than a benchmark test suite, to be honest, right? Those are the interesting corner cases to get right.

I can just add to that, since I was involved in part of some early testing stuff, and we've talked about doing some combination: we're also looking at using the Fire Brigade project that Eric Hodel and Zen Spider have put together. If you don't know what it is, it's essentially a mechanism whereby anybody can download, build, and test all of the gems, or any of the gems, and submit pass/fail information, runtime or running-speed information, etc. The idea at this point is that we're going to take several of the very popular gems and use those as a common set of tests, both benchmarking and conformance tests, and we can go back and help the gem writers make sure that they have very good coverage of those most popular gems. So, that reminded me, because we were talking about specs and tests and stuff: at RubyConf in Denver there was discussion about, well, don't we have a spec really?
I mean, isn't the current body of work the spec? Shouldn't what currently exists in the wild really encompass the spec? Now that I think about it, that's something we might get back into more. If you go through Rails, and you go through all the most popular gems, and you go through the whole standard library, there's an argument to be made for saying that whatever is in those things is the spec, and to go outside of that you need some real proof. I'm mainly saying that so I remember to do it later.

But it's quarter after; I think we can handle one short question, and then we'll break and be back at 2:30. Any short questions? Go for it.

So one thing I've always wondered, and I've never done anything like this so I have no idea, but why not generate C? Are there obstacles in the design of the language to doing that? Because other languages have taken that approach, and you get speed and interoperability overnight with everything.

So our goal is actually partially to that end. The original prototype for Rubinius was in Ruby, so it just ran this new Ruby interpreter on the existing one. It was incredibly slow, but it was really about formalizing the idea and making sure it worked; if I couldn't get the prototype written, I wasn't going to start writing a whole bunch of C code. And we've actually started down that road: I wrote a tool, which we're currently not using, that generates C code from a subset of Ruby, and I know that Ryan and Eric have done the same. The question is whether you can do it not just for a well-behaved subset but for the entire set of Ruby code. You sort of can; if you were to take the model of, say, generating Ruby so it looks like what Objective-C looks like, and move from there, you could probably get pretty far. But there are a lot of things that just don't map, and given the amount of time and effort involved, there have been two or three projects to that end so far that have mostly stalled in the water.

The body of work and the amount of dynamism is really just too big to fit within a normal C runtime. What happens is that you start to do it, and then you're like, oh, okay, we need some way of adding all these methods at runtime, okay, let's do that; and oh, I'd really like to be able to save this thing, okay, let me just save this; and a year down the road you've basically got 1.8 reimplemented.

I think the short answer is static versus dynamic: it's a completely different feature set. It's not that it's impossible; you can write any of these dynamic languages in C, but you really end up with a virtual machine runtime that does interpretation. You can't do a direct mapping across. So the answer to the question is basically: yes, and it's called MRI.

I don't know, I wouldn't go that far, right? Because look at what IronPython today compiles down to: static assemblies. You could take that set of static assemblies, and the libraries they depend on, and run them through the ahead-of-time compiler that's inside of the CLR, so you could essentially generate x86 binaries of these things. So we're not going to go to C, but straight into x86 code, let's say. And if you think of C as portable assembly language, why not, right? But what ends up happening is, and Charlie talked about this earlier today, there are cases where, if you sat down and said, okay, I'm going to write a program that takes this Ruby code in and outputs C, and I'm going to write a big C utility library to augment whatever functionality is missing in C, then a lot of times, if you do that ahead of time (AOT), you're going to end up with it being a lot slower than if you were to exploit the fact that you're dynamic, that you can learn from multiple runs of the program, and actually do JIT. So there's nothing new under the sun here.
People have tried doing this before; they've done it with a lot of other languages, and what a lot of them ended up doing was saying: you know what, this is kind of slow if we do it ahead of time, and we've got all this great, really rich information at runtime, so how can we just compile at runtime? And then we're back to where we are today.

As for profile-guided optimizations, they're one of these things that people have paid a lot of lip service to, but to my knowledge there aren't any significant commercial implementations of anything that make significant use of profile-guided optimizations. There's a variety of researchy tools for doing basic-block rearrangement and memory layout and that kind of thing, to try to fit adjacent chunks of code into the cache lines, but even with all the brain power being expended on this, it still hasn't reached a commercially produced thing that people use. Not to say the idea is without merit; it's just a much harder thing to really get going than it appears to be.

I guess for us: I don't want to implement or wire in a garbage collector. I don't want to implement or wire in a virtual machine, a register machine, or whatever else. I don't want to implement the boundary between the libraries. All that stuff is basically there in the JVM, and hundreds of people have put time in on it. I'll just make our implementation as close to that as possible and let them worry about the issues of garbage collection and threading and network I/O and everything else: push the problem off onto people who are writing general-purpose VMs designed to run a lot of different things, and just concentrate on how I can make Ruby work as well as possible on that platform.

Just to be clear, though: when I said ahead-of-time compilation, that's not compiling down to native x86 code that runs without the CLR installed on the machine. This is still x86 code that runs managed under the CLR, that still uses the CLR GC, well, not the JIT, but the CLR GC, and it potentially uses the utility library as well, depending on what you're using. I don't know what more I can say.

What a wonderful note. Okay, that gives us about 10 minutes before we need to get back. Thank you guys for coming out and taking some time.
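The panel's point about why Ruby resists direct ahead-of-time translation to C can be illustrated with a short sketch. This is a hypothetical example, not code from any of the implementations discussed; the class and method names are made up. It shows two everyday Ruby behaviors, reopening a class to redefine a method, and defining a method at runtime from a block, that mean a call site's target can't be resolved once and frozen into static C code:

```ruby
# Ruby classes are open: methods can be redefined while the program
# runs, so an AOT compiler can't bind a call site to one C function.
class Greeter
  def greet
    "hello"
  end
end

g = Greeter.new
before = g.greet          # dispatches to the first definition

# Reopen the class and redefine the method at runtime. A compiler
# that statically bound g.greet above would now call the wrong body.
class Greeter
  def greet
    "bonjour"
  end
end

after = g.greet           # same object, same call site, new behavior

# Methods can also be conjured dynamically, with no source-level
# definition for a C translator to see at compile time.
Greeter.class_eval do
  define_method(:shout) { greet.upcase }
end

[before, after, g.shout]  # => ["hello", "bonjour", "BONJOUR"]
```

This is the "year down the road you've got 1.8 reimplemented" trap in miniature: supporting these features from generated C requires a method table that can change at runtime, which is to say, an interpreter-style VM runtime rather than direct mapping.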