I'm Chris Seaton, and this is a talk about Ruby's C extension problem and how we're solving it. I work for Oracle, on a new implementation of Ruby called JRuby+Truffle. I'll talk a bit about what JRuby+Truffle is and our progress on it, but the main thing I want to talk about is the C extension problem and why we need to solve it. Oracle wants you to know this is just a research project, so you shouldn't buy anything from Oracle based on this being a real product you can use. It's just research.

So lots of people want to make Ruby faster. Lots of people run applications and see them not running as fast as they'd like. Maybe they're moving to other languages because Ruby isn't as fast as they'd like. MRI, the main Ruby project, has tried a tracing JIT in the past, and they're now trying a deoptimisation engine. They plan to make Ruby three times faster by Ruby 3.0 if they can. JRuby has always been trying to make Ruby faster: they run on the JVM and use the optimisations that gives you. Rubinius uses LLVM to make Ruby faster; it's a JIT written in C++. AppFolio have hired a Ruby fellow to make Ruby faster. In the past — some of you may be newer to the Ruby community, so you may not remember — there was a project called MagLev, another alternative implementation of Ruby aimed at making it faster. And even IBM are now working on this with OMR, which is another plan to make MRI faster. These are all projects applying optimisations, with new ideas about how to represent Ruby programs and make them faster.

The traditional way to make Ruby faster — the one that works today and that people have been using for years — is C extensions. The idea is that you have a Ruby program running on the Ruby interpreter, and you can write an extension to the Ruby interpreter in C. You compile that with a C compiler like you normally would, and you get a binary library which you can plug into Ruby.
This effectively extends the Ruby interpreter and gives you new methods that appear as if they were written in the core library. You can use those in your Ruby program, and they're about as fast as methods written in the core library, which historically has been pretty good performance. This has been an effective way to increase the performance of Ruby.

I'll give you an example of what that looks like. We have here a clamp routine, which clamps a number between a minimum and a maximum. It comes from some real code, a library called psd.rb, for processing Photoshop files. This is slow, and it's slow because it does lots of things: it creates an array of the minimum, the number, and the maximum; it sorts it, which creates another array; and then it indexes it to take the middle element. That's an effective way of clamping a number between two values, but it's slow. What psd.rb provides is a C extension called psd_native, where they write it in C instead. It's a C function; it makes the self parameter explicit; it converts the parameters from Ruby numbers to C numbers; and then it uses some simple C logic to work out the value clamped between the two. That's faster because it doesn't do any allocation and doesn't run a sort routine. So that's an effective way to make this faster.

But C extensions hold us back. They've been an effective way to increase the performance of Ruby so far, but there are some really key problems with them. This is the model people have of C extensions in their head: there's the Ruby interpreter, there's the C extension API, and then there's their C extension. They assume that the communication between the two is nice and ordered and goes through a proper interface defined in ruby.h. They assume that other implementations can implement that interface, and that if you've got an API, you can just swap out what's underneath it.
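The clamp routine being described can be sketched in Ruby — this approximates the psd.rb code from the slides; the exact source may differ:

```ruby
# the slow pure-Ruby clamp: builds an array, sorts it (allocating another
# array), then takes the middle element
def clamp(num, min, max)
  [min, num, max].sort[1]
end

# what the C extension computes, written as plain comparisons:
# no allocation, no sort
def clamp_fast(num, min, max)
  if num < min
    min
  elsif num > max
    max
  else
    num
  end
end

clamp(300, 0, 255)       # => 255
clamp_fast(300, 0, 255)  # => 255
```

Both produce the same answer; the difference is purely in how much work (and garbage) each one generates per call.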
They think JRuby should be able to implement it, and Rubinius should be able to implement it — Rubinius is written in C++, which is similar to C anyway, so it really seems like it should be able to just plug in. The bad news is that isn't how it works in practice. There is no Ruby API. All there is is the entire internals of Ruby dumped out into a header file. You can use any part of the internals of Ruby, poke around, and do anything. In reality it's much more like this: the C extension has thousands upon thousands of ways to reach right into the internals of MRI. It's painfully clear how MRI works when you use the C extension API, and that's obviously a bad thing. When other implementations of Ruby try to implement that C extension API, they've got all these things coming at them from all angles, wanting to do all sorts of stuff, and they've got no idea how to hook that up with the extensions that are demanding so much of them.

This isn't just a problem for alternative implementations of Ruby. It isn't a case of the alternative implementations moaning and saying that everyone should change to suit them, because MRI is also unable to develop, unable to become faster, because of these restrictions. As they try to make MRI faster for Ruby 3, they're going to find that when they have good ideas about things they want to change, the C extension API will hold them back, because people are reaching in and using things that they want to remove or change.

I'll give you some concrete examples of things which reach in and cause problems. This is from the OpenSSL C extension, from MRI's own code base — it's a C extension which ships with MRI, so it should really be best practice, but it isn't. We have this macro here, RSTRING_PTR, which gives you the C pointer to the character data within a string.
This is used by OpenSSL, for example, to take the pointer to your password and pass it to a native function in the OpenSSL API to do something with it. This makes sense in MRI, because in MRI every string has a character pointer, implemented in C, holding that string's data. But other implementations of Ruby may not want to have a real character pointer. In JRuby, they have a Java byte array, which isn't the same thing as a character pointer at all. And as we add more and more optimisations, it becomes even more problematic — I'll show an example later where we use an implementation of strings which isn't even a linear sequence of data. So getting that character pointer reaches right in and makes loads of assumptions about how the data structure is implemented.

The same is done for arrays: you can get the internal pointer to the values in an array, and the PSD extension uses this — for example, it gets the native array which represents the pixels in an image and then processes them. This means you're restricted to representing your Ruby arrays as a linear sequence of heavyweight Ruby values, even for numbers. I would like to represent arrays of normal numbers as simple, compact arrays of machine numbers, but we can't if we implement this C extension API directly.

When I say the C extension API exposes all the internals of Ruby, I even mean it exposes the internal fields of the structures which represent Ruby objects. So this is RData — Data is a class in the core library — and it inherits from the Ruby basic object structure, it has fields for any data you want to store in there, and it even has C function callbacks for marking that data and freeing it, which the garbage collector uses. And then you have macros to access these fields.
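To make the array-representation point concrete: the internal array pointer forces every element to be a full boxed Ruby value, whereas a compact native layout stores raw machine numbers. A rough Ruby analogy of the compact layout, using Array#pack:

```ruby
boxed  = [1, 2, 3]         # each element is a full Ruby object reference
packed = boxed.pack('l*')  # raw 32-bit native ints, nothing boxed

packed.bytesize            # => 12 (three 4-byte machine integers)
packed.unpack('l*')        # => [1, 2, 3]
```

An optimising implementation would like to keep numeric arrays in something like the packed form internally — but handing out a pointer to boxed values, as the C API requires, rules that out.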
These macros were an effort to make the C extension API better: the thought was that if access to these fields is wrapped up as macros, then the macros can be implemented in different ways. But in this case the macro is actually used on the left-hand side of an assignment, so it needs to evaluate to something assignable — so that hasn't helped us much.

And it's not just about being able to implement the API. The C extension API makes C extensions slow. When you call foo.to_s in Ruby, conceptually you look up the method to call from that name each time: you go through all the modules involved and see which method to call. In reality, what we do is cache what we found last time and use that the next time — it's transparent to you. But we can only do that in Ruby, because we control how execution happens. In C, when you call rb_funcall to call a method, there's nowhere to store a cache like that. So a method call made from a C extension is actually slower than a method call made in Ruby. Ages ago, Ruby didn't have things like inline caches — it worked in quite a different way — so this wasn't a problem in the past, but now it is.

And in terms of optimising — applying lots of clever computer science optimisations to code — C extensions are a black box. If I look, as a compiler, at code like an add method which adds two numbers together, and we simply call it with constants, my optimiser can see that the call reduces to the value 16 — given some assumptions about things not being monkey-patched and so on, which we can get around using other techniques I've talked about in the past. But the C code is compiled to native code ahead of time. The source code is long gone, and we can't inspect the compiled machine code in any meaningful way. So I can't constant-fold add(14, 2) to a value if it's in native code.
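The inline caching described above — remember the last lookup at the call site and reuse it while the receiver's class stays the same — can be sketched in plain Ruby. This is a toy model of the idea, not how any VM actually implements it:

```ruby
# a call site for a method name, caching the last method lookup
class CallSite
  def initialize(name)
    @name = name
  end

  def call(receiver, *args)
    klass = receiver.class
    if klass != @cached_class                  # cache miss: do the slow lookup
      @cached_class  = klass
      @cached_method = klass.instance_method(@name)
    end
    @cached_method.bind(receiver).call(*args)  # cache hit: reuse the lookup
  end
end

site = CallSite.new(:to_s)
site.call(42)  # slow path: looks up the method and caches it => "42"
site.call(43)  # fast path: same class, cached lookup reused   => "43"
```

A call made through rb_funcall has no per-call-site state like @cached_class to store, which is why every C-side call pays for the full lookup.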
So again, this makes the native code slower than the Ruby code once we have a powerful optimising compiler — and MRI wants to add one, so they're going to hit this boundary as well.

Previous solutions to the C extension problem. People do know this is a problem, and they do want to solve it. Denial: this was the first proposed solution. FFI and Fiddle are two libraries which allow you to call C functions directly from Ruby. Instead of writing C code which calls them, you simply call them directly from Ruby. These are effective ways to write C extensions. The problem is that there are around 2.1 billion lines of code in the Ruby gem repositories, and half a billion of that is C extensions. It would be nice if people used FFI to write their C extensions, but they don't, so there's little point arguing about it. We want to run people's code today, so we need to run C extensions. There's no point going up to people and telling them they're wrong and should have used FFI. Even the C extensions inside MRI don't use FFI. This is what FFI looks like: you simply say, I'm using this library, I want this function, these are its types — and it magically appears. But if people aren't doing it, they aren't doing it.

Bargaining: we can attempt to implement the C extension API as best as possible alongside our optimisations. This generally involves a lot of copying. If we have our string represented in a really clever way internally, when we want to expose it to the C extension we can copy it all out to a big C array. The problem is that it's an unbounded cost: your string could be gigabytes in size, and if you copy the whole thing each time you cross to C and back again, everything grinds to a halt. JRuby used this approach in the past, and Rubinius still uses it today. JRuby, when I tried it, only ran about 60% of the C extensions I was interested in.
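For reference, the call-C-directly style that FFI and Fiddle offer looks roughly like this, using Fiddle from the standard library (here binding floor from the C maths library; FFI is similar in spirit):

```ruby
require 'fiddle'

# nil searches symbols already linked into the Ruby process
# (the C maths library is normally among them)
handle = Fiddle.dlopen(nil)

floor = Fiddle::Function.new(
  handle['floor'],        # address of the C function
  [Fiddle::TYPE_DOUBLE],  # argument types
  Fiddle::TYPE_DOUBLE     # return type
)

floor.call(3.7)  # => 3.0
```

You declare the library, the function, and its types, and the function "magically appears" as a Ruby callable — no C compiler, no ruby.h.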
Rubinius ran 90%, which is better. But the worst thing was that when extensions didn't work, there was no error telling me they were incompatible. They just ground to a halt — still making progress, but at a really, really slow pace because of the copying. The set of things they decided they needed to copy grew and grew, and the extensions didn't really work, and there was no clear failure point. So that was a limitation.

We can also try to improve the C extension API in MRI over time. JavaScript's extension API, as in V8, and Java's native extension API don't really have these problems, because they were designed with knowledge of this problem ahead of time — they are better-designed APIs which don't expose internals. There is steady progress towards this in MRI, and it has helped. But, as I say, even OpenSSL doesn't use those better interfaces. If MRI aren't going to do it themselves, we can't really tell anyone else to. The C extension documentation itself tells you: don't touch pointers directly, don't use RARRAY_PTR, don't use RSTRING_PTR. If people didn't, there wouldn't be so much of a problem.

Depression: JRuby unfortunately had to give up on their C extension work. They had someone really clever work on it for quite a long time, and he managed to get it working. But, as I said, it had this problem where things would generally grind to a halt, and in the end they decided they unfortunately didn't have the resources to maintain it. The original developer moved on, and they decided to remove it entirely. Maybe it will return in the future — they could try using the same approach as we are in JRuby+Truffle.

And acceptance: JRuby encourages people to write Java extensions instead of C extensions, which is a technique that works fine. But, as I said, if people aren't writing FFI, we can't make them write Java extensions either. We could also try to optimise Ruby while keeping the internals the same.
So IBM's OMR adds a new GC and JIT to MRI while keeping support for C extensions. But the techniques they can use are therefore very limited, and the performance increases we can expect from it are therefore much more modest.

Interlude: JRuby+Truffle. I'll give an introduction to our project and how it works. There's already an implementation of Ruby which works on the JVM, called JRuby. But the JVM is a bit of a black box, which makes their work very difficult. When they want to optimise code, what they can do is pour bytecode into the top of the JVM — the best bytecode they can produce, but they just pour it in at the top. Somewhere within the JVM there's a JIT compiler, and it's an excellent JIT compiler, but the route to it can sometimes be quite tortuous, especially for the bytecode emitted from JRuby, which isn't always the same shape as bytecode compiled from Java. Often it fails to reach the JIT, or when it does reach the JIT it doesn't quite do what they'd like with it.

Our big idea at Oracle is to take the JIT out of the JVM, rewrite it in Java, and expose it as a Java library. That means you can talk back and forth with it, and you can tell it much more precisely what you want it to do. That's quite tricky to do directly, so we wrote a framework on top of it called Truffle, which helps you write languages and talks to Graal on your behalf. We took code from MRI, we took code from JRuby, and we took some code from Rubinius, and we wrote a new implementation of Ruby, based on JRuby, using Truffle as an API, that works on top of the GraalVM — which is simply the JVM combined with the Graal compiler. We're part of the JRuby repository and part of the JRuby project, and we are part of their releases today.

The way Truffle works is that if we take some expression like a + b * c, we can express it as a tree. The end-of-day keynote yesterday talked about how we express code as an AST, which is a data structure like this.
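An AST like that, with a Truffle-style execute method on each node, can be sketched in a few lines of Ruby (illustrative only — Truffle nodes are Java classes with much more machinery):

```ruby
# each node knows how to execute itself given an environment of variable values
Add = Struct.new(:left, :right) do
  def execute(env)
    left.execute(env) + right.execute(env)
  end
end

Mul = Struct.new(:left, :right) do
  def execute(env)
    left.execute(env) * right.execute(env)
  end
end

Var = Struct.new(:name) do
  def execute(env)
    env.fetch(name)
  end
end

# the tree for: a + b * c
ast = Add.new(Var.new(:a), Mul.new(Var.new(:b), Var.new(:c)))
ast.execute(a: 1, b: 2, c: 3)  # => 7
```

Interpreting means calling execute on the root; compiling, in the Truffle model, means partially evaluating all those execute methods together into one unit of machine code.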
What we do in Truffle is take that AST and compile it down to a single unit that includes all the logic involved in executing the tree. Then we have a JIT compiler which takes it and turns it into a graph which looks like this — this is an actual screenshot from compiling that code using our tools — and then we produce optimised machine code from the graph. This is some x86-64 machine code that multiplies two numbers together and then adds another number. Notice how the Ruby semantics are handled: Ruby has overflow into Bignum, and we handle that by jumping on overflow. If the multiplication overflows, we jump off and do something else; if the add overflows, we jump off and do something else; but if neither overflows, we simply keep to this linear sequence of instructions. This code is almost as good as you'd get from a C compiler. The only extra things are the jo instructions, and your processor is very clever and will actually fuse the jo with the preceding arithmetic, so in terms of cycle counts it's almost as if they're not there. So from Ruby code, even though it has things like monkey patching and overflow, we can actually produce machine code that's almost as good as C.

Going back to C extensions: this is our radical solution. In the current model, as I said, we've got Ruby running on a Ruby interpreter, but C extensions are compiled separately and then plugged into the Ruby interpreter. Our idea is simple: we're going to take the C code and interpret it, using a C interpreter. We already have a Ruby interpreter — let's write a C interpreter, run the C code on top of it, and then, when we want to change the way Ruby works, we simply change the way our C interpreter works to match. It's slightly more complicated in reality: we don't want to deal with all the complexity of the preprocessor and type checking and so on.
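To recap the overflow semantics those jo instructions are guarding: Ruby integer arithmetic never wraps, it silently promotes to arbitrary precision, so the compiled fast path must check for overflow on every operation:

```ruby
a = 2**62 - 1  # fits in MRI's tagged machine-word integer representation
b = a + 1      # one more would overflow the tagged range...
b == 2**62     # => true — still exact; the VM switched representation
               # behind the scenes (Fixnum to Bignum, at the time of this talk)

c = b * b      # keeps growing with no wraparound
c == 2**124    # => true
```

The machine code stays on the linear fast path as long as values fit in a machine word, and the jump-on-overflow is the escape hatch into the big-number code.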
So what we actually do is take your C extension and compile it using Clang, the LLVM C compiler, and that produces an intermediate representation — a really simple version of C called LLVM IR — and then we interpret that. The other benefit is that if you have other languages — Go, Rust, C++, Objective-C — then as long as they can compile to LLVM's intermediate representation, we can run them on top of the same interpreter.

If we take a real C extension, such as the clamp example we had earlier, it compiles to IR which looks like this. Now, that looks really complex, but let me break it down for you. I'll zoom in on just the bit which does the logic. It still looks complicated, but if I write it as pseudo-code in Ruby, you can see that this icmp actually just does a compare, the branch does something like an if, and then there's a goto. Ruby doesn't have gotos, obviously, but assume they're there; and then we have a compare-less-than, which is here. So really we have a language which is very simple. We've already written an interpreter that works really well for the whole of Ruby; if you imagine writing one for this much simpler little pretend language, that's actually quite easy to do. You probably think of C as being a really complicated, deep, highly technical language. It's used for highly technical stuff, because it exposes the machine architecture most of the time, but in terms of a language it's really simple. It's much easier to write an interpreter for C, or for LLVM IR, than it is for Ruby.

Then the magic starts, because we're optimising these two languages using the same technology. We produce these trees for both the Ruby code and the C code, and when they get to Truffle and Graal, our optimising system, they don't care which language they came from. More than that: they don't know which language they came from. They simply ignore it.
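That pseudo-Ruby rendering of the IR — compares, branches, and gotos — might look like this, with the gotos modelled as a case over basic-block labels (the real IR's block names and ordering differ):

```ruby
# pseudo-Ruby for the clamp IR: icmp becomes a comparison, br becomes an if,
# and gotos between basic blocks become transitions between case labels
def clamp_ir(num, min, max)
  block = :entry
  loop do
    case block
    when :entry     then block = (num < min) ? :ret_min : :check_max  # icmp + br
    when :check_max then block = (num > max) ? :ret_max : :ret_num    # icmp + br
    when :ret_min   then return min
    when :ret_max   then return max
    when :ret_num   then return num
    end
  end
end

clamp_ir(300, 0, 255)  # => 255
```

Everything the IR does is this simple: compare, branch, jump, return. An interpreter for a language like this is far easier to write than one for Ruby itself.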
So we represent the Ruby code and the C code in the same way, which means we can actually optimise them together. If you have a Ruby method that calls a C function in your C extension, which calls back into Ruby, we can inline those together and optimise them together, and any barriers between them all disappear.

Some interesting problems we've found, and their solutions. Because we're implementing the C extension API from scratch, we decided we could actually do most of it in Ruby. I think Rubinius does this to some extent, but we're doing it for most of the API. If we have a C extension API function like FIX2INT, which takes a Fixnum and turns it into a real C integer, we wrote the entry point — C code that simply says "invoke via Truffle" — and then we wrote the implementation of it in Ruby. So every time you call a simple function like that from the C extension API, it goes back into Ruby. And this is where everything works out, because in Ruby we can do whatever we like: if we've written a clever way to access strings, we can simply reuse that clever way to access strings from our C extension code.

Taking strings as a concrete example: as I said, you can take a pointer to a string, index it, and read an exact character — and by the time you've got that character pointer, it's been forgotten that it came from a string; it's just raw C data. We represent strings using a technique called ropes — there was a talk about this by Kevin Menard at RubyKaigi. The idea is that if you concatenate two strings, let's not copy anything; let's just remember that they were concatenated. So if you concatenate "Hello " and "RubyConf", we represent the result as two separate arrays of characters. The LLVM IR to read a character simply says getelementptr, and we can implement getelementptr however we like, because we're interpreting it.
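The rope idea can be sketched in a few lines of Ruby — concatenation allocates a small node instead of copying, and indexing walks the tree. A minimal sketch, nothing like the full implementation:

```ruby
# a concatenation rope: remembers its two halves instead of copying them
class Rope
  def initialize(left, right)
    @left, @right = left, right  # no copying at concatenation time
  end

  def length
    @left.length + @right.length
  end

  # what an indexing getelementptr gets redirected to:
  # walk the tree to find the character
  def [](i)
    if i < @left.length
      @left[i]
    else
      @right[i - @left.length]
    end
  end
end

s = Rope.new("Hello ", "RubyConf")
s.length  # => 14
s[6]      # => "R"
```

Reading character 6 never touches a flat buffer — there isn't one — yet from the outside it behaves exactly like indexing into contiguous data.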
So we can say: if it's a string, go back and call the actual string indexing routine, and that routine can walk through this data structure and find the particular character. We solved the problem of access to the raw data by giving you the illusion of the raw data. The C extension believes it has a character pointer to a string. What it really has is an object that says: this is the Ruby object, and this is where you want to index it — and it goes off and reads that. There's no way of detecting this from C. In your C extension code you think you've got a raw pointer, but it's an illusion; you don't have a number. The only way to break the illusion would be to try to print the pointer out — but there is no address; there is no number.

Results. This is from an earlier piece of work we did — we actually started off with a hand-written C interpreter before we tried the LLVM approach, so these results are slightly old; we haven't quite got to this stage yet with our new implementation using LLVM. This shows the performance of benchmarks from ChunkyPNG and OilyPNG (its native version), and PSD.rb and psd_native (its native version): how fast the code is compared to the pure Ruby version. Each has a pure Ruby version of the same code and a C extension version. MRI with the C extension gets you about 10 times faster than running the pure Ruby code. That's a successful result: the C extensions were making things faster. When we run Rubinius with the C extension, it's only around four and a half times faster. And when we ran JRuby with the C extension, it was even slower. As I said, JRuby doesn't use this approach for C extensions any more, but this is still how Rubinius expects you to run C extensions — and the performance is lower than MRI with the C extension, which isn't what Rubinius promises and not what people use Rubinius for.
When we run JRuby+Truffle with the C extension running on our C interpreter, it's actually three times faster than MRI running the C extension natively. So we're running the Ruby code and the C code in an interpreter — with a JIT, of course — three times faster than the native code. That sounds crazy, because people think you can't possibly beat native code. It's because, as I said, the native code is a black box: MRI can't do anything to optimise access into it, and method calls made from C are uncached. When we apply our techniques to the C code as well, we get much better performance. We wanted to pin down exactly where the benefit was coming from, so we tried turning off inlining between C and Ruby, and that's the result you see here, which is only just faster than MRI with the C extension. So a lot of it is down to inlining between the two languages: if you have a hot loop, you can take the C code and the Ruby code together and optimise them as one unit. This is from a paper we presented last year at the Modularity conference.

There are some limitations, it's true. You do need the source code of your C extension. I'm not sure this is a problem for anyone in reality — I'm not sure there are any closed-source C extensions anyone is using, and if there are no such C extensions, we don't need to worry about them. A C extension in turn using a closed-source library which itself doesn't use the C API is fine. So if you have, say, a proprietary database and a driver for it, as long as you have the source code for the C extension part, that's fine.

You also can't store pointers to Ruby objects in native code. If you're using a compiled library like libssl, you can't give that compiled library a reference to a Ruby object. This is sometimes tricky, because we can't hand Ruby data to native code: the Ruby object may not really exist in that form, and we want to be able to move it in the GC.
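One way to bridge that gap, sketched in Ruby (the names here are illustrative, not the real API): keep a table of objects on the Ruby side and hand native code an opaque integer key instead of a pointer, so the object stays findable even if the GC moves it.

```ruby
# hypothetical handle table: native code only ever sees stable integer keys
module Handles
  @table   = {}
  @next_id = 0

  def self.to_native(obj)
    @next_id += 1
    @table[@next_id] = obj  # keeps the object reachable; native code
    @next_id                # stores this integer, never a raw pointer
  end

  def self.from_native(id)
    @table.fetch(id)        # back from the opaque handle to the Ruby object
  end
end

key = Handles.to_native("secret")
Handles.from_native(key)  # => "secret"
```

The integer is meaningless to the native library except as a token to hand back later, which is exactly the property you want when objects can move underneath it.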
So what we've written is a little API for turning an object into a native handle and back again. If you want to store some data in OpenSSL, you need to convert it to a native handle. This is what Rubinius and JRuby were sort of doing anyway, but they were doing it for all data; we're doing it in just a targeted couple of places. This matters because we don't want people to have to modify their C extensions. It's okay for OpenSSL, because that ships with Ruby, but we want to solve this problem for other C extensions too, so we're still working on that.

By the way, I criticised FFI because nobody was using it, but it probably is still the best way to write your C extensions if you can, because then they're supported across all the implementations of Ruby — although we don't actually implement FFI in JRuby+Truffle yet. That would be a great internship project if anyone's interested. I also think it's a great idea to write a pure-Ruby baseline version. This is what PSD.rb and ChunkyPNG do, and it means that maybe the pure Ruby version will actually run fast enough anyway on JRuby+Truffle or Ruby 3.0.

Java extensions. I said earlier that the way JRuby has solved the C extension problem is by encouraging people to write Java extensions. Ironically, they have exactly the same problem as C extensions: there is no well-defined JRuby extension API; it's just all their internals, exposed. They recreated for themselves the same problem they have with C. So we have trouble hooking all those things up, and at the moment we don't support Java extensions — but we could do exactly the same thing. We could take your Java extension written for JRuby, compiled to Java bytecode, write a Java bytecode interpreter, and provide the same kinds of abstractions and illusions to make it work on JRuby+Truffle. This idea in general could be a direction as well.
Evan Phoenix, at RubyKaigi last year, talked about the idea of storing the LLVM IR of the MRI implementation code and using it to JIT at runtime. That's a similar approach to what we're doing.

That's the end of my talk about C extensions, but here's a quick status update on JRuby+Truffle in general. We're running classic research benchmarks around 10 to 25 times faster than MRI, and around 10 times faster than JRuby. Truffle is in red here, JRuby with invokedynamic in dark green, and MRI in orange. We're not much faster if you're memory-allocation bound, or if you're bound on the performance of something like Bignum, because those aren't the things we've particularly improved. But if you're running highly computational code, like an n-body simulation, it's around 40 times faster. I understand most of you aren't doing n-body simulations — that's okay. The MRI people focusing on making Ruby three times faster have a benchmark they call Optcarrot; it's a NES emulator, and it's what they're going to try to make faster for Ruby 3.0. We run it around nine times faster than MRI. JRuby with invokedynamic runs it about twice as fast as MRI at the moment. All the implementations are looking at Optcarrot right now, so these numbers will probably move around. But we run it fast enough: nine times faster is enough to make it smooth and playable, where it wasn't very smooth or playable on MRI.

In terms of completeness, we pass 99% of the language specs — it was 100%, and then people helpfully added some more. We pass 96% of the core library specs and 78% of the standard library specs, but that last figure is a bit misleading because the coverage is uneven. We're also now running the Rails tests: we support 100% of ActiveSupport and ActiveModel, most of ActionPack, some of Railties, and some of the other parts, like ActiveRecord and ActionView. Basic functionality does work there.
So we are now able to run Rails: we have a basic blog application, using Rails, that we wrote ourselves — simple enough, but we can run it and it works. It's been several years, three years or so, and we are now running Rails. So why can't we run any real applications? If we've got that far, if we support such a high proportion of the language specs, why can't we run anything? Well, it's these C extensions; this is the most important thing at the moment. They're still a work in progress: we've got almost no database drivers, OpenSSL doesn't work yet, Nokogiri doesn't work — and those sit at the bottom of the dependency stack for almost every Rails application, and almost any other Ruby application anywhere. Especially for testing: people often use Nokogiri in tests, to look at the DOM and figure things out, and of course we need to be able to run tests to check that things work. So this stops us running almost any application. The specs also don't have perfect coverage: we do very sophisticated optimisations, which means you can't just test that array access works — you need to test that it works with an integer array and a double array and so on — so the specs don't help us there. And we also need to tune performance; there's lots more work there.

You can try this today. If you search for GraalVM — or there's the full URL — you can download a binary tarball that includes absolutely everything you need: our modified JVM, the Graal compiler, and JRuby, and you can try it out like that. If you search GitHub for GraalVM you'll find all our code in one place: the implementation of Ruby, which is just a fork of the JRuby repository, plus the implementation of our C extension support, the JIT compiler, and everything else you'd want, where you can browse the code and see how it works. If you want to find out more, my website lists all the papers and blog posts and articles we write. Find us on the JRuby channel on Freenode, or
Gitter, or tweet me. If you forget everything else, just Google for "JRuby Truffle" and you'll find the relevant stuff. I should say there's a really large team behind this — we're actually the largest Ruby implementation team anywhere in the world, I think. These people work on all sorts of projects; this is the wider team, everyone who's ever been involved. The main people working on Ruby are myself, Kevin Menard, Petr Chalupa, Benoit Daloze and Brandon Fish, and the people working on the C extensions are Manuel Rigger and Matthias Grimmer — so this is all their work as well as mine. Thanks very much. Questions?

The question was: do you need the C source code, but not for the things you link against? Yes — you need the C source code of anything that uses the Ruby C API, but anything else can be a binary. So you can use your database driver or any other libraries; you just need the parts that use the C API as C source code — or LLVM IR, actually, but if you wanted to use that to obfuscate your code, LLVM IR doesn't really obfuscate anything, so you basically need the source code.

The question was: is there any locking going on, because the C code normally doesn't do locking? No, we're not doing any locking. We're following the Rubinius and JRuby approaches to threading: there's just a mutex object you can use yourself, and anything else is uncontrolled. We are doing novel research into a formal memory model for Ruby, which Petr Chalupa is working on and has talked about at a couple of conferences. We're trying to work out what the rules should be and agree them across Ruby implementations; at the moment it's the Wild West — there are no real rules to follow, so as long as stuff works, we're happy.

The question was: when you use native code you have to actually do that copying, to pass data in and out, and real C extensions mostly use native libraries, so do you have to do that a lot? Yes, and we're not sure how big a problem it is. As you say, the PSD and Chunky stuff is
well contained, and that's the best case for us. We're working on OpenSSL at the moment, and it seems to be effective enough there. So all I can say at the moment is that I'm hopeful that for other gems we won't have to do too much copying to pass data out to native code. You can of course, if you find it a problem, get the source code for everything and run it all in the C interpreter — the C interpreter is a fully conformant implementation of C, so it should be able to run everything. With OpenSSL, I decided that for things like timing attacks and installation complexity it was probably best to use the native library. But yes, we're unsure whether it's going to work perfectly for lots of libraries — it's a research project; that's why we're exploring what we can do.

The question was: were the results I showed on warmed-up code? Yes, they're on thoroughly warmed-up code. We use a JIT compiler, so it does need time to warm up and compile everything. JRuby+Truffle is about as fast as Ruby 1.8 when it starts — probably a bit slower, even — and then it gets much faster once it's running. It's a trade-off we're deliberately making: if we spend more time at start-up on optimisation, then things run faster later. But we have a solution for the fast start-up case, where you need that: we're working on an ahead-of-time-compiled binary of JRuby+Truffle that doesn't use a JVM — it's simply a big executable — and it starts really fast, because all the code is already compiled, and then it warms up the same way the normal one does. That's not open source at the moment. We have to make some sort of trade-off in order to speed Ruby up; if we want that, something has to give somewhere — so we trade off a little bit of memory and a little bit of start-up time, and we get the result we want after running for a longer period.

How much memory?
It's hard to quantify, because until we can run a real application we don't know. We can look at the baseline and say it takes something like four times as much memory as JRuby would to run hello world. I've got the numbers, but I can't remember exactly what they are off the top of my head, and I'd be lying if I told you I knew exactly. It's probably unlikely to work well on a 500-megabyte dyno, that's true. But the idea is that if you run one big instance, which can handle lots of clients and share optimised code between them, we think that would be a better trade-off than running lots of MRIs on smaller dynos.

OK — thanks very much.