Hi, everyone. My name is Kevin; some of you I've met before. I work at Oracle Labs, which is a research group within Oracle. In particular, I work on a team that specializes in virtual machine and compiler technologies. I'm here today to talk about TruffleRuby, which is our implementation of the Ruby language, and how we've improved our startup time with a new tool called the Substrate VM. Before I get started, I do need to inform you that what I'm presenting today is research work out of a research group and should not be construed as a product announcement. Please do not buy or sell stock in anything based on what you hear in this talk today.

All right: improving TruffleRuby startup time with the Substrate VM. It's kind of a verbose title — I'm not super creative when it comes to these things — but it is quite descriptive. If you're new to Ruby, or you don't keep track of all the various Ruby implementations out there, you might be wondering what TruffleRuby is. TruffleRuby is, as I mentioned, an alternative implementation of the Ruby programming language. It aims to be compatible with all of your Ruby code while providing new advantages. Compared to other Ruby implementations, it's relatively young; it's about four years old now. I like to think of it as a best-of-breeds approach: we actually pull in code from JRuby, Rubinius, and MRI. Like JRuby, the core of TruffleRuby is written in Java, so there's a natural ability for us to share some code there. Like Rubinius, we want to implement as much of the language as possible in Ruby itself, so we're able to leverage a lot of the work the Rubinius team had previously done in implementing the core library in Ruby. Then we pull in the standard library from MRI, and more recently, we've begun being able to run MRI's C extensions. We actually run MRI's OpenSSL implementation and zlib.

We're currently 97% compatible with the Ruby core, based on the core library specs from the Ruby spec project, and we hit 99% on the Ruby language specs. These test suites are really nice, but they're not as comprehensive as we would like, so we've also spent a fair bit of time testing the top 500 gems out there. Active Support is one that's really popular, so I use that as an example here: we're 99% compatible with it. We don't quite have database drivers yet — that's something we are working on — so we can't run all of Rails yet, but we're closing the compatibility gap quickly.

So TruffleRuby is implemented in Truffle. Truffle is a language toolkit for generating simple and fast runtimes. With Truffle, you basically just build an AST interpreter for your language, and an AST interpreter is about the simplest way you can implement a language. They're very straightforward to develop, they're easy to reason about, and they're easy to debug, but the problem with AST interpreters is they tend to be slow. We fix that by pairing Truffle with another project called Graal, which is our JIT compiler. Graal is a compiler for Java, written in Java, with hooks callable from Java. Truffle is able to use this to call into Graal and optimize these AST interpreters through a process called partial evaluation. This is a big deal, because a lot of languages start with an AST interpreter, then hit a performance wall and find themselves building a language-specific VM — and building a VM is a lot of work, it requires a lot of expertise, and it's hard to get right.
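To make "AST interpreter" a bit more concrete, here's a minimal sketch in Ruby — purely illustrative, not TruffleRuby's actual node classes, which are written in Java. Each language construct becomes a node with an execute method, and running the program is just walking the tree:

    # A hypothetical, minimal AST interpreter: each construct is a node,
    # and evaluating the program is a recursive tree walk.
    class LiteralNode
      def initialize(value)
        @value = value
      end

      def execute
        @value
      end
    end

    class AddNode
      def initialize(left, right)
        @left = left
        @right = right
      end

      def execute
        @left.execute + @right.execute
      end
    end

    # The expression 1 + (2 + 3) as a tree:
    ast = AddNode.new(LiteralNode.new(1),
                      AddNode.new(LiteralNode.new(2), LiteralNode.new(3)))
    puts ast.execute  # => 6

This structure is easy to write and debug, but every operation pays for dynamic dispatch and tree traversal — exactly the slowness that pushes projects toward custom VMs, and that partial evaluation is designed to compile away instead.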
MRI went through this itself: Ruby up through 1.8 was a simple AST interpreter, and Ruby 1.9 introduced the YARV instruction set and virtual machine. So what we want you to do with Truffle is just build your language and stay in the realm of AST interpreters — it's really simple — and we'll take care of the performance with Graal. In addition to that, as a language-building toolkit, Truffle provides features like a debugger, profiling, and general instrumentation, so things that all languages need you get for free out of the box, along with some JIT controls, like inline caching and being able to prevent methods you know aren't going to JIT very well from compiling at all.

And then finally, Truffle has this polyglot feature, and it's a first-class feature in the framework: all languages implemented in Truffle can call into and out of each other, and because they all inherit from the same base node class hierarchy, Truffle nodes from one language and another can be mixed together very easily. Then, when they're submitted for compilation with Graal, we're able to eliminate that cross-language call boundary. So, for instance, you can call JavaScript code from Ruby, and when it gets optimized, there is no performance penalty for calling into JavaScript. Like I said, it's a first-class feature of Truffle, and to enforce that, some of Truffle's functionality is actually implemented as domain-specific languages within Truffle.

So if you have been following TruffleRuby, you might be wondering what we've been up to. Over the course of the last year, we actually spun out of JRuby: we used to ship as an alternative backend in JRuby, and at that time we were called JRuby+Truffle; we're now TruffleRuby. We've begun running C extensions from MRI. Last year, Chris Seaton, who is on our team, gave a talk at RubyConf outlining a blueprint for how we could run MRI's C extensions and some of the work we had been doing there. Since then, we now run OpenSSL, Nokogiri, JSON, and UNF, and we've begun working on some of the database adapters, so this approach is working, and the results are really promising. We now have Java interop, so you can call Java methods on Java objects from Ruby using a nice Ruby syntax; if you've ever used JRuby and its Java interop, it looks very similar. And we've been working on improving our native calls. Ruby has a rich interface for making underlying POSIX calls. Truffle has a new feature called the native function interface, which is provided as one of these DSLs within Truffle, and it provides functionality much like Ruby's FFI for Truffle languages.

So we've been making some really good progress. In the short time the project's been around, we've achieved a high level of compatibility, and performance has been good, but we've had one sticking problem, and it's related to startup time. Applications typically go through three phases: startup, warmup, and then steady state. Startup time is the time the interpreter uses to basically get itself into a state where it's ready to start running your code. Warmup is when it initially starts running your code; at this point the code is cold, so it's going to be slow. The idea is that as it executes multiple times, it should get progressively faster to the point where we call it hot — and thus, warmup.
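You can get a rough feel for warmup with a sketch like this — a hypothetical measurement, not a rigorous benchmark — where, on a JIT-based implementation, the per-batch times for the same code should drop as it gets compiled:

    # Hypothetical warmup observation: time repeated batches of the same
    # hot method; on a JIT-based Ruby, later batches should get faster.
    def work
      (1..1_000).reduce(:+)
    end

    10.times do |i|
      start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      10_000.times { work }
      elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
      puts format("batch %d: %.4fs", i, elapsed)
    end

That batch-to-batch improvement is the warmup phase.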
So if you have a JIT, this is when you'd be profiling the application, submitting things for compilation, and actually compiling them. But even if you don't have a JIT, your application could still have a warmup phase: the operating system is going to do things like populate file system caches, and you'll be populating cache lines in the CPU and things of that nature. And then eventually you hit your steady state. This is where your application spends most of its life. Most applications hit some kind of event loop or something like that, where they'll remain until they basically stop executing; very few applications thrash around from there.

So you could broadly classify applications as two types: those that are long-lived and those that are short-lived. In a long-lived application, you spend most of the time in the steady state. TruffleRuby, like a lot of implementations with a JIT, will have a somewhat slow startup and warmup time and make that trade-off, because it'll generate very fast code for the steady state. The idea is that your application will spend so much time in the steady state that spending that upfront time to generate fast code will more than pay for itself. But a lot of Ruby applications are actually short-lived: we all use IRB or Pry, and we have test suites. In this case, the bulk of your time could actually be in the startup phase. In this graph, the startup time doesn't actually get any longer; it just now accounts for a larger percentage of the overall application's lifecycle. And then you'll get into that warmup phase, and that could be largely wasted work, because you'll spend so little time in the steady state before you exit that you don't really gain any benefit from warming up. So TruffleRuby is optimized for long-lived applications and hasn't spent a whole lot of time optimizing for short-lived ones.

So we wanted to try to improve startup time, and in order to improve something, it's helpful to see what the current status is. I ran a very simple Hello World application, and what we can see is that MRI is hands-down the fastest: it runs that in about 38 milliseconds. JRuby runs it in 1.4 seconds, and TruffleRuby lags behind at 1.7 seconds. Of course, nobody really runs Hello World in production, but I thought that was a nice way to quickly illustrate things. To get a better sense of a real-world use case, I'm turning back to the Ruby spec suite. This is the test suite I mentioned at the beginning that we use to evaluate our language compatibility. What's nice about the Ruby spec suite is that it's modular: it's broken up into different components, like tests for core language features, tests for the standard library, and so on. The idea is that multiple Ruby implementations can pick various subsets of this test suite in order to progressively add functionality. You're going to start off with a new implementation, and it's not going to run all of Ruby, so you pick a subset — one of the components of the spec suite — that you can start running, and evaluate progress that way. What's nice is that this is a good way of testing something that will run on multiple Ruby implementations and not favor one or the other. Another interesting aspect of the spec suite is that it ships with a test runner called MSpec that looks and feels a lot like RSpec: you're going to have matchers, you're going to have .should, and things like that.
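A spec in that style looks something like this — a hypothetical example in the shape the suite uses, with .should-style expectations:

    # A hypothetical spec in the MSpec style used by the Ruby spec suite:
    describe "Array#flatten" do
      it "returns a one-dimensional flattening of the array" do
        [[1, [2, 3]], 4].flatten.should == [1, 2, 3, 4]
      end
    end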
And I'm looking in particular at the language specs. This isn't the largest test suite in the world, but it's not the smallest: it has about 2,100 examples and 3,800 expectations. So I think this is a pretty good proxy for what we can expect when running application-level test suites. MSpec looks a lot like RSpec, and we're running a decent number of examples here. When we run these on the various Ruby implementations, again, MRI is hands-down the fastest: it can do all of those in about a second and a half. JRuby comes in at 33 seconds, and TruffleRuby, again, is at the end at 47.5 seconds. And this is really our problem. We're making great strides in improving our compatibility, but a lot of people that are running test suites and the like discount us out of hand because this part is a bit too slow.

So you might be asking yourself: well, if TruffleRuby is so slow at running these tests, or just in startup time, what do I gain in return? Our advantage has been in peak performance. To evaluate peak performance, I'm turning to the Optcarrot benchmark. If you saw Matz's opening keynote, he presented some numbers from it in the context of MJIT. Optcarrot is a benchmark the core Ruby team is using to evaluate its progress on the Ruby 3x3 initiative. It's basically a Nintendo Entertainment System emulator written in Ruby: it runs NES games and presents a score, which is the number of frames rendered per second. Ruby 2.0 runs it at basically 20 frames per second, so if Ruby 3 hits its 3x speed-up objective, it'll run at 60 frames per second. As an interesting aside, that's actually the frame rate that real NES hardware uses — so coincidences all around. Matz indicated that MJIT can run at 2.8 times what MRI 2.0 can, so they're closing in on that 3x goal. I ran Optcarrot with these same Ruby implementations, and what we can see is that MRI 2.3 runs at about 23 frames per second, JRuby roughly doubles that at 46, and TruffleRuby runs about 8.5 times faster, at 197 frames per second. So we've made this trade-off: our startup time hasn't been as great as we would like, but our peak performance is really, really nice.

Now, I've been presenting this in the guise of short-lived versus long-lived application profiles, but we can make it more of a human problem by considering it as a development versus production issue. You typically run Pry or IRB in development, and you run your test suites in development, and this is where you'll really have your short-lived applications. But in production, you tend to have a long-lived application profile. So balancing between the two can be a bit problematic. I'm actually going to take a step back here and relate some of the experiences I had running JRuby in production, which is something I did before I started working on TruffleRuby. JRuby has some of the same problems: its startup time isn't as nice as MRI's, but it does have a peak performance advantage over MRI. So with teams, we had to decide what we wanted to balance for. One option is to just always use MRI and optimize for developer time, the idea being that if your development team can move quicker, or they're happier, they can deliver more value for your customers. On the other end of the spectrum, you could say: okay, we're going to always optimize for peak performance, so we're going to use JRuby everywhere.
And the idea being that we'll deliver more customer value by having a faster product in production, and we'll just accept that the development team is going to incur some additional costs running tests and things like that. A third option is running a hybrid model. I was actually never able to get this to work, but I'm aware of teams that did. What you would do in this situation is run MRI locally, so you get that fast development time, but deploy JRuby in production, so you get your peak performance there. But there's an inherent risk with that, because MRI and JRuby have different runtimes, and while JRuby has very high compatibility with MRI, in edge cases you may see differing behavior. So if you're deploying to a different environment in production than you run locally, you may have some subtle, hard-to-find bugs there. CI can certainly mitigate a fair bit of this, but it's a risk nonetheless. And then you may actually hit some technical hurdles, because things like the RUBY_ENGINE global will have different values, you could have native extensions that are written in C on one and Java on the other, and so on.

Having experienced that, this is something I was really interested to know if we could handle better when I came to TruffleRuby. And I think we can. So at this point, I'm going to introduce how we're going to do that, with a new project called the Substrate VM. The basic idea is that TruffleRuby, being implemented in Java and Ruby, runs on the JVM, but we can also retarget it to another VM called the Graal VM. If you run on the JVM, you get this AST interpreter, but you don't have our JIT — Graal is not in there. You'll still run through the JVM's JIT, but it won't be as fast. The point, though, is that we can target the JVM and it will be functionally correct. We can also target the Graal VM, which is the JVM packaged up with Graal, and you get that optimizing compiler, which is how we're able to deliver those Optcarrot numbers. But the idea here is to slot in yet another target: the Substrate VM. So the same code base can be used on multiple VMs.

The Substrate VM is really a different kind of beast. It has two core components: an ahead-of-time compiler, also known as the native image generator, and VM services that it links into the application. The ahead-of-time compiler takes a Java application, it takes the JDK, and it takes any additional jars or libraries your application relies on, and it compiles all of that directly to native machine code. The program in this case is the TruffleRuby interpreter. So the ahead-of-time compiler is really treating Java just as if it were C or C++ or Go or what have you. What you get out of this is a static native binary, and the JVM is completely gone. That binary has the Substrate VM linked into it, so you still have garbage collection, hardware abstraction, and the other features you expect from a virtual machine.

To help illustrate the point a bit more, what I have here on the top is — I guess it's a little hard to see — a simple add method in Java: it takes two ints, a and b, and calls Math.addExact on them. If you compile this for the JVM using javac, what you get is the code on the left, which is Java bytecode. If you've ever worked with Java, you've probably noticed these things called .class files.
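If you're more at home in Ruby than Java, MRI has a loose analogue you can poke at: YARV bytecode, inspectable through the MRI-specific RubyVM::InstructionSequence API. This is just an illustrative aside, not part of the Substrate VM pipeline:

    # MRI-only: compile a snippet to YARV bytecode and disassemble it,
    # a rough Ruby-side analogue of looking at javac's output.
    puts RubyVM::InstructionSequence.compile("a = 1 + 2").disasm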
Those .class files contain Java bytecode. You feed them into the JVM, which has a bytecode interpreter, so it actually interprets your class files, and it runs in the interpreter until it determines that something is a hot code path, at which point it submits it for compilation with the JVM's JIT. On the right-hand side, though, what we have is actual machine code: this is the output from the native image generator for that method written in Java.

What the native image generator does is perform a closed-world static analysis over the application. That sounds like a mouthful, but it's actually fairly simple if you break it down. Every Java application has a main method; if you've never done Java or C, it's basically the entry point into an application. This is a bit different from Ruby, where you can just run a script and it starts executing things. And in Java, we have this notion of static methods, static fields, and static initializers, which are roughly equivalent to Ruby class variables, class methods, and code you execute in class bodies — if you open a class in Ruby, it'll just start executing the code in it, so you can run things outside of methods and so on. Java main methods are static, so the analyzer starts there, determines all the classes the program actually uses, and then all the methods used from those classes, and then it throws everything else away. The JDK, which is Java's standard library, is quite massive; we wouldn't want to compile the entire thing into our static image, so we throw away the code we don't use. It's closed-world because at the end of this you no longer have the JVM, so you can't dynamically load classes: everything your application could possibly use needs to be supplied as input to the native image generator. And the analysis needs to be a bit conservative: if you have an interface or an abstract class and the analyzer can't determine concrete subtypes for it at a given call site, it needs to compile in all the classes that could be candidates. You need to be careful with that, because you could inadvertently compile in the whole JDK. We monitor this with every push to the TruffleRuby repository to make sure we don't accidentally pull in stuff we don't need. If you're interested in learning more about how that process works, Christian Wimmer gave a talk at the JVM Language Summit this year where he gets into the nitty-gritty details.

What's interesting for us on the TruffleRuby team is that we can take advantage of the static analysis, because when the image generator loads Java classes, it executes all the static initializer blocks: we can push computation into the ahead-of-time compilation process. As I mentioned, we actually implement a fair bit of Ruby in Ruby. We have core methods implemented in Java, but once we bootstrap the runtime, we then implement, for instance, all of Enumerable in Ruby. The downside of that is that every time you start up the TruffleRuby interpreter, we need to load those files, parse them, and set up that state. And they simply never change — or rather, they change when we update them, but then we would issue a new release of TruffleRuby at that point. So there's a lot of duplicated computation every time you run things. With the ahead-of-time compiler, we can push that computation into the native image generator and do it precisely once, as the sketch below illustrates.
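In Ruby terms, the idea is analogous to this hypothetical sketch: pay for the expensive parse once in a build step, snapshot the result, and have every startup just load the snapshot instead of re-parsing:

    require "ripper"

    source = "def add(a, b); a + b; end"

    # "Build" step, done once ahead of time: parse and snapshot the AST.
    File.binwrite("core.ast", Marshal.dump(Ripper.sexp(source)))

    # "Startup" step, done on every run: load the pre-parsed AST instead
    # of re-reading and re-parsing the source.
    ast = Marshal.load(File.binread("core.ast"))
    p ast.first  # => :program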
Then, when you start up the static binary that is the output from the image generator, all of that is already calculated for you. For instance, we're doing this to pre-parse those core Ruby files: we parse them, we get the AST, and then we essentially store the AST as a memory blob in the binary. When we start up the binary, we just read it back out of memory; we don't have to spend time on file system operations or building the AST. We go a step further and include all the encoding tables. Ruby has support for 110 encodings or so, and each of those has a fair bit of metadata that goes with it. Same thing with transcoding mappings: if you want to convert the encoding of a string, it goes through these transcoding mappings, and these are things that hardly ever change, so we gain a lot by pushing those into the ahead-of-time compilation process. And we pre-construct some common strings — string literals that are used in multiple places — so we don't need to dynamically allocate the bytes for the underlying string.

All right, so what does this gain us? The whole point was to improve our startup time, so let's take a look at the results. As you might recall, this is what we were looking at for Hello World: TruffleRuby was all the way at the end at 1.7 seconds. We run this again on the Substrate VM and it goes down to 100 milliseconds. So we're not quite as fast as MRI, but we're closing the gap. Again, nobody really runs Hello World in production, so let's take a look at that test suite again. We were at 47.5 seconds; on the Substrate VM, we drop to just below seven seconds. Now, MRI still has us beat by multiple times here. But for a development team trying to decide whether they could accept this in development, there's a big difference between waiting a minute for a test suite to complete when MRI can deliver results in a second and a half, and only having to wait an extra five seconds. We want to get as fast as MRI, and we're going to continue to reduce this number, but I think we're now into the realm of acceptability for a lot of teams.

A natural question, though, is whether we sacrificed peak performance. I pitched all of this initially as: we accept slow startup time so that we can have faster steady-state performance. If we look at those Optcarrot numbers again, running on the Substrate VM, we actually do take a bit of a hit: we drop from 197 frames per second to 169. That still makes us about eight times faster than MRI, so this is a pretty good advantage, and probably a decent trade-off to reduce our startup time — but it is a 15% reduction, and there's no inherent reason for it to happen. So what happened? Why did we take a performance hit? Optcarrot is kind of a worst case for this. It decodes opcodes corresponding to instructions for the NES hardware, and then, in a tight loop, it uses the opcode as an index into a dispatch table and uses metaprogramming to dynamically dispatch with send; I've sketched the general shape of that pattern below. So two things are going on here. One, it splats the result of the dispatch table lookup, and this creates a very small array, and the Substrate VM doesn't optimize the creation of small arrays quite as well as the Graal VM does. With Graal, we can actually avoid the allocation of the array in some cases and just access the members directly.
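Here's that general shape — a hypothetical miniature, not Optcarrot's actual code:

    # Hypothetical miniature of the dispatch pattern: look up an opcode,
    # splat the entry (allocating a small array), and dispatch with send.
    DISPATCH = {
      0xA9 => [:lda, :immediate],
      0x8D => [:sta, :absolute],
    }

    def lda(mode)
      puts "lda #{mode}"
    end

    def sta(mode)
      puts "sta #{mode}"
    end

    def step(opcode)
      name, *args = DISPATCH[opcode]  # the splat builds a small array
      send(name, *args)  # with many opcodes, this send site sees many targets
    end

    step(0xA9)
    step(0x8D)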
The second thing is that the send call becomes megamorphic very quickly, which means we can't use our inline caching, which is a technique basically all the Ruby implementations use to optimize method calls. TruffleRuby in particular is able to take it a step further and optimize metaprogramming with inline caching, which none of the other implementations are able to do; I gave a talk on this a couple of years ago at RubyKaigi, if you're interested in how that works. But the point is, because this call site goes megamorphic, we're not able to use the inline cache even at that level, so we have to do a full method invocation. And as it turns out, calling functions is a tad slower on the Substrate VM than it is on the Graal VM. These are things the Substrate VM team is aware of and intends to fix.

So, in summary, I think TruffleRuby's startup time is fixed. We're not yet as fast as MRI — that is our goal — but I think the path we're on is a viable one. And I'm personally really excited to say that the Substrate VM is now publicly available. If you've been following TruffleRuby at all, one of the things we're often asked or criticized about is our startup time, and it's a fair criticism; I don't begrudge anyone for it. But when we addressed it, we would often say, hey, we've got this thing on the side called the Substrate VM that will just, you know, make startup time faster, don't worry about it — and that was beginning to sound a little hand-wavy. But it's now publicly available, you can use it, and we're relying on it. Personally, I think what's also nifty about the Substrate VM is that it helps validate the approach we're taking with TruffleRuby, which is building on top of this Truffle AST interpreter framework. In addition to getting things like a debugger and profiler for free, now we get this awesome virtual machine that solves our startup time problem. And from the perspective of the TruffleRuby code base, we really don't have to do anything special to take advantage of it. There are maybe a half dozen code paths where we need to disable some things because they rely on reflection and the like, which isn't available in the Substrate VM, but for the most part, the exact same code base targets both the Graal VM and the Substrate VM with very few modifications.

So there is some future work here. The Substrate VM currently doesn't support compressed oops. This is something they're aware of and are going to fix. If you're not familiar with it, it's an optimization the JVM already has: on 64-bit JVMs, pointers are 64 bits wide, but if you have a heap smaller than 32 gigabytes, you can actually represent them in 32 bits. The Substrate VM hasn't copied that optimization over yet, but when it does, pointers will consume half the memory, cache line utilization will improve, and so on. They're also looking to improve the array handling I already mentioned. There are things we can do on the TruffleRuby side, too, to take better advantage of Substrate. Currently, we're only building the core library into the image; we could do the standard library as well. There are a few hurdles there, but they're ones we can clear. There's more stuff we could pre-compute and push into the native image generation process that we're not doing yet. Both of those would help improve our startup time a bit further. We ought to be able to reduce our overhead for calls to native functions as well.
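For a sense of what those native calls look like at the Ruby level, here's a minimal sketch using the ffi gem, which offers the kind of functionality the Truffle NFI provides to Truffle languages — getpid is just a stand-in example:

    # Minimal sketch with the ffi gem: bind a libc function and call it.
    require "ffi"

    module LibC
      extend FFI::Library
      ffi_lib FFI::Library::LIBC
      attach_function :getpid, [], :int
    end

    puts LibC.getpid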
So, I mentioned we have this Truffle NFI for making native calls, and that's really cool, but since we're building a static image, we ought to be able to just statically bind native functions and call them directly, avoiding some of the overhead of dispatching those calls. And to preempt it a bit: we're often criticized for two things. One is our startup time, and the second is our memory consumption. We actually haven't spent any real time looking into memory consumption yet, but we believe the Substrate VM can also solve that problem for us; it's something we need to look into a bit more.

I have some slides here for when I make them available, so you can look at them later: here it tells you how to run the TruffleRuby SVM binary, and here's some information on our benchmark environment. I've provided some links to related talks if you're interested in learning a bit more. This here is a picture of some of the Graal team. The Graal VM team at large is actually a little over 50 people now; Oracle's invested significant resources into the various projects here. These are a lot of the people that were involved, but not all of them: we've had alumni, we've had interns, and we have university collaborations. The basic point is that this is a lot of work — way more than I'm able to do on my own — so I'd like to acknowledge the efforts of the other contributors. In particular, on the TruffleRuby team, there's Chris Seaton, Petr Chalupa, Duncan MacGregor, Benoit Daloze, and Brandon Fish. And from the Substrate VM team, Vojin Jovanovic was really helpful in pulling this talk together. And here's my contact info. I love talking about this stuff, so if you're interested in learning more about TruffleRuby, please reach out. If you're interested in trying to run your application or your library with TruffleRuby, we're always looking for new use cases; I'm happy to work with you and see if your dependencies work with us, and if not, we can try to get that resolved. TruffleRuby is completely open source, so you can also check out the project. And yeah, that's it. Does anyone have any questions? I think we have a few minutes left.

The question was: how is Rails support? That's coming along. The problem for us is the database adapters. TruffleRuby is a new implementation, and the database drivers are basically shipped as extensions. There's a pure-Ruby version of the Postgres driver, but for the most part, the database drivers have a native component. MRI has native extensions for all the drivers, and JRuby has Java extensions for all the drivers. The problem is that the extension APIs aren't really APIs: an MRI extension is literally taking MRI's internal functions and allowing you to call into the runtime, and the same is true of JRuby with Java extensions. TruffleRuby has its own runtime; we're not implemented like MRI, and we're not implemented like JRuby. So our options were: pick one of those and become compatible with it, or try to convince the ecosystem to adopt yet a third extension API. We've decided to go down the path of compatibility with MRI, which means we're now taking the functions that implement MRI, pretending they're an API, and stubbing in our own implementations of them. That work is progressing nicely, but it's a really large surface area, because there's no defined public API; we need to figure out what people are using and then support that.
And that's how we've gotten OpenSSL, the JSON extension, and Nokogiri running. So it's really just a matter of time at this point. The way we're doing it is with yet another Truffle language called Sulong, which is an LLVM bitcode interpreter. We use Clang to compile the extensions down to LLVM bitcode, and then, rather than generating machine code from that, Sulong interprets the bitcode, generates AST nodes, and we use Truffle's interop functionality to combine those with the TruffleRuby AST nodes, and it all works together really nicely. We have issued our first SQL call with the MySQL adapter, we're making progress with the Postgres one right now, and I believe someone's been chipping away at the sqlite3 one. Once that's done, we really should be close to running all of Rails. The rest of it — Active Model, Active Support, Action Mailer — we handle well. Spring is going to be problematic; I'm not sure if we'll ever really run that, but we might be able to do it with a Substrate VM image.

Yeah, I'm not sure if that got picked up, so basically the question was: if startup time dominates the application profile, why didn't the Hello World application show a larger effect than running the test suite, which you would think pays the startup time once and then continues to run tests? As it turns out, that test suite does forking and exec'ing, so you actually pay the startup time multiple times throughout the course of the run. And then a secondary effect is garbage collection: when you're running Hello World, you exit very quickly, so there's not an opportunity to generate a whole lot of garbage. The Substrate VM garbage collector is different from the JVM one, and we haven't tuned it quite as well yet.

The question was: if you're statically linking the JDK into the resulting binary, does that limit the Java calls you can make from Ruby? So, we do have to forgo the Java interop feature I mentioned, because that relies on runtime reflection. The Substrate VM actually recently gained limited reflection capability, so we might be able to start providing interop for classes that are already linked in, but we can't dynamically load classes. If you're a Java developer, you might be accustomed to dropping in a jar and implementing, say, a new logger interface or something like that; you wouldn't be able to dynamically load that code. But from the perspective of TruffleRuby, what we need from the JDK is known in advance, so we don't have any problems that way.

The question is: what does Oracle hope to get out of this? I work for Oracle Labs, which is a research group within Oracle, and our mandate is a bit different from the product groups': we're supposed to identify and investigate new technologies that could be useful to other product groups. I think it's best to look at this as the Graal team as a whole. We have implementations of JavaScript and Ruby, we have this Sulong project for LLVM bitcode, we have the native function interface, we have Truffle, we have Graal, which is our JIT compiler, and we have the Substrate VM, and all these things work together with one another. The various languages help improve the development of both Truffle and Graal; if we only had one language, you'd risk overfitting and things like that. And Graal is actually now shipping as part of Java 9, so some of this is already starting to trickle its way back up into other parts of the organization.
But yeah, I think that's about it, if that answers what you're looking for. Okay, so I think the question is: you're going to spend a fair bit of time doing the native image generation, so how does that compare to the time you save at startup? I think it's best to think of this like what you would do with MRI: you compile MRI, and then you don't compile it again unless a new version is released. You're not going to compile TruffleRuby with the Substrate VM every time you run your application. What we're compiling is the interpreter, not the application being run in the interpreter. We actually ship pre-compiled versions with the Graal VM distribution I mentioned previously, but if you wanted to build it yourself, you could; you would only rebuild it if you actually modified any of the core implementation files.

The question is: because we implement Ruby in Ruby, does that limit the ability for application code to monkey patch core classes? Right — so there's no difference by the time we start running end-user code, because all we would have done otherwise was parse the code and generate the AST anyway. We're just cutting out that parsing and AST generation phase. When we start running end-user code, our core libraries are already initialized, and if you want to monkey patch them, that works just fine.

The question is: have we looked at using kind of an application cache or launcher — yeah, a background daemon — for running things faster? I haven't looked at it too much. I know there's Drip for Java applications in general, and you could probably do that for TruffleRuby, but our approach was to just try to deal with startup time directly and, you know, do what MRI is able to do. Even with MRI you kind of do this with Spring and Rails, but we're going the Substrate route. Maybe Drip would work, but I'm not entirely sure. All right, well, thank you everybody.