Hello. So, now we will have another talk about JRuby, which will be presented by Charles and Tom. They are both contributors and mentors of the JRuby project. They work at Red Hat. I think they are kind of veterans of FOSDEM and lovers of beers — I'm sure that's why they come to FOSDEM every year. So, please welcome them with me.

Alright, hopefully this microphone is picking up okay. Let's get right into this. We've already pretty much been introduced. We've been working on JRuby full-time for like 11 years, since 2006. Yeah, since 2006. So we've been doing this for a while. We've both been doing Java since 1.0, so like 20 years of Java development. And we still love it. Yes, and we love coming to conferences to meet folks and to find new beers to try.

So, first off, we're going to do a quick update on where we stand with JRuby, current versions and whatnot. The current version of JRuby is 9.1.7.0, part of our JRuby 9000 series. We just released that a few weeks ago. Lots of issues fixed — about the same number of issues as the previous release; each time it's in the 70-to-80 range of issues fixed. And this one appears to have solved a lot of problems that people were having with the previous several releases. Not as many problems, not a lot of reports coming in from folks. It seems like it's going fairly well.

Here's our roadmap going forward. With JRuby 9.1, we made the move to Ruby 2.3 compatibility. So, for the first time ever, we were only a couple of months behind CRuby releasing their new version before we had a JRuby that supported it. We are working on 2.4 support, which will be part of the 9.2 series. And we're only going to maintain one of those minor versions, so once we have 9.2 out, the 9.1 series won't be maintained anymore. We'll just keep people on the latest version of Ruby. I think we might still put 9.1.8 out. Yeah, it's a possibility we'll do one more maintenance release, because 9.2 is still on its way — and I'll talk about that next. Once we do get 9.2 out, we're hoping to also resolve anybody's remaining migration issues from JRuby 1.7, because we want to kind of kill that version and get people all onto 9000. Is there anyone using JRuby 1.7 here? There's a couple. Okay. Yeah, if you are on 1.7 and haven't tried 9000, give it a shot. Let us know if there are any problems, things that seem like they need to be fixed, because that's a big goal for us.

Okay, so JRuby 9.2, like I was just saying, will support Ruby 2.4 features, and there's a GitHub issue if you want to track progress. I was estimating it's probably about 75% complete. Of the bullets listed on there — which are basically out of Ruby's NEWS file, listing all the updated features — about half are done. But a lot of them are really small bullets, and some of them are really big things, like Bignum and Fixnum now being the same class: they're both just Integer at this point. And that's done. So it's more than 50%, less than 100%, and roughly 75%, I'd say. And like I said, we want to try to resolve any remaining 1.7 migration issues. I'd like to get this mostly finished by the end of the month. We'll evaluate how many features we've got finished and how close we are. We may not have everything done for the initial 9.2 release, but we'll get the big bullets taken care of.

This is one of our favorite slides. A couple of years back, we asked folks to tweet us their company logos if they're using JRuby in production.
And this is one day's worth of responses to that tweet. It really was unbelievable to see how many people and how many different companies are out there. Including big ones like BBC News — all the election results that get reported on BBC News, that's a JRuby application. At Square, they were using JRuby for all of the transactions going through the system, so pretty much any time you paid with a Square device, it was going through JRuby. And then a couple of more recent ones that are still big JRuby fans: Brakeman Pro, which is the commercial version of Brakeman for doing code analysis, and Puppet Server, which actually uses JRuby on the server side. So there are lots of folks out there. And I kind of like this tweet from Zach. The key point here: if you want isomorphism — a Ruby implementation today that really can replace MRI for your applications — JRuby is the only option at this point. And it can make your application a lot better, as we'll show as we go on here.

So our number one priority has always been our users. This means that we don't necessarily have as much time as we'd like to work on performance and other things, but it means we're focusing on usability of JRuby first; edge-case features and performance come after that. We're always trying to make sure that users have a good experience with JRuby.

Along with making sure that it works for users is keeping up with compatibility. As I mentioned, we're very excited that this past year we finally reached the current Ruby version. We always lagged behind a couple of versions because there was a lot of catch-up work to do; we've finally caught up. We also run more tests than any other Ruby implementation. We run ruby/spec — the specs that were RubySpec and now live in the ruby/spec repository. We run CRuby's (MRI's) test suite, which is some 100,000 assertions. We run our own suite for things that are specific to JRuby or related to integrating with the JVM — JRuby-specific features. And then we permute a lot of these over our different runtime modes. We have an interpreter, we have a JIT, and so some of these suites run three or four times in CI (there's a sketch of what those mode flags look like after this section). I think our entire Travis job, according to Travis, takes something in the neighborhood of four to five hours of CPU time to run the whole suite. And we run Java 7 and 8 too, to make sure that we're still compatible with older Java versions. So we work very hard on this.

There are also a lot of challenges to this. Probably the biggest one is C extensions, which continue to plague the Ruby world. If you saw the previous talk, C extensions are a pain across platforms. Sometimes they just don't build. Sometimes on Windows you don't have the tools to even build them with. And then, of course, on JRuby we don't have support for CRuby's extension API. It's very invasive, very low level — you can go and poke direct pointers to objects in memory. We had experimental support for it for a while, but it just never really panned out. Sometimes the library authors are actually a challenge. There are a lot of libraries whose authors are just uninterested in supporting JRuby, so they don't test on it; there are some minor quirks or differences in behavior, and then we get the bug reports on the JRuby project. We try to work with authors to make sure that JRuby runs their stuff well, and try to get more libraries running JRuby in Travis or in CI. But there are some challenges there.
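To make that runtime-mode permutation concrete, here's roughly what those runs look like. The flags are current JRuby options; the rake task name is just illustrative:

```
jruby -X-C -S rake spec                # interpreter only: JRuby's JIT disabled
jruby -Xjit.threshold=0 -S rake spec   # force methods into the JIT immediately
jruby -Xjit.threshold=0 -Xcompile.invokedynamic=true -S rake spec
                                       # JIT plus invokedynamic call sites
```

Anyway — back to the libraries.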
There are a few library authors that are simply hostile to anything that's JVM related. They were beaten as a child by the JVM, and so now they're never going to support it. But most folks, if we reach out and talk to them — or if you, as Ruby users, reach out and say you'd really like JRuby support — are usually pretty open to that idea. And then, of course, all of this takes a lot of time, as I mentioned, which takes time away from other performance work. But we're happy to finally be caught up, because it means we can start focusing on those things more.

So, a few numbers on where we stand as far as running tests. On the base language ruby/spec suite, we pass 98%, approaching 99%, of this stuff. The things that aren't in there are usually minor items — something we don't run properly in ruby/spec but that has never been reported. We don't spin our wheels wasting time on edge features that no one's using. There are also some rare features, or features that are difficult to do on the JVM, like continuations, and we just don't run those tests, so that lowers our percentages a little bit. We could be another percent higher if we just supported the flip-flop operator. Yeah, like flip-flop. How many Rubyists here have ever used the flip-flop operator? That's exactly why we don't bother implementing that particular feature. But go look it up and try to wrap your head around what it's actually doing (there's a small example at the end of this section). I broke it, and then I didn't fix it because I wanted to see when someone would actually report it as an issue — and it's been like two years.

Yeah, so the core numbers are a little bit lower. This includes all of the core classes: String, Array, and so on. The base ones you would expect, like String and Array, are pretty darn close to 100% passing on these specs. But then you get into weird edge cases — again, like continuations, some oddities of fibers, things like fork that are not supported on the JVM. That's where we end up losing a few points. But we're really happy with these numbers, and we continue to improve them.

MRI's test suite is extremely large. Most new features and changes that go into CRuby usually go into this test suite. We'd like for more of them to go into ruby/spec, because those specs are nice and readable, and we'd like to have just one suite. But the truth of the matter is, today, if you're not running both ruby/spec and MRI's tests, you're not getting the coverage that you could — you're really missing out on a lot of stuff. So we keep our tests updated and pull in CRuby's tests. We have a system for excluding anything we fail, and we pass and run about 83% of the tests in MRI's suite, which is a pretty good number. Again, there's lots of weird stuff in there, and some of those libraries we'll never support, like the GDBM stuff and other C bindings. OpenSSL we test separately, so that's not part of this metric either.

And then, of course, one of the biggest things for compatibility is making sure that Rails works. Up through Rails 4, we've done a pretty good job of keeping up, and Rails 4 applications should work great on JRuby. Rails 5 made enough changes in how Active Record is done that we're still kind of catching up on that. But we have done some of the initial work. The SQLite support is very good — as you can see here, only 13 failures and three errors out of all of those tests for SQLite. And that sets the baseline for the other database adapters.
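About that flip-flop operator — here's the promised example. The condition "flips on" the first time the left side matches and "flops off" after the right side matches, selecting everything in between (inclusive):

```ruby
# Prints the BEGIN line, the lines between, and the END line.
DATA.each_line do |line|
  puts line if (line =~ /^BEGIN/)..(line =~ /^END/)
end

__END__
skipped
BEGIN
printed
END
skipped too
```

Now, back to the database adapters.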
It shouldn't be hard for us to also get MySQL, Postgres, and the others supported just as well. And so, for the first time, we should be able to catch up and get to 100% on Active Record for all these databases. And, of course, everything except Active Record is nearly 100% here already.

All right. So, I went and updated a document on our wiki about how we do deoptimization, and Chris Seaton from the TruffleRuby team noticed that, and I thought it'd be fun to talk about deoptimization. So why are we even talking about deoptimization? Shouldn't we be optimizing code? Let's figure this out. If you think about the universe of all the optimizations you can do with software, and then you consider what you can do up front with Ruby, it's quite small. That's because Ruby's not amenable to static analysis — it's not like C. The real problem is that we lack the additional information we need to actually do the bigger optimizations. So, like most dynamic languages, we make up for that by profiling. We study the program's execution and try to figure out if one call site is always calling the same method, or if a variable is always a Fixnum; then maybe we can do more optimizations.

However, we only know what we've seen — things can change. Say we're running some code 10 million times and a variable is a Fixnum, and then suddenly it becomes a Bignum. Well, we have to cope with that, because you want your program to actually run correctly. So in that case we have to insert a guard — check a precondition — and if that precondition ever fails, we fall back to a correct version of the program. So deoptimization is very important for maintaining correctness, but it also has a nice secondary benefit: you can be very aggressive with the optimizations you try initially, to get great performance. If it works out, you get ten times the performance; if it doesn't, you fall back to the safe version, and maybe you try something else. This is one place where you can see warm-up occurring in VMs. Another nice thing about being able to deoptimize: let's go back to that example of 10 million Fixnums. Say it becomes a Bignum in some numeric algorithm where the value just keeps increasing, so now, all of a sudden, it's always a Bignum. Well, maybe we can do an optimization for that case instead. And as you'll see later, code that has the ability to deoptimize can sometimes be simpler than code that can't.

The most important VM for us is the JVM. So Charlie replied to Chris saying we've been doing deoptimization forever on the JVM. I mean, the JVM has this as part of its way of optimizing Java: it can do aggressive optimizations and back off. And since we've been emitting JVM bytecode for Ruby for years, since 2007 or so, we've actually been doing deoptimization since then. We were the first JIT for Ruby anywhere, just because we could run on top of the JVM.

So now we'll take a step back and talk through some of the strategies we've been using to improve performance in JRuby. The first thing we did when we released 9000 last year — I think in the first release of it — was introduce a new IR, a new intermediate representation for Ruby code. Previously, we just parsed to an AST and then we walked the AST, and eventually the JIT would walk the AST and turn it into bytecode, and that's about as far as it would go. The new IR, however, is more like a standard CS-101 compiler. It has optimization passes.
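Here's a minimal Ruby sketch of that guard-and-fall-back idea — not JRuby's actual mechanism, just the shape of it: a fast path guarded by the profiled precondition, with an exception kicking us back to the safe, generic path:

```ruby
class DeoptimizeError < StandardError; end

def generic_add(a, b)
  a + b                      # stands in for the fully dynamic call path
end

def optimized_add(a, b)
  # guard: the precondition our profile recorded (both args were always integers)
  raise DeoptimizeError unless a.is_a?(Integer) && b.is_a?(Integer)
  a + b                      # fast path a JIT could turn into primitive math
rescue DeoptimizeError
  generic_add(a, b)          # deoptimize: fall back to the safe version
end

optimized_add(1, 2)      # => 3, via the fast path
optimized_add(1.5, 2)    # => 3.5; the guard fails and we fall back
```

Back to the IR itself.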
We have a data flow graph, a control flow graph — we can see how the code actually works and get a much better picture of it from a static view. And, like Tom mentioned, we can now do more profiling, all because we've got this IR with a much more traditional compiler architecture. So we've got our IR, and we can do our own passes, our own textbook compiler optimizations. We still also JIT down to JVM bytecode — we've been doing that for a long time — but the code we're producing now is much better because of the IR, because we can improve the code before we feed it to the JVM. So we emit JVM bytecode, the JVM takes over, and then, depending on which JVM you're running, eventually it will also JIT our bytecode down into native code. JRuby's was the first native JIT for Ruby as a result of that. We try to make the bytecode as simple as possible and make it fit what the JVM likes, and sometimes we yell at the JVM with flags and other tweaks and suggestions to try to get it to cooperate. But it's kind of a wrestling match: bytecode is a very blunt tool for expressing Ruby across that boundary to the JVM, and sometimes we don't get the performance we want. Still, it's a lot faster than the interpreter — anywhere from 2 to 10 times faster, depending on what kind of load you're running.

More recently, in Java 7, a feature was introduced called invokedynamic. invokedynamic is basically a feature for doing fast dynamic calls at the JVM level and having them optimize and inline the way Java calls do. We've been using this heavily over the years. Much of JRuby right now uses it — for constant lookup, for method calls, for instance variables — and this gives us another three-to-five-times improvement over the JIT alone.

So the JVM gives us a lot. Why are we working on our own deoptimization? Basically, to make up for a few things the JVM just can't provide for us today. In particular, if we call a method and supply a block, that's something the JVM can't inline for us, and that's a pretty big penalty. This affects Java lambdas as well — same sort of problem: you've got one method that receives lots of different lambdas, and the JVM doesn't know how to optimize it as well. Another one is numeric specialization: realizing that a Fixnum is really just a primitive Java long. The JVM can actually do this; we just have a little too much code for it to be able to see it. The third one is slightly less useful, but: if we go to call the square-bracket method, we don't know which version of [] it is, and there's one version that actually sets a global variable as a side effect. So we always have to account for the fact that we might have to set that global. With deoptimization, we can make the call, and if we realize it's that version, we deoptimize and lazily add that state. We don't need to set it up every time just in case we call that one weird method — we can do it after the fact.

Before we go into this more, we need to talk a little about our intermediate representation itself. As Charlie pointed out, we now have a traditional compiler architecture. Back in the 1.7 days, we did everything with the abstract syntax tree. If you open up any compilers book, these five boxes on the left would be what you'd read about in the first chapter. But for today, we'll just look at our IR instructions themselves.
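About that global-setting square bracket: one version with that side effect is String#[] with a Regexp argument, which updates the $~ last-match variable — so a call site can't treat [] as side-effect free until it knows which [] it's calling:

```ruby
"hello world"[/o (w\w+)/]   # => "o world"   (String#[] with a Regexp)
$~[1]                       # => "world"     -- set as a side effect of []

[10, 20, 30][1]             # => 20          (Array#[] has no such side effect)
```

So — those IR instructions.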
They're just some virtual-machine instructions we made up that represent Ruby semantics. So, for the method on the left, the instructions on the right represent it. I'll just go over a few. We check arity: if you pass more than one argument, this instruction raises an exception. Likewise, receive-arg-zero: we take the zeroth argument passed in and bind it to the local variable b. We have a call instruction: b gets the double-equals method called on it, passing one — b == 1. We have temporary variables, which we use for various things. And we have a bunch of branch instructions; here we're just checking whether the result is false, and if it is, we go down the else branch. So I think the assembly is pretty easy to grok.

In truth, we have a lot more information on every instruction. I don't expect you to be able to read this slide, but each one of those yellow boxes is an instruction. And this is the Ideal Graph Visualizer tool. We just recently started using this, and it's great. You can pop over, select which method you want to look at, and see how it compiled. You get a nice visualization of the flow. Here are all the compiler passes we run for this compilation unit, and that pink bar across the top — you can drag it to span whatever number of passes you want, and it'll show you which code has been added and which code has been removed or changed. So this is a super fun new toy. We can actually use it to see whether our optimizations are doing what they're supposed to, step by step, all along the way.

So, we'll talk about profiling a little. I find that a lot of people don't know what the term "call site" means. It's just the location of a call — the site of the call. So on line one of this example, the call is our call site. We're only interested in call sites right now for our profiler, so we just record which method is being called at each call site and how many times it's been called. That count is really just a mechanism for determining hotness, and it's absolutely the simplest one we could have picked. We might change that in the future, but it's working okay for now. We're really interested in monomorphic call sites. Other VMs can deal with call sites that see more than one target; for us, again, this is the simplest thing we can do, and it gets the highest payback — the vast majority of call sites in Ruby always call the same method.

Once we have monomorphic call sites, we can look to see if we can do numeric specialization and convert those calls to primitive Java math instead of actually calling through the more expensive Ruby versions of those methods. And then there's the 500-pound gorilla of optimizations: we can start inlining methods. Basically, grab the body out of the method you're calling and paste it in place of the call. Then you don't have to set up a stack frame or do any other data-structure setup, because you're still executing in the original method. Now, just to be clear, the JVM has been doing inlining for us for years, because we emit bytecode. But sometimes the method's too big. Sometimes it's Thursday. It could be anything that makes the JVM decide not to do it. So this is our way of getting back at the JVM: if you're not going to inline this, well, we're going to inline it, and you're just going to take it. Yeah, and we really want to focus on the calls where we pass a block, because that's something the JVM never inlines. Yeah, a particularly bad case.
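In Ruby terms, inlining a block call looks something like this — a hand-waved sketch, assuming the call site is monomorphic:

```ruby
sum = 0
1_000_000.times { |i| sum += i }   # a call that passes a block

# ...conceptually becomes, once Integer#times and the block are inlined:
sum = 0
i = 0
while i < 1_000_000
  sum += i    # block body pasted in place: no frame setup, no block dispatch
  i += 1
end
```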
So, this was our original inliner. We have some Ruby code on the left, just counting down from a million, calling this decrement-by-one method. What's on the right is pseudocode, because I don't want to show you IR — it wouldn't fit on this slide anyway. Let's walk through how this worked. This was a non-deoptimizing pass, so we still have that guard to make sure that something didn't change. If nothing changed, we just execute the inlined body of the method. (Wow, the transitions are weird here on the screen.) Otherwise, we fall back and do a method call again. This was good because we didn't have to deoptimize, and it ended up being surprisingly simple to write. But there are a couple of problems. One, it increases the code size a bit. And two: if you look at the top three uses of i, you can figure out that it's going to be numeric. But on the failure path, we don't know what it's going to be, and that defeats any other optimizations after that.

So here's our current inliner. Instead of having that if-else, we throw an exception, and that exception means we have to go back to a safe version. But now, if you look at the three uses of i, we have a chance to do some numeric specialization. I call this a virtuous cycle, because this optimization allowed us to see something we couldn't see before. And that's pretty awesome.

Here's the basic strategy. When we generate the optimized version and create that guard that decides whether we need to fall back, the guard records where we need to re-enter the safe method — an instruction pointer, basically. Yeah. We throw an exception, and we dump all the temporary variables we have. The safe version can consume those same temporaries, because it's a property of our IR that each temp variable is only used for a single purpose.

If we actually look at the implementation — I did this in Ruby-ish pseudocode again, because the real thing would be a mixture of Java and IR. The inlined method is in the gray box, but when the guard fails, we raise an exception carrying where it's supposed to resume. We set some state to make sure the method cleanup doesn't execute and tear things down — you only want to unlock a lock once, so if we're backing off to deoptimize, we have to note that the ensure block will run later on. Yeah, the safe version is going to handle it for us. And that's the next thing: we call into the safe version, saying we want to start at that particular location, and we dump the temp vars. This ended up being a lot simpler than I had originally anticipated. At this point, we created a branch and got this working — we think. Tracing through things, it seems to work. The real goal here was to figure out whether we had any big problems with this strategy, and so far it seems to be working out really well. So we'll make it robust and clean it up next.

But we always have to give numbers, because people... That's the exciting part, right? Time to wake up. For this microbenchmark, you can pretend that the small loop is Array#each — something that's just iterating over elements and calling a block constantly — and the big loop just calls that monomorphically. In CRuby, this took about 34 seconds to run. For us, it took 27.2. But once we inlined the method and inlined the block back in, it went down to 6.2 seconds.
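The shape of that microbenchmark, roughly — these names are invented, but this is the pattern being measured: a block-taking method called monomorphically from a hot loop:

```ruby
def small_loop(arr)        # stands in for Array#each-style iteration
  i = 0
  while i < arr.length
    yield arr[i]           # the block call we want inlined away
    i += 1
  end
end

def big_loop(arr, iterations)
  count = 0
  iterations.times { small_loop(arr) { |x| count += x } }  # monomorphic site
  count
end

big_loop([1, 2, 3, 4], 1_000_000)
```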
Everyone knows about the Ruby 3x3 thing? Yeah, they're trying to make Ruby 3 three times faster than Ruby 2. I feel like I'm such a dick. But of course, we can add our invokedynamic flags. And you can see that the "before" is just a little bit faster than it was. And now, after... it just keeps getting better and better, doesn't it? So this is pretty awesome.

There are some limitations. We only inline a single call site per method; we'll get rid of that limitation once things are solid. And as we've mentioned a couple of times already, the JVM typically does such a good job inlining plain method-to-method calls that it only makes sense for us to do this when we're passing a block. However, if we can do numeric specialization and some other optimizations, it might make sense to start attempting it for regular method calls too. It's early days, but hopefully soon we'll be able to get this out as an option for users. And I repeat: this is on a branch, so do not expect all your code to suddenly get six times faster. We found one sticky area and improved it quite a bit.

So, I mentioned the various strategies we've followed over the years. One that we're starting to explore now, in addition to the JIT, the better compiler, and invokedynamic, is actually tweaking the JIT that the JVM uses — giving it better information about how Ruby code works. The experiment I did right before FOSDEM, literally last week, was to try using the Graal JIT, which is a pure-Java JIT that's now getting a lot of attention. TruffleRuby depends on it, and a lot of other projects in the JVM world are depending on this JIT framework. The nice thing about it is that you can swap it into a JVM through some new APIs coming in Java 9, and we can directly influence how the JIT works. We can give it better information — that this is a number, that this is a particular kind of loop, that we want to specialize some code. And this has started to pay off pretty well. We've managed to get Graal working with a bit more knowledge of JRuby and run some benchmarks through it. If we look at Tom's example, just throwing a few minor tweaks at Graal — even for this simple example, which we didn't think would benefit much — we got 6.8 times faster. Let's just say seven times; it's within the margin of error.

Let's look at this a little more graphically, to wrap up and show what sort of progress we've made. With Tom's blocky-loop bench, JRuby's JIT does a certain amount for us. This chart shows how many times faster we are than CRuby 2.4, the current version. The JIT alone gets us roughly 1.25 times faster. If we add invokedynamic, it bumps up a little — like 1.35 or 1.4 times faster. Then we start doing the inlining, and that makes a huge difference: the cost of the block overhead and that intermediate call hurts us a lot more than it hurts CRuby, so being able to inline takes a lot of that away. And again, indy here — the virtuous cycle Tom was talking about: now that we're able to inline, invokedynamic does a significantly better job of optimizing this code, so we get another boost, over seven times faster than CRuby. And then the Graal tweak I mentioned — it's a smaller bump for this particular benchmark, but it brings things up even more. So we're getting close to 8.5, 9 times faster on what is a fairly standard piece of Ruby code.
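For reference, the invokedynamic runs above are just a flag away. These are current JRuby options (the benchmark filename is invented):

```
jruby bench_blocky_loop.rb                                # plain JIT
jruby -Xcompile.invokedynamic=true bench_blocky_loop.rb   # JIT + indy
# or via the environment:
JRUBY_OPTS="-Xcompile.invokedynamic=true" jruby bench_blocky_loop.rb
```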
Now, we really wanted to see how far we could push this Graal tweak and prove that it's worth exploring in the future. So we've got a numeric benchmark that just generates a Mandelbrot set; we're running it at size 750 here. Yeah, you should mention the problem with Ruby Fixnums versus... Yes — so we've mentioned numeric specialization a couple of times. The reason we want to do our own specialization and turn Ruby math into Java math is that we have to pass objects everywhere. It's a dynamic language; everything's treated like an object, and on the JVM, that means everything has to be an object. So every time you do one plus one, we construct a new object for the Fixnum two, basically. That becomes a huge overhead for numeric algorithms, as you'd imagine. We're heating up the heap too much, making the GC do a lot of work, and just allocating all those objects takes a lot of time. So this is a good example of what we can do when we've got a JVM JIT that can actually see that we don't need those objects and optimize around them — get rid of the allocation.

So let's take a look. With JRuby, just with the JIT, we get about 1.5 times faster than CRuby here. They don't have the object overhead we do, but the fact that the JVM's JIT does such a good job makes up for a lot of the overhead of all these Fixnum objects floating around the system. If we throw invokedynamic at it — much better again. It doubles the non-indy version, the plain JIT. That's fairly typical for numeric-heavy algorithms; we think the JVM gets a little better visibility into the numbers. Now, the real payoff is when we use Graal. This is JRuby plus the JIT plus invokedynamic plus the Graal tweaks I've been playing with — which is literally a two-line patch to Graal to make it a little more aware of our numeric types. And this is where it gets much more interesting: now we're talking about JRuby being 23 times faster than CRuby. That's competitive with projects like TruffleRuby and some of the other optimizing JITs out there. We're finally able to get some time to work on this, and it's looking very promising, so you'll see more of this over the next year. And if you want to help experiment and play with this stuff — it wasn't hard to do — we welcome contributions.

All right, so, wrapping things up. Compatibility we're really happy with: we've managed to catch up with CRuby, and we hope to have Ruby 2.4 support out only a month or two or three after the official release. Performance is solid right now — you can take your Ruby application, throw it onto JRuby, and get two, three, five, ten times the performance, depending on what you're doing. invokedynamic has given us a lot more runway for plain JVM optimizations. But there's a lot more cool stuff we can do: doing our own inlining, doing our own profiling, and advising the JIT directly all show huge promise for making JRuby that much faster than it was even a year ago. So try it out. Let us know if your applications don't work and if there are problems we can look into. And let us know if it goes well.
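To make the boxing point from the Mandelbrot discussion concrete, here's a simplified inner loop in that style — not the actual benchmark. On the JVM, every intermediate Float below becomes a heap object unless the JIT can prove it doesn't escape, which is exactly what the Graal tweaks help with:

```ruby
# Count iterations until the Mandelbrot escape condition for point (cr, ci).
def iterations(cr, ci, max = 100)
  zr = zi = 0.0
  max.times do |n|
    zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
    return n if zr * zr + zi * zi > 4.0   # every product is a fresh object
  end
  max
end

iterations(-0.5, 0.5)
```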
We usually only hear from people when things are broken, so it'd be really nice if people would tell us when it works the first time. All you have to do is tweet at us or something, just so we know you're out there and it's not all problems. Let us know how things are going and what you find you're able to do with JRuby. That's all we've got — there's our contact information. Thanks. We have plenty of time for questions. Plenty of time for questions. Alright, who's first? Right there in the middle.

I'd like to ask about Java 9. I know that some of the Graal stuff requires Java 9, but is Java 9 bringing any other things that are exciting for JRuby, or not really?

So: are there other things coming in Java 9, along with the Graal and JVMCI stuff? First, to clarify: Graal itself, I'm pretty sure, is not going to be in Java 9, but the API that allows you to swap it in and replace the existing JIT — that will be there. So in theory, once Java 9 is out, in six months or a year — I'm not sure what they've pushed it back to now — you could be running JRuby on Java 9, maybe gem install some JRuby-Graal package, and we'll just install that as the JIT, and you get that 23x performance. Ideally, that's what we'd like to be able to do. Other than the ability to swap the JIT in, we're interested in whether Jigsaw might make things work better for us, if there are ways we can reduce some of our dependency-loading issues. Java 9 is also supposed to come with an ahead-of-time compiler that we're interested in playing with, to see if we can get our start-up time a little closer to CRuby's. And then there are various language features that come along — we don't get to use a lot of that stuff in JRuby proper, because we're always supporting at least one Java version back. Right now we support Java 7, which means we can't use Java 8 lambdas. When Java 9 comes out, we'll reevaluate; maybe JRuby 9.3 will be Java 8 and up only. Then we can start using more of those cool features. But the JVMCI thing is really the most promising item for us.

Do you recall if there are any file-system-related improvements?

Not that I know of. It seems like there's always one or two things. Yeah, there are a lot of those little system-level things — I think they've done some improvements to the process APIs, maybe. Java 9 is also supposed to have some early versions of a Java built-in for calling C code. It's not exposed as a public API yet, but we're interested in playing with that and trying to hook into it. Other questions? Yes.

Is there some good content, like talks or blog posts, comparing CRuby and JRuby memory-wise?

Memory-wise. So, comparisons of CRuby and JRuby memory-wise. In general, if you're running just a single-process application, we use quite a bit more memory — at least twice as much as CRuby for a typical Rails application. Most of that is because the JVM's GC is very advanced and needs lots of room to breathe, the JIT takes up a lot of space, and the inner workings of the JVM itself take up more space than MRI does. But what we've found is that people don't typically run just one CRuby Rails instance. They usually run five, or ten, or five hundred, depending on how big the application is. And in those cases, we very quickly do better on memory use. You can take a single JRuby instance and run 500 threads through it rather than 500 processes, and that one 500-megabyte process is now your entire site. You don't have to worry about all those processes anymore.
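A sketch of that deployment difference, using Puma as an example app server — the thread and worker counts are illustrative:

```
puma -w 8 -t 1:1 config.ru    # MRI-style: 8 processes, 8 copies of the app
puma -t 32:32 config.ru       # JRuby-style: one JVM, one heap, 32 threads
```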
So that's kind of the rough breakdown. We try, every release, to look for egregious overuse of memory and reduce it. But there's only so much we can do being on top of the JVM, just the way it works internally. So yeah: if you're running a big application and can get rid of a bunch of MRI instances, we'll do great. If you have 300 MRI instances, you should probably look at making your app thread-safe — you'll save a lot of money. Try JRuby. We've had stories where people told us they went from running... it was like 35 extra-large instances on EC2, and anybody who's ever used EC2 knows that's a boatload of money every month — like $35,000 or something. They took those 35 extra-large instances running CRuby and moved to JRuby on something like 15 mediums. Something like $30,000 of savings every month, just by making the effort to move to JRuby. And that was the reduction in memory and better use of the resources. Alright, anything else? Yeah. Sure, go ahead.

One question that my colleagues also asked me: why doesn't JRuby track Ruby's point releases? Why am I running, with 9.1.7.0, Ruby 2.3.1 and not Ruby 2.3.3?

Right, right. So the question is why we don't just follow the same versioning scheme CRuby does. The biggest answer is that we need the freedom to do our own patch releases for various fixes. If we used 2.3.1 as our version number, do we do 2.3.1.1, or 2.3.1-jruby1, or something? It just starts to get confusing. We also have completely different issues — we'll have our own security releases that we need the freedom to rev our version number for. And major releases: if we want to break our APIs, we have to be able to pick a major number. Right — we wouldn't want to be bound to CRuby's release schedule for major releases if we need to do a breaking change in the middle of a year or something.

Yeah, but my question is also about the version that JRuby declares it supports. If you open JRuby right now and check RUBY_VERSION, it will say 2.3.1. The question is, why not 2.3.3, and what are we missing?

Okay. So if you start up JRuby right now — JRuby 9.1.7.0, the most current version — it reports a couple of point versions of CRuby back: 2.3.1 versus 2.3.3. The number we report is, very roughly, the feature level of Ruby itself, but more than that, it represents which version of the standard library we ship along with it. We always try to be current with features and bug fixes, and Ruby's point releases generally don't make feature changes so much; it's standard-library stuff that may change. So really, all you'd be missing out on is patches that happened in the standard library — and if there are bug fixes we haven't ported over, we need to catch up on those patches. So it's not much that you'd miss. And we should probably update that. Yeah, for 9.1.8. Well, yeah, for 9.1.8 if we do one — and we'll have the updated 2.4 standard library in 9.2, of course, which is already on a branch, so you can try it out. Anything else? Alright.

Do you guys have any suggestions on how to approach a big Rails app conversion — like where to start?

Bundler. That's the best place. Run bundle install, and that will pretty much tell you the first steps you need to take.
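A sketch of how that usually plays out in the Gemfile — bundler's platform blocks handle the swap. The gem names here are real, though whether you need them depends on your app:

```ruby
# Gemfile
source "https://rubygems.org"

platforms :ruby do
  gem "sqlite3"                             # C extension, MRI only
end

platforms :jruby do
  gem "activerecord-jdbc-sqlite3-adapter"   # JDBC-backed equivalent for JRuby
end
```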
By far the first problem people run into is that they've got C extensions — which they may not even have known they depended on — that potentially don't have a JRuby version. Most of the key ones do: JSON, Nokogiri, and the popular libraries out there usually have a JRuby version that works fine. Sometimes there's a pure-Ruby version you can fall back on. And sometimes you need to go and see if there's a Java library you can replace it with, if it's something you depend on directly. But we do like to know what those C extensions are — it's hard for us to tell which ones people are using. So yeah: bundle install, look for libraries that have C extensions or that don't install on JRuby, and that's your first step. Then you can run your test suite and see what fails and what doesn't. Ideally, if you get all the libraries to install, we should pass all the tests exactly the same as MRI. If we don't, it's probably our problem, and you should come to us with a bug report at that point. Or, if it's a known issue with a workaround, we can give you that information. And then you go from there. Once you get past the C extension thing and you're using mostly pure-Ruby stuff or JRuby versions of things, the last 10% is what you're working on at that point. That's the biggest hurdle for folks to get across. Anything else? Alright, thanks again.