on slides. My name is Tom Enebo. I'm Charles Nutter. We work on the JRuby project. Just quickly, how many people here are at their first RubyConf? That's at least half. And how many people know very little about JRuby? Okay, well, I'm glad we put these slides at the beginning. So we'll do just a quick couple-of-minute description of what JRuby is. JRuby is quite simply just a Ruby implementation built on the Java platform. And because of that, you get all the advantages and disadvantages that come with being built on the Java platform. If you have an issue with garbage collection, you can go and change your garbage collector and tune different parameters to fix that. It's got an awesome JIT, and it does a bunch of profiled optimizations, which I'll be talking about in a few minutes. Lots of tools. We run on crazy platforms. We run on OpenVMS. We run on an IBM mainframe. And probably the most visible change here against CRuby is that we can fully utilize native threads, because the Java platform provides native threads. Another major feature of JRuby is that you can pull in any Java library and interact with it as if it were a Ruby library. So you'll think that you're actually calling a Ruby class, but in fact it'll be a Java class. And so imagine for a moment that you're trying to do something with PDF generation and the Ruby gem that you're using doesn't quite do it. You can always pop over and try iText, which is a popular Java PDF library, and probably get what you need. Likewise, you can actually just call into different languages, and those languages will run in the same VM as JRuby itself. Probably the most popular example right now is a deployment style called the mullet, like the hairstyle. So they do all the serious business with Clojure on the back, and they use JRuby on the front with Rails. Kind of a reverse mullet, I guess. Party in the front, business in the back. Okay, so that was like a minute-and-a-half description of JRuby.
If you want to know more about why JRuby is interesting in those regards, there are lots of talks and lots of stuff online. We have lots of slides. We'll go more into depth on what JRuby 9000 is and what we're trying to do with it now. So our two main branches of development are master, which is for the upcoming JRuby 9000 release, which this talk is about, and a maintenance branch for JRuby 1.7. JRuby 1.7 itself is a multi-mode release: if you want to run with Ruby 1.8 compatibility or 1.9 compatibility, you just pass a command-line flag and it behaves that way. We decided not to do that anymore. So for JRuby 9000, we're just going to follow the latest Ruby release, which means we're targeting Ruby 2.2, and CRuby hasn't actually released that yet. We're pretty excited about finally being caught up with MRI. So now we can kind of move in lockstep in the future. So we wanted to put out a preview release for this conference, and we didn't quite make it. But then we decided that that was a good thing, because it'll give everyone here an opportunity to try JRuby and give us some feedback. Right. Well, it'll be a better preview as a result, as long as we get some of you folks to try things out and let us know what's going on. And as Charlie just said, it's really exciting: we plan on having a final release towards the beginning of January, which is only going to be a few weeks after CRuby puts out their 2.2 release. I think this is the first example I know of two implementations releasing that close together with a new feature release. And we put out new releases of 1.7 every three to six weeks. So in the next week, you should see 1.7.17. So as I said, JRuby 9000 is going to target Ruby 2.2 support. We have a new runtime, which I'll be talking about in a couple minutes. We've totally revamped IO and encoding support, which Charlie will be talking about. And we're removing a bunch of crap, like the multi-mode support I just mentioned.
So, since there are so many new people coming to RubyConf, I wanted to cover JRuby's warm-up. This quote isn't real, but we do actually see statements like this online all the time. This is usually people's first reaction to trying JRuby out: they run some tests and it doesn't run as fast. They gave it a quarter of a second. So let's start with three definitions. The first one is startup time. This is just how long it takes to start executing anything at all. This is typically only relevant if you want to make a command-line tool in Ruby that's not going to run long. After that, there's this middle phase, warm-up, where code is running, but it's not running as fast as it could because your VM hasn't optimized it yet. But once it has optimized it, you end up hitting steady state, where the VM is essentially done doing any optimizations. So a graph might help with this. If we look at the blue line, this is CRuby. It's actually a pre-release of 2.2, but I think it's about as fast as it will run this particular red-black tree benchmark. You'll notice that across each iteration, the number of seconds it takes is the same. It doesn't change. So there's no warm-up. If we look at the yellow line, this is JRuby with Java 7's invokedynamic support. You'll see at the beginning it's quite slow, so it's got a fairly dramatic warm-up curve, but by iteration six it's sped up and it's running quite a bit faster. We aren't so happy with the current state of warm-up for invokedynamic, so for right now our default behavior is the red line, which uses almost nothing from invokedynamic. But you can see that the warm-up curve and the initial startup time are pretty good. This is the current state of CRuby: they go and parse Ruby, they generate a set of virtual machine instructions which represent Ruby semantics, and then they have an interpreter that just walks those instructions.
Matz has hinted that in the next year or two they'll probably make a JIT, and it'll end up looking something like this, which is JRuby's current, very broad architecture. Again, we go and parse Ruby, we generate a set of virtual machine instructions that represent Ruby semantics, though different instructions from MRI's, and then we run an interpreter on them. Now, if we execute a method a lot, we pass it off to the JIT. We generate Java bytecode, and then the JVM goes crazy on that and optimizes it. So some people might wonder why we don't just JIT everything and make everything fast. Well, it comes back to that first definition, startup time. We need to get out of the gate as quickly as we can, because a major problem JRuby has had for a while is startup time. Charlie's going to be talking a little bit later about how we're addressing this now and in the future. But the other thing is, every time we JIT a Ruby method, we generate a Java class, and then we have to pass that off to the JVM. The JVM just keeps getting more and more classes. And the JVM's smart: it knows that if it receives a class and it's not hot, it's not going to do anything with it. So that class just sits in the JVM's bytecode interpreter, and it's wasteful to even bother. But primarily startup time is the reason we do this. Okay, so the recommendation for people that are new and want to do some benchmarking is to use a tool like perfer or benchmark-ips. These have rehearsal phases which try to keep that warm-up time out of the final numbers that are reported. And so I spent several minutes to say: just use these two tools. Some people might wonder why there's even a warm-up time. It was maybe obvious from the previous slide that in a mixed-mode runtime, the interpreter is slower than the JIT. So until a method is JITted, it runs slower, and then it gets faster. But the JVM is also mixed-mode.
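To give a rough feel for what those rehearsal phases do (benchmark-ips and perfer do this properly, with statistical rigor; this toy version uses only the stdlib Benchmark module, and the workload is made up):

```ruby
require 'benchmark'

# A made-up workload; on a warming runtime like JRuby, the first
# iterations of this would include interpreter and JIT warm-up cost.
def work
  (1..10_000).reduce(:+)
end

# "Rehearsal": run the workload a few times and discard the timings,
# giving the JIT a chance to kick in before we measure anything.
5.times { work }

# Only now take the measurement we actually report.
elapsed = Benchmark.realtime { 100.times { work } }
puts format('measured: %.4fs', elapsed)
```

The point is just that the numbers you report come from after the warm-up curve has flattened out, which is what those gems automate for you.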
So once we generate Java bytecode, that bytecode is interpreted, and then it gets compiled down to native code if it's hot. The next thing I'm going to talk about is profiled optimizations. The JVM collects statistics as it's executing code, and if it collects the right statistics, it'll try some optimizations. So we'll go through a highly, highly contrived example. I'm not going to show Java bytecode; I'm actually going to show Ruby pseudocode. And I'm fairly certain that these optimizations won't happen in the order shown here, or maybe won't happen at all for this example. But these are techniques that the JVM will use, and as we get better at generating good bytecode from Ruby, we'll take more and more advantage of them. They're all fairly common. So Java has a collection type called Vector. It's just a thread-safe array. In this case, we're creating one that's three elements long. And the body of this program is just a loop that does something to the vector and then sets it back to zero. Pretty contrived. If we look at the reset method below, you can see we just loop over all the elements and set them back to zero. So this thing runs through the loop maybe 20,000 times or something like that, quite a bit, and then the JVM decides that reset is a hot method and it's going to inline it. It basically just takes the method body and moves it over into the caller. Now we're no longer setting up a call, creating a frame, and passing a parameter through; the code has just been moved to where it's used. A little more time passes. Wow, this is really confusing. It looks totally different when it's not on that other screen. Anyways, no one cares. So it runs a little bit longer, and all the Vector methods are pretty hot, so it decides to inline those bodies too. Now, this is fairly interesting, and it gets to this cumulative optimization effect. Oh, I'll mention one other quick point before I say that.
This lock is just to indicate that this is a thread-safe method. Every call into that Vector basically has to synchronize, and that's what we're representing here. This will be important in a little bit. But if we look at the while loop, we'll see that we now have this arr.length, and that ends up being a static number. This array never changes size inside the loop; we always know it's three. And so we can just elect to unroll the loop. Now we get rid of the i variable, the bounds check, and the addition. So now we're at this point. It runs a little bit longer and the JVM goes, oh, I have three locks in a row against three contiguous pieces of memory. Let's coarsen them into a single lock. Then we run a little bit more and it goes, oh, this is just a memory copy. And it gets replaced by an arraycopy stub, which on most common platforms is just a handwritten piece of assembly that knows how to copy memory quickly. And you can imagine this might keep going on for a while. Eventually the JIT stops because it's spent enough time on it. But this explains warm-up pretty well, I think. The one thing I didn't say, and I want to put a plug out for this: Chris Seaton did a talk on deoptimizing JITs yesterday. You can't travel in time, but in a couple of weeks you should be able to watch the video. It explains this concept really well. Basically, as long as you can deoptimize an optimization, you can try crazy optimizations that give you great speed. In the worst case, you lose a little bit of time trying an optimization and fall back to a more conservative version if it doesn't work out. That's how HotSpot gets really fast. All right, so we mentioned that startup time has been one of our biggest problems, and we've tried lots of different ways to mitigate this. The one that we are pointing most people to as a first line of attack for improving startup time is the --dev flag.
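To make the contrived example concrete, here's roughly the same program sketched in Ruby (the class and names are mine; the Mutex stands in for Vector's per-method synchronization, and the comments note what the JVM's JIT would do to the Java version):

```ruby
# Stand-in for java.util.Vector: an array where every operation
# takes a lock, mimicking Vector's synchronized methods.
class SyncVector
  def initialize(size)
    @lock  = Mutex.new
    @elems = Array.new(size, 0)
  end

  def [](i)
    @lock.synchronize { @elems[i] }
  end

  def []=(i, value)
    @lock.synchronize { @elems[i] = value }
  end

  def length
    @lock.synchronize { @elems.length }
  end

  # The hot method: once the JIT inlines this into the caller and then
  # inlines the element accessors, it sees three fixed-index stores, so
  # it can unroll the loop (length is always 3), coarsen the three locks
  # into one, and finally replace the whole body with a fast fill stub.
  def reset
    i = 0
    while i < length
      self[i] = 0
      i += 1
    end
  end
end

vec = SyncVector.new(3)
1_000.times do
  vec[0] = 1; vec[1] = 2; vec[2] = 3  # do something to the vector...
  vec.reset                           # ...then set it back to zero
end
```

Each of those rewrites happens only because the previous one exposed it, which is the cumulative optimization effect being described.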
There are plenty of flags that you can pass to JRuby and to the JVM that will tune it for startup rather than long-term performance. We've bundled together several of those flags: turning off JRuby's bytecode JIT, changing the JIT configuration at the JVM level, and so on. And this will usually improve startup time by roughly 2x, so half as long to get going and running your application. It's definitely the first thing you should try if you're playing with JRuby and it seems like startup's too slow. Throw this into the JRUBY_OPTS environment variable or something, and that should help quite a bit. The other thing we're looking at is a project called Drip. Drip will actually spin up the next JVM in the background. So you run a command, and it spins up another JVM behind the scenes so it's ready to go for you. And there are various ways we can tune this to run some Ruby code and pre-boot your application, very similar to forking preloaders or the Spring preloader that's used in Rails 4.1. Hopefully we can get that to be part of standard Rails sometime soon. To show a little more graphically how Drip works: say you've got a command that comes in. There's no VM waiting for it, so it starts up a new JVM, starts up a new JRuby instance, and runs the command, but at the same time it starts up another JVM in the background to get ready for the subsequent command, assuming you're probably going to run something again. And then the next command comes in and uses that VM, and so on. Some numbers about how this improves things. This is the sad state of affairs of JRuby startup time without any tweaks. This is rake -T, just printing out all the tasks in a dummy, bare Rails application. And it's anywhere from five to ten times as slow to get going for this particular case. With --dev, it's usually about a 2x improvement; depending on what you're doing it can be a little more or a little less.
Drip has a couple different ways we can run it. We can just have it start the JVM, which gives this improvement. We can have it start the JVM and pre-boot JRuby itself. And then finally we can have it start the JVM, pre-boot JRuby, and perhaps boot your Rails application. Now we're actually getting to the point where we can run Rails commands at the command line as fast as or faster than MRI, as long as you give it a few instructions on what it needs to pre-boot behind the scenes. And we'd like to work with Rails core and hopefully get this to be a standard feature, like the Spring preloading, so that every time you use a Rails command with JRuby it does this automatically. We're both staring at Aaron. Yes, we're both looking at Aaron. Our new coworker, Aaron. Welcome to Red Hat, Aaron. So now we're going to talk about the new runtime. It was uninventively named IR, for internal representation. We didn't like working with abstract syntax trees, and we wanted something that represented semantics better. And most importantly, we wanted something where, if a new contributor had taken a compilers course or read a book on compilers, the vocabulary would be the same and the algorithms would be the same, so they could jump in easily. It's also so we can get interns to come and work on the compiler. So in 1.7, all we did was parse code and generate a syntax tree, and then our interpreter and JIT would just walk that tree to do their thing. Now, in the 9000 world, we generate these virtual machine instructions which represent Ruby semantics, and we generate some supplementary data structures like a control flow graph. Once we've made these, we go into the optimization phase and start running compiler passes, which might mutate these data structures, and then we interpret those instructions. And as I said earlier, once things get hot enough, we pass them off to bytecode generation and make them faster.
It's very similar to what the JVM itself does, but we're doing it above the level of the JVM. So here's our first look at instructions. I'll just go over a couple quickly. At instruction zero, we make sure that there are two required arguments; if we get more, we throw an error. From lines one through three, we just bind our local variables to the parameters that are passed in, including a special variable called block, which will receive any block that's passed in. Let's see, at line eight we call the method + on the receiver a and give it the argument c, so a + c. It's pretty easy to read, I think. During semantic analysis, we fix up a lot of things that caused a lot of complexity in the past with the AST. In this case, the dreaded zsuper, which I hate. We just propagate the arguments of the method down to the super call site, and then it's no longer a zsuper, it's just a regular super. And now we can get rid of a lot of logic, and the JIT can allocate that much more efficiently. Right, this form of zsuper is surprisingly complicated to implement in any implementation. And with this analysis that we're able to do now, we boil it down to a very small portion of cases that actually need all that complexity. Most of them are just plain supers now. And that code's still icky. So when we decided to switch over and use the new runtime, I kind of forgot that set_trace_func wasn't implemented at all. But that was a pretty simple thing to correct: I just created a new instruction called Trace and instrumented it in. It was very simple to do. Optimizations: if anyone's taken a compilers course, a lot of these compiler passes will look very familiar. And there are a lot more to come; these are just some basic ones we've got now. Method and block inlining and unboxing of wrapped types won't be in the first release of IR, but we actually have the passes implemented, and they will be coming in point releases.
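For context, set_trace_func is the hook that the new Trace instruction drives; a minimal sketch of the API (the method name here is made up):

```ruby
# set_trace_func installs a callback the runtime fires on events
# like 'line', 'call', and 'return'. In JRuby 9000, the new Trace
# instruction is what emits these events from the IR interpreter.
events = []
set_trace_func proc { |event, _file, _line, method_id, _binding, _klass|
  events << [event, method_id]
}

def traced_method
  1 + 1
end
traced_method

set_trace_func(nil)  # uninstall the hook when done; tracing is slow
```

Because every event has to flow through this hook, an implementation needs an instrumentation point on essentially every line and call, which is why it's convenient to model it as just another IR instruction.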
Right, so we have prototyped reducing float operations down to actual low-level 64-bit floating-point operations, and it runs pretty close to what you would get if you were to write it in C or Java. Coming soon. So here's just a simple example using that last code snippet. The first thing we notice is that b's not used anywhere in the method, and we also notice that the block's not used anywhere in the method. Boom, dead code elimination gets rid of them. Here we notice that c is assigned a constant value. (I was just learning Keynote actions when I made the slides. Forgive me. I can't take it out now. I think it's too good.) But now line six is no longer necessary, because after propagating the constant we don't need c anymore. And then, last and probably least, line number instructions are there so that when you have an error, it prints out what line the error occurred on. Nothing can happen between one and two, so we can just excise that. And we went from eleven instructions down to six or seven, whatever. Fewer instructions is good. Right, right. Well, especially for the JIT side, when I'm generating code. So the other big area that I spent way too much of my life working on this past spring is fixing up all of our remaining IO, character transcoding, and encoding (m17n) bugs. So a little background here. The IO class in CRuby is really just a pretty thin wrapper around a file descriptor. Not even a FILE*, like a buffered file; it's just a file descriptor with its own buffering logic. Pretty much a one-to-one mapping to file descriptor operations, and then for the FILE*-style buffered operations they have extra logic. The JVM gives us a different abstraction, called a channel. A channel is largely a thin wrapper around a file descriptor with no buffering. And so initially we kind of had to emulate libc file descriptor operations by wrapping NIO channels and trying to, you know, fix them up so they'd look sort of like actual libc operations. It looks sort of like this.
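The reduction described above looks something like this in Ruby source terms (method names are mine, and the optimizer actually works on IR instructions rather than source, but the effect is the same):

```ruby
# Before optimization: b is never used, the block is never used,
# and c only ever holds the constant 10.
def add_before(a, b, &block)
  c = 10
  a + c
end

# After dead-code elimination plus constant propagation, the body
# is effectively just this -- fewer instructions to interpret or JIT.
def add_after(a, _b)
  a + 10
end

p add_before(5, :unused)  # prints 15
p add_after(5, :unused)   # prints 15
```

Both methods compute the same thing, but the second compiles to far fewer instructions, which matters for both the interpreter and the generated bytecode.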
We had a ChannelStream that represented buffering, like a FILE*. We had a ChannelDescriptor that represented a virtual sort of file descriptor concept. And then the JVM class is in there, a channel, and some read/write buffering. And this worked pretty well, except that there are a lot of issues with trying to emulate these low-level POSIX APIs on top of this other Java construct; a lot of those behaviors just didn't exist. Then transcoding: the JDK also has transcoding support, but because the JVM internally uses UTF-16 everywhere, if we wanted to use that support we had to pass through UTF-16 and then back out to whatever encoding it was. So that's more overhead that we have to deal with, extra copies of data around. Not so great. So, lots of problems with this. I mentioned behavioral issues between a standard POSIX file descriptor API and the channels; that was an issue. The transcoding that MRI does on reads and writes happens directly against the buffers and that file descriptor, so we needed to be a little more low-level to be able to emulate it exactly the same way MRI does. And when we passed data through a character array, sometimes the errors wouldn't match: it might fail on the way into the character array, or it might fail on the way out of the character array. It was very difficult for us to raise the same errors at the same time as MRI, and people actually reported these issues; we were reporting errors wrong for transcoding problems. And then, like I said, it's hard to emulate a lot of POSIX semantics. So this is the new way. This is the work that I spent several months on over this past spring. We've basically taken the MRI way and pretty much wholesale ported it into JRuby, all the way from the lowest levels of transcoding up through IO, to have full compatibility; pretty much line by line we can match up with what MRI does. And we're going to have J actually walk us through this here.
So J will walk us through the whole process here. First of all, we go to our IO object, and we're looking for buffered characters that may have already been read in and transcoded. Assuming there's nothing there, we proceed to the next phase: we look in the read buffer, the raw buffer that hasn't been transcoded. If there's still nothing there, we actually need to go down to the file descriptor level and read in raw bytes off the wire. So J goes over here, gets some bytes from the file descriptor, and pulls them back into our read buffer. Now, if we don't need any transcoding at this point, those bytes may just come straight out, and we avoid all the extra overhead. But we'll assume that we do have to transcode. Say it's UTF-8 on the file system and we want to pull it in as UTF-32 or something. So we pull the bytes over here, the transcoder works on them a little bit, turns them from encoding A to encoding B, and stuffs them in the character buffer. Now we've got characters available, the work has all been done, and we can pull those back up, create a string out of them, and we're ready to go. This was a lot of work to implement. There's a very interesting piece of code at the very heart of the transcoding subsystem. Are you talking about the IO or that slide? Well, I'm talking about both. So it's a challenging piece of code to write, but it is working extremely well. We've managed to pass almost all of MRI's IO and transcoding tests now, where we passed a very, very small percentage of them before. Performance is significantly better, because we don't have the extra overhead of the JVM transcoding. And like I say, all the semantics of IO and transcoding should now match CRuby. Hopefully we've finally gotten past all of these issues with encoding and making things match. The other part I want to talk about quickly is the JIT work that we've done.
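In plain Ruby terms, the path J just walked is what happens underneath a read with both an external and an internal encoding; a small sketch (the file name and contents here are made up):

```ruby
require 'tempfile'

# Write some UTF-8 bytes to disk: "résumé". binmode so the bytes
# go out raw regardless of the platform's default encoding.
file = Tempfile.new('transcode-demo')
file.binmode
file.write("r\u00E9sum\u00E9")
file.close

# 'r:UTF-8:UTF-16LE' means: raw bytes are read off the descriptor
# into a buffer as UTF-8, run through the transcoder, and handed
# back as a UTF-16LE string -- the pipeline described above.
text = File.open(file.path, 'r:UTF-8:UTF-16LE') { |io| io.read }
puts text.encoding  # prints UTF-16LE
file.unlink
```

Matching MRI means every step of that pipeline, buffering included, has to fail or succeed at the same points with the same errors.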
So, the old JIT. A little background on it: up through JRuby 1.0, we only had an interpreter, and we realized that if we were going to get fast on the JVM, we needed to start generating JVM bytecode. So JRuby 1.1 added the first version of our JVM bytecode JIT. It worked pretty well. It was my first compiler, and I'm proud that it works reasonably well and that there's lots of production code running on it. But it was a lot of code, and it was really difficult to maintain. Plus, all of the optimizations I did within the JIT were one-off, quick things. They didn't apply to the interpreter; only the JITted code got benefits out of them, and they were hard to evolve and improve over time. Plus, you know, 16,000 lines of code. It was a lot of stuff to maintain. So the new JIT is probably about 95% done. That really translates into almost 100% of code. The last two or three things that don't work are features you may not even know about, so you probably aren't going to run into them. We will finish those up for the final release. And right now it's about 5,600 lines of code, which is not too bad. It can probably be shrunk down a little more; we haven't done a lot of cleanup and refactoring, but it is less code. More importantly, a lot of those optimizations that were one-off, ugly pieces of code in the old JIT now happen at the IR level. So if the IR can see how to optimize something, both the interpreter and the JIT are going to get better code, and you're going to get a better warm-up curve and reach full production speed faster. And I think it's safe to say it's easier to debug an optimization in the interpreter than in the JIT. Yes, it definitely is. And it proves out those optimizations in both modes. So it's going to be a lot nicer, and there's a lot of runway to improve this too. Okay, so we're going to talk a little bit about tools. Obviously, there are lots of great JVM tools. This is a slide about the VisualGC plugin for VisualVM.
If you have a JDK installed, you get this tool, and this plugin basically gives you a live view of what the garbage collector is doing. It fills up certain generations, then decides to clear them out and goes back down to zero. You can see graphs over time of how much memory is being taken up, how much CPU is being spent on garbage collection, and all that. And there are lots and lots of these tools, hundreds of different tools for the JVM. Any analysis tool that you've ever heard of for any other Ruby implementation, we have here, and we'd love for you to try playing with them. But we recognize that these are also very much targeted towards Java applications. They give you information at the JVM level, or at the Java object level. And so Tom's done some work to write a more Ruby-centric version of this. I'm going to show you a slightly less eye-candy tool that ships with OpenJDK, called jhat. Once you generate a Java memory dump, which is very easy to do, you can load it up in jhat, and it starts a simple web app that looks like it was made in, I think, probably 1992. It's like a gopher interface. But if you look at the names in here, they're quite obviously not Ruby; they're Java classes. So let's pretend that we're trying to figure out a memory leak, and we think this one Java string might be it. We can figure out that it's an instance of a Ruby string. What does a string contain? Well, there's a value field; it's a ByteList. That looks pretty promising. So let's burrow down and look at the ByteList. Okay, bytes. Well, that sounds like what's in a string. There are bytes in there, sure. Yeah. Oh, but what the hell is [B if you're a Ruby programmer? So this isn't too friendly. But I would say this is even less friendly. Of course, the JVM wants to work with characters, and if it's not characters, it's just garbage, and it will show you garbage in the tool.
That's obviously not helpful for a Rubyist, because who knows what that is. It totally makes sense that it would be displayed this way, because it might not even be a string at all, and if it is a string, the tool doesn't have any encoding information, so it doesn't know how to represent it. But this isn't really what we want Ruby programmers to be looking at when they have memory leaks. So, a sort of on-the-side project at this point, without too much time spent on it, has been translating a Java heap from a heap dump into a JSON document that represents a Ruby heap dump. And then there's a companion, maybe an embarrassing companion project, the alienist viewer, which is a Rails app. I'm not a Rails programmer. It'll go and view this. So you run your JRuby program, you dump the Java heap through one of the various tools, and then you run alienist on it and generate your JSON. And you can see that we boiled 600,000 Java objects down to 11,000 Ruby objects. So we're getting rid of a lot of stuff that you're probably not interested in seeing. Here's the JSON: you get a class name, the instances, tons of IDs; it's a scary graph of objects. If we actually look at this in the alienist viewer, we can look at individual instances; we get the ID and the instance variables listed within it. If they're small enough to be displayed inline, we just display them, but there's always a link, so you can actually go to that full object definition. You can see what's referring to it. It seems like a much better way to go and sniff around and figure out where your leak is. There's also an additional view which just lists the number of instances per class, but this is more of a proof of concept, really. What I would like is for someone to step up and start working on this, especially on the MRI side; there's nothing in the JSON dump that couldn't be provided on the MRI side as well.
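To give a feel for the idea, here's a purely hypothetical sketch of what a Ruby-centric dump entry might look like; the field names are illustrative guesses, not the tool's actual schema:

```ruby
require 'json'

# Two made-up heap entries: a String and the Array that references it.
# A real dump would have one entry per live Ruby object.
dump = [
  { 'class' => 'Array',  'id' => 1001, 'refs' => [1002] },
  { 'class' => 'String', 'id' => 1002, 'value' => 'leaky?',
    'referenced_by' => [1001] }
]

json    = JSON.pretty_generate(dump)
objects = JSON.parse(json)

# A viewer can now walk references in Ruby terms instead of Java's.
string_entry = objects.find { |o| o['class'] == 'String' }
puts string_entry['referenced_by'].length  # prints 1
```

The key property is that every field is something any Ruby implementation could emit, which is what would make a shared format possible.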
Now, on sharing tools: we'd really like to be able to share analysis tools and client tools, and come up with some common formats, because it's Ruby objects in a big graph in memory, right? We should be able to represent some of that, or maybe all of that, as a common format. And we know there's extra information that MRI might be interested in that we don't have, so we just need to make it a little extensible so that we can add those personalities to it. And we really need this as a community, because, I don't know, I don't think our tooling is so great in the Ruby space for debugging memory dumps. All right, so we'll talk a little bit about some of the to-dos we have. Like I mentioned, the JIT is about 95% done, maybe a little higher. I'm going to finish off the last pieces of that. Along with that, a slightly heavier piece of work is to get AOT, ahead-of-time compilation, on the command line working again, since we had to completely rip out the old JIT, which means the old AOT kind of went with it. That'll come along before the final release, for sure. We're going to continue to try to improve the compile time and interpretation time of IR, which will feed directly into startup. Ideally, we can reduce that down and make it as good as or better than what we had before. Tom has been working on refinements; he didn't quite get it done for this week, but it shouldn't be too difficult. Then I'll take it to the JIT and see how I can optimize it and make it as fast as possible. We really want help from folks to come in here and look at the MRI tests that we don't pass. I'll talk a little bit later about how you can actually get involved. It's really easy to find something to work on: just look at our list of gaps and start working. And are there any developers on Windows here? Yeah, that's kind of the way it works with Ruby. JRuby works great on Windows up through 1.7, but we have not had a lot of time to test and update JRuby 9000 for Windows support.
So if anybody's on Windows and would like to help us make sure JRuby is as good on Windows as it is everywhere else, that's a great place for you to help too. So let's talk about what's involved in doing that. First of all, we're here all week. We're actually staying till Friday. So if you try it out, if you want to test something on JRuby or play with it, we're going to be around. We're happy to sit down and work through your issues, help you optimize, or help you get stuff running. Like Tom said, we wanted to release a preview this week, but we feel like it's better to get people trying it out before we do any sort of release, to fix as many things as possible and get it working for more people. So you can give it a shot today or tomorrow and come find us. As for the different ways you can try it out, Travis is an easy one. You can add jruby-head to your Travis build if you've got a library or an application that tests on Travis. It's easy to add. If it doesn't work, throw it into expected failures for now and let us know. We want to try to get all the libraries out there in the Ruby world running against our head builds. And we have these head builds set up so that every time our JRuby master branch has a green build, it actually updates in Travis, so your next build will pick up the absolute most current version of JRuby 9000. You can also just pull down a snapshot build yourself. All of the Ruby installers handle this. In rbenv it's jruby-9000.dev; it may be without the dots for now if you haven't updated, but I just did a patch this morning to change it to the official version number, which is 9000 with dots in it. jruby-head is what you want for RVM. Be warned that it will do a full install, so it takes a little bit longer to get going, but then you've got actual full JRuby head and you can keep it up to date. And then we do push snapshots on a nightly basis.
Here's the URL where you can go and download either the source or the binary distribution. Now, if you find an issue or you just want to help get involved with JRuby, it's really easy to set up for development too. Clone the repository and switch into it. You'll probably want to put bin on your PATH so that you've got the jruby and irb commands as the primary ones you're going to use. Then run mvn -Pbootstrap and go get a cup of coffee while it downloads everything it needs, but then you will have a working build with just the one command. Very simple. It runs in place, there's no install phase, and you'll be able to start trying stuff out and iteratively do some work on JRuby. If you do get involved, these are probably the most important paths within our repository. There's lots of stuff in there, but the basics are the core code; the standard library, which is mostly the same as MRI's, we have a few light changes to it, but the build actually copies it in and gets it ready to go; and then our tests. We run several test suites, but the two most important ones are MRI's test suite, which obviously has all the coverage for new features, and RubySpec. Under the MRI suite there's an excludes directory: basically just a bunch of files named after test classes, like TestArray.rb, each with a list of test methods that are excluded, things that we don't pass yet. It's very easy to grab, say, TestString.rb, pick a couple of things out, see why they don't work, and then help us figure out how to fix them. Same thing with RubySpec: we have some stuff tagged off, and it's really easy to go into the tag files and look for something where you can contribute to JRuby. Everything in there we should be able to run, so help us get there. And then finally, a word about the dev process here. We're just all part of the same GitHub family as far as we're concerned. So if you want to contribute to JRuby, just follow the standard GitHub process: fork it off, create a branch, and send us pull requests, and we'll basically just pair with you on getting that change in.
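As a rough sketch of how those exclude files work: each entry maps a failing test method to a reason string. The real `exclude` helper is provided by the test harness, so here's a toy model of the mechanism (the method names and reasons below are made up, not real JRuby exclusions):

```ruby
# Toy model of the MRI-test exclude mechanism: the real harness
# defines `exclude` and loads files named after the test class.
EXCLUDED = {}

def exclude(test_method, reason)
  EXCLUDED[test_method] = reason
end

# What entries in a hypothetical excludes/TestString.rb might look like:
exclude :test_succ,  "needs investigation"
exclude :test_scrub, "encoding work in progress"

EXCLUDED.each { |method, why| puts "#{method}: #{why}" }
```

Picking a fix is then just deleting an entry, running the corresponding MRI test, and figuring out why it fails.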
We've had dozens of PRs in the last week or so for new functionality and bug fixes. It's really the best way for us to work with this. Our great privilege on the core team is that we have to deal with putting releases out, but we depend on all of you to help us get the release ready. And we want it to be a fun process. If you submit a patch and it maybe needs a little change, we're not gonna yell at you. Hopefully people that have contributed before will attest that we're very open to new contributors, regardless of your skill level or your experience with Ruby, Java, C, anything. Just give it a shot. We want to try and educate you on being a better contributor, and we want to work with you to improve JRuby. We care. We have a heart. It's the implementation with a heart. And this is really the payoff for all this. Nine years ago I came down to San Diego and presented JRuby for the first time. Many years before that, Tom started actually working on JRuby, and this is what a decade of work on JRuby has gotten us. These are all JRuby users; almost all of them just replied to a tweet on the JRuby account within about 12 hours saying, we love JRuby, we want people to know we're using JRuby. And there are obviously some big folks on here: BBC News, Visa, Comcast. Square runs a massive part of their infrastructure on top of JRuby. So this is what you're helping to do, and this is what we've managed to do for the past 10 years: we've enabled all these companies to have a better option for building applications than whatever they were using before. Some of these companies wouldn't exist if we didn't do JRuby, honestly. So you can join the family, jump in, help out with this stuff. It's really a lot of fun, and we're looking forward to working with you. And then one more thing: beer tour. How many people drink beer? There's a few.
All right, so if anybody's sticking around through Thursday, Hiro, one of our other contributors, has arranged, where is Hiro? There he is, over on the side. We've got a few slots left on our beer tour bus, and we're just gonna basically take about six hours and go around to a bunch of San Diego breweries. If you wanna try something out before then, we can talk about it on the road, work on some bugs over a beer. Check this out. There's probably only about eight slots left. So if you wanna jump in, go on the JRuby beer tour with other JRuby people. JRuby and beer. JRuby and beer, what could be better than that? So yeah, jump in on that, and hopefully it'll be fun. And that's about all we have. It looks like we have about five minutes for questions or so. Thank you.