So, it's time to start, right? Okay, thanks for coming today. I'm Tom Enebo. I'm Charles Nutter. And we're the JRuby guys. We've each been working on JRuby for like 12 years. Over 30 years, right? Yeah. Well, combined, yeah, working together, about 30 years. Yeah. And Red Hat is gracious enough to employ us to work on JRuby. Which leads to a question we've been getting a lot this week. Yes, Red Hat has been acquired by IBM. But we don't really know anything. We have no idea what this means. And if we did know something, we wouldn't be telling you right now either. But it sounds like the deal is going to take a long time to close. So it'll be fun to see what happens. Or I hope it's fun. All right, so we're going to talk a little bit about JRuby. How many folks know JRuby? Have you used it a little bit? So most folks are fairly familiar with it. We always emphasize that JRuby is really just a Ruby implementation. Our goal is to make it as similar, as easy, and as friendly to use as regular CRuby. We want it to work exactly the same. Pretty much all pure-Ruby gems should just work, modulo little implementation bugs that we continue to fix over the years. But generally, anything that's pure Ruby, if it doesn't work right, just assume it's our problem. Let us know, and we'll fix any issues. But they should be good. It should be solid. There are a lot of C extensions out there. We have support for most of the big ones, like Nokogiri, JSON, stuff like that. We'll talk a little bit more about compatibility and C extensions later. But even though we are a Ruby implementation first and we want to be the best Ruby we possibly can be, we are also a JVM language, which means we have full access to the whole power of the JVM platform and all of the capabilities there. For example, JVM tooling. This is a view of a tool that's part of OpenJDK called VisualVM. I think it's been replaced by Mission Control now, a similar app.
But this is a live view of the JVM's garbage collector, showing the different generations in the garbage collector filling up, getting collected, how much CPU time is being spent collecting and dealing with each generation. Over on the left, you can see a live view of how much CPU is being used overall, how big the heap is, and the nice sawtooth there showing that it's being filled and cleaned up, just like you'd expect with a GC. Of course, JRuby runs on the JVM, which has real native parallel threads. So a single JRuby instance can saturate all the cores in your system. You can take JRuby on an 8-, 16-, 32-, whatever-way system, run one process, and that's the entire site. You can run the entire application rather than having multiple processes as you need in CRuby. One JRuby could do the whole thing. We also have access to a lot of the fun stuff that's on the Java platform. This is a video of a Minecraft plugin using a Ruby plugin API based on JRuby. Here, Tom's little script is modifying the number of chickens that hatch out of an egg from, what is it normally, one, maybe two? Yeah, sometimes it's more than one, which is weird. To 120. And Tom always tells the story that he was playing around with this plugin and basically destroyed one of his Minecraft worlds by creating way too many chickens and just couldn't get rid of them. I tried to solve it by actually changing the chickens to wolves, because wolves eat chickens, and that didn't work. Okay, so we have two versions of JRuby that we support right now. We have 9.1, which has Ruby 2.3 support, and we have 9.2, which has 2.5. We, in fact, skipped 2.4 support. Whoa. Another slide that we didn't update. Okay, well, anyways. So, we're planning on retiring 2.3 support. We'll do at least one more point release if anyone's concerned about this. Come and talk to us afterwards, because we want to find out how stuck you are on 2.3, or if there's something we're not doing on 9.2 that we should be doing.
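The "one process saturating all cores" point falls straight out of the threading model. A minimal sketch using only the plain Ruby thread API (so it runs on CRuby too; on JRuby these become real parallel native JVM threads, on CRuby the GIL serializes them):

```ruby
# Four Ruby threads doing CPU-bound work. On JRuby each one runs on its
# own core; on CRuby they take turns, which is why CRuby deployments
# scale with processes instead.
results = Array.new(4) do |i|
  Thread.new do
    (1..100_000).reduce(:+) + i   # some CPU-bound busywork
  end
end.map(&:value)

puts results.inspect
```

The code is identical either way; only the parallelism differs, which is what makes moving a threaded app to JRuby mostly a deployment change rather than a code change.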
Are there a lot of folks still running Ruby 2.4 or lower? Show of hands. Yeah, a few folks. Okay. There's not a whole lot in 2.4 or 2.5 that really breaks 2.3 apps. I mean, your migration, was it fairly painless going to 2.4 from 2.3? Sure. Sure? Okay. All right, so it's not too bad, but yeah, if anyone has concerns about this, let us know. We're probably not going to spend a lot of time on our 2.3 branch in the future. Yeah, and the motivation for retiring it is that this Christmas Ruby 2.6 will come out, and we don't want to support three different versions of JRuby. So between 9.2.0 and 9.2.1, that was a five-month period. We're very sorry. And then in the last seven days, we put out four point releases. We made up for lost time. It would be like 30 minutes after each release, we'd get a report, and we're like, oh crap, we have to put out another release. But we're not going to keep putting out four releases a week. We should slow down to about once a month, or a month and a half, and try to stay true to that. Okay, so how do we continue going forward? Well, there are always lots of new features in every Ruby version. This is a great opportunity for anyone here to contribute. We do have a portion of JRuby that's implemented in Ruby. So if you see a new Ruby feature, an addition to String or Array or something, and you see Ruby code that can implement it, or you know how to implement it in Ruby, we certainly accept the patch. Feel free to submit it as Ruby code. Generally, that will be fine. Sometimes there's performance-critical code that we might port into native, into Java or something. But generally, a Ruby implementation of a new feature is fine. And this is really how we keep up with Ruby development. We've had tons of folks who have come in and helped us with individual one-off features. We work on some of the big features ourselves too, but we couldn't do it without the community.
We have an IRC channel on Freenode, but we're also hosting a Gitter jruby/jruby room, and of course Twitter, email, whatever. Just contact us however you like, and let us know what we can do to help you. So, library compatibility. I mentioned that pure-Ruby libraries run well. So Rails, Rake, all that stuff, those work great. They work just like they do on regular CRuby. And like I mentioned, if you have a pure-Ruby library that doesn't work like you expect, don't assume it's your fault. Let us know, and we'll see if there's something we need to fix in JRuby. I mentioned that we have support for lots of native extensions. Tom's gonna talk a little bit about some work that we're doing to support a new one, and that's kind of how we keep up with the C extension side. We don't support regular Ruby C extensions, but we're continuing to work on porting more of those over and working with the community. Yeah, so let's talk about Oj. It's Optimized JSON. We've gotten a number of requests over the years to support this. In particular, it's a pain point for developers because if there's a direct dependency on Oj, they can always switch to JSON, but if they're using Oj's APIs, then they have to change a little bit of code. But even more problematic is that people are increasingly pulling in Oj as a transitive dependency, and that makes it much more difficult to go and try JRuby out if you don't support it. And it's a native C extension, as Charles said. And the reason why we haven't done it is that it's a pretty big one. Almost 20,000 lines of C. You wouldn't expect that in a JSON library, but part of the reason is that there are seven different modes, with a string- and a stream-based parser for each one and the ability to dump, and then there's a compatibility layer called Mimic that allows you to emulate the JSON gem. So our current, ooh, whoa, don't give it away too soon. So the Java side now is nearly complete, and we're at about half the lines of code.
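For anyone unfamiliar with the API difference in question, here's a rough sketch. The JSON-gem calls below run anywhere; the comments note Oj's own entry points (Oj.dump, Oj.load, Oj.mimic_JSON), which is the bit of code you'd otherwise have to change:

```ruby
require 'json'

payload = { "name" => "jruby", "modes" => 7 }

text = JSON.generate(payload)   # with Oj's own API: Oj.dump(payload)
back = JSON.parse(text)         # with Oj's own API: Oj.load(text)
# Oj.mimic_JSON monkeypatches the JSON-gem API onto Oj, which is part
# of why it sneaks in as a transitive dependency so easily.

puts back.inspect
```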
And this is mostly just due to the fact that Java has a better ability to share code. So Mimic hasn't been implemented, and WAB is the last mode that we're working on. And you can see that we're down to 50-ish bugs. So it's not gonna be too much longer. So let's look at some numbers. I went and grabbed the benchmarks at the URL at the bottom. They have three different-size JSON payloads, and they kind of mix up the data so it's representative of what you would normally work with in JSON. I took their word for it. The three numbers in each section represent JRuby running with the regular JSON gem and then the Oj port. And you can see there, it's about three times faster. So if you're already a JRuby user, this should help a lot. And you can even see that we're maybe two or three times faster in some cases against Oj on the CRuby side. So this is pleasantly surprising. When we go to the dump side of things, again, you can see that the JSON gem on JRuby just doesn't perform as well as the Oj version. And against the MRI side of things, we're doing a little bit better, but not as much. This is one of the reasons that we're not really concerned about not having C extension support. We've tried to support it in the past, but compatibility-wise, it's just a really massive, difficult API to support. But here, whenever we take a C extension and port it into maybe Java, sometimes even Ruby — there's a red-black tree library that mimics a native extension — we often can end up being faster than the C version with less code. So usually there's an initial effort to get extensions ported over to JRuby, but then it's pretty solid from that point on. And most of the common ones have been done already. So the only thing we have to do is fix those final bugs and then submit a pretty epic PR. But we're committed to this. And the other thing to point out is that there's been pretty much zero performance tuning.
There are a few things where we tried to do a very literal port from C, and now we can start leveraging some of the things that we know Java can do better. So, a new section. We've been running Rails since 2006, and there have been plenty of people who have deployed JRuby on Rails applications. But about the time we were working on Ruby 2.5 support, which was originally gonna be 2.4 support, we kind of fell behind a little bit. In particular, most of the Rails stuff just worked as long as you didn't use a database, which is not very useful. So ActiveRecord-JDBC fell a bit behind. But that's been solved now for probably a year, or close to a year. And here are the results of us just running against Rails 5. Unfortunately, you see a few things that aren't working. This, again, was us falling a little bit behind and not engaging Rails core as much as we should. In fact, in Rails 5.1, we didn't have any errors in Action Cable or Railties, so this was just changes to the testing environment. But when we go and see fork present in tests, we get sad. We don't support fork, and we'll never support fork. And those numbers are like 99.9% of Rails tests passing, and most of the other ones end up looking like this. Yeah, so this also makes us unhappy, but this is our problem, and we'll do our best to fix it. If anyone is concerned about time precision down to like the ten-millionths place, let us know. So the tests are running well. In general, the takeaway here should be that you can just run Rails with JRuby right now and you shouldn't experience any problems. Obviously, if you do, report them. One of the big outcomes of our Rails 5 work, though, is that we're only gonna support the big three database adapters. However, we've gotten enough community support that the MSSQL stuff is coming along, so we will probably be supporting that pretty soon. For Oracle and DB2, we request that you just go and use a third-party gem like Oracle Enhanced.
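Concretely, pointing a Rails app at the JDBC adapters is a Gemfile change along these lines (a sketch; the gem names are from the published activerecord-jdbc-adapter family, but check the ActiveRecord-JDBC README for the pairing that matches your Rails version):

```ruby
# Gemfile (sketch)
platforms :jruby do
  gem "activerecord-jdbc-adapter"
  gem "jdbc-postgres"   # or jdbc-mysql / jdbc-sqlite3 for the other two
end
```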
All right, so let's talk a little bit about performance. That's the main subject here today. JRuby has what's called a mixed-mode architecture. We have our own interpreter, we have our own compiler. The code gets parsed and compiled into our internal representation, our Ruby instructions essentially. We interpret that for a little while. When it looks like certain methods are hot, are being used a lot, then we compile them into JVM bytecode and toss them over to the JVM. We let the JVM chew on it for a while, and the JVM is also a mixed-mode runtime. So it runs the bytecode through its interpreter for a while, and then eventually it uses one of its various JIT compilers to turn it into native code. And so in this way, JRuby was actually the first native JIT for Ruby, just by leaning on the JVM and letting it JIT bytecode down to native code. So I'm gonna show a couple of micro-benchmarks here, and here are the disclaimers. These are very fun to show off a lot of the time, and good for us as implementers to see them improve. But a lot of these benchmarks aren't particularly useful unless you're generating a lot of fractals for your business. We have won these micro-benchmarks for years, and so we don't put a lot of stock in them for comparing with other implementations. It's more for us to see improvement over time. But we will also have some Rails benchmarks a little bit later that are a bit more practical. So I wanted to mention one of the big features on the JVM that's been helping us since the Java 7 timeframe, about six, seven years ago: a feature called invokedynamic. This is a bytecode at the JVM level that basically lets the JVM see our dynamic Ruby calls and constants and instance variables and everything else that's dynamic in Ruby as though it were static, as though it were written as Java code, and it just inlines, optimizes, and JITs like we'd expect it to. This has been improving over time. There's a little bit of a startup hit for the extra dynamicity of it.
But performance has been improving over time. Startup has been improving over time. It is still only enabled in JRuby via a flag, partially because of the startup-time hit that you get from it. But if you flip this flag on and you don't see things get much faster, let me know, because it should help quite a bit. So the first useless micro-benchmark here is a Mandelbrot generator. I'm not sure if this one generates an actual graphic, but it generates a Mandelbrot fractal, and then we just see how fast we can do that. So it's mostly a test of numeric performance. It's heavy floating-point operations, some integer math as well. And this is another place where we lean on the JVM and newer JIT compilers like Graal to help us optimize stuff. Here is the code. You've probably seen a variation of this, but I mean, it's just a lot of math. A lot of loops and math. The big problem for us is that every numeric object we create, a float or an integer, is actually an object on the JVM. And so running the Mandelbrot is gonna create millions upon millions of floating-point objects in JRuby, whereas in CRuby they have various native tricks they can do to get rid of those. So we're leaning on the JVM to get rid of that extra overhead, to try and sweep those objects out of the way. So let's see what this looks like. Here is a comparison of JRuby with and without invokedynamic, compared to CRuby with and without the new JIT. And in CRuby, oddly enough, the JIT does not help this particular case very much. It may be due to, if you saw Kokubun's talk yesterday, not being able to optimize away some of the extra math logic. It might be the floating-point math. I'm not exactly sure why. I'm pretty sure it's the floating point. It's the floating-point stuff, yeah. So for this particular case, the CRuby JIT does not appear to help right now. But again, these are early days for that JIT. I wouldn't take this as how it's going to be in the future.
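As a stand-in for the slide (the actual benchmark code isn't reproduced here), the inner loop of a Mandelbrot generator looks roughly like this — pure float math in a tight loop, boxing a Float object per operation on JRuby unless the JIT can eliminate them:

```ruby
# Count iterations until the point (cr, ci) escapes the Mandelbrot set.
# Every zr/zi update here creates new Float objects on JRuby, which is
# exactly the garbage the JVM's escape analysis tries to sweep away.
def mandelbrot_iterations(cr, ci, limit = 50)
  zr = zi = 0.0
  limit.times do |i|
    zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
    return i if zr * zr + zi * zi > 4.0
  end
  limit
end

puts mandelbrot_iterations(0.0, 0.0)   # inside the set: runs to the limit, 50
puts mandelbrot_iterations(2.0, 2.0)   # escapes on the first iteration, 0
```

A full generator just runs this over a grid of points, which is why the benchmark is dominated by float allocation and arithmetic.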
JRuby comes in comfortably faster than CRuby just with the standard mode of execution. But then of course invokedynamic really helps us. This gives us a big boost, a two or three times improvement over CRuby. But we can go further than this. I mentioned that we are very dependent on the JVM and especially its JITs. And there's a new JVM just-in-time compiler called Graal. It's written entirely in Java. It's been moving forward very quickly. But more important to us is that it has some advanced optimizations that the standard JVM JIT, the HotSpot JIT, doesn't normally have. This is available in standard JVMs. You can flip the flags on. If you're interested in trying it out, let us know. We have a wiki page we can point you to. But let's see how this changes things. So here we're just looking at JRuby. Here's our original JRuby result, about three seconds for this benchmark. Here is JRuby running invokedynamic. And then, because Graal has certain optimizations that can wipe away all of those extra objects that are unnecessary, we get numbers like this. So now we're getting into the 20-times-CRuby range for performance. Your mileage may vary. Your mileage may vary. And again, this is a small benchmark. It's self-contained in one method, very math-heavy. But on that sort of application, that sort of Ruby code, we are doing very well with the newer JVM JIT compilers. So another area where we've been working on improving JRuby is reducing memory: reducing memory size, reducing the indirection of objects. And that means trying to find better ways to dynamically shape the objects that we create. Ruby instance variables are dynamic. They come in as they're assigned, and we need to have space allocated to put them in the object. It really looks like a glorified hash, right? It's just a name, you put some value in it, you can put another name and another value in it, and it always just takes on those values, no problem.
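That "glorified hash" behavior is easy to see from Ruby itself — instance variables pop into existence on first assignment (the class and variable names here are made up for illustration):

```ruby
class Point; end

p1 = Point.new
p1.instance_variable_set(:@x, 1)   # @x exists only from this point on
p1.instance_variable_set(:@y, 2)

puts p1.instance_variables.inspect    # [:@x, :@y]
puts p1.instance_variable_get(:@x)    # 1
# JRuby's trick is to notice which variables show up by first allocation
# and generate an object shape with real JVM fields for them, instead of
# an indirect name -> value table.
```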
But if we were to actually implement this as a hash, of course it would be tremendously inefficient to have a full hash table for every object floating around the system. Plus, accessing a hash is considerably slower than just going after a native field. So we actually do optimizations that turn Ruby instance variables into JVM object fields. And then whenever you're accessing one of those instance variables, rather than going through the lookup process of accessing a hash, you can go straight to a memory location and get that value out. This also reduces the memory overhead, which I'll show in a little bit. We're doing something similar for arrays. Objects now will all specialize to however many instance variables are seen at first allocation time. For arrays, we're doing this by hand, just for one- and two-element arrays. We just have a custom version of array that has one field or two fields, which reduces the overhead of having a separate array object. And this is actually working out pretty well. We want to hook these two things together, though. Obviously they're basically doing the same thing. They're creating a specific-size object with specific fields for all of these values. We should be able to use a single code generator to do all of this optimization and make sure arrays and hashes and objects and structs are all nicely packed into memory. So how much does this help? This is a single-variable object, so you've assigned one instance variable. This is the memory reduction. On the left we have pre-9.2 JRuby, about JRuby 9.1.17, before we did this work. And you can see that the object itself is about 400 bits of memory, and then there's this object array, which is another 320 bits. Only 64 of those bits are actually the reference to the one object that it's holding. The rest is header information, array size, JVM control flags, and all that kind of stuff.
So that's what we're able to get rid of. We move that reference into the object, get rid of this intermediate array, and we have a significant reduction in memory. On a Rails benchmark that we're gonna show a little bit later, a select benchmark, pretty much all the objects in the system are actually getting specialized to these packed versions. So you can see each of these is generated. It just says, like, RubyObject25: somewhere in the system there are a whole bunch of objects with 25 instance variables. But it is right-sizing them, putting the fields in there directly, and saving that extra access. Like with the objects, here's the array version. So here's a one-element array. We get rid of that extra boxing of the array stuff. We move it into the object itself, and reduce the memory for every array. And you can imagine that in Ruby there are tons and tons of single- and double-element arrays. To the point that, just with this simple by-hand optimization, we're basically catching almost half of the arrays that are created during that same Rails benchmark. We could probably bump this up a little bit, but beyond like one, two, three elements, usually you're gonna be changing the size of it, you're gonna be adding things to it, it's gonna be a lot more mutable. If we keep these small ones hand-optimized, it covers about half the arrays out there, and we've reduced that memory load. All right, something a little bit more practical: JRuby on Rails performance. We're gonna look at two aspects of JRuby on Rails performance. Active Record, which obviously is the key thing for performance of most Rails apps. Rails apps live and die by Active Record. If you're using Active Record, it's probably the main performance problem in your application, the main consumer of CPU. It's very heavy on the CPU, it creates a lot of objects, there's a lot of GC overhead. We're gonna show just a few benchmarks: create, read, and update.
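The harness behind numbers like these is nothing exotic — time N operations, report ops/sec. A runnable sketch with the stdlib (the Record struct is a stand-in for a real Active Record model, so by itself this measures nothing interesting):

```ruby
require 'benchmark'

Record = Struct.new(:name)   # stand-in for an Active Record model

N = 10_000
elapsed = Benchmark.realtime do
  N.times { |i| Record.new("row-#{i}") }   # stands in for Record.create(...)
end

puts "#{(N / elapsed).round} ops/sec"
```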
If delete is your bottleneck, you can let us know, but these benchmarks, I think, are pretty representative of our Active Record performance. All right, so here's create. This is the one area where JRuby seems to be only a little faster. It's probably due to fewer objects being created during the construction process of creating those initial records. But after this, it all starts to look quite a bit better. So, this is higher-is-better, in operations per second. Here, JRuby with invokedynamic is comfortably faster than CRuby. I mentioned Graal, and we said your results might vary. This is one of those examples. It's very cool, very new technology, very interesting performance, but on a lot of the things we run, it's still not quite there with the standard JVM JITs. So we're still playing with that. But in both cases, we're comfortably faster running Active Record operations compared to CRuby. Same thing with selects: again, comfortably faster, invokedynamic really giving us a big boost here. Finds, also faster, looking pretty good. Graal helps here; for whatever reason, it was a little bit faster. That drops back down for update. Update is interesting because so few objects are being created that our performance really starts to shine. It doesn't become a memory bottleneck so much as a question of how fast we can push stuff into the database. And here's where JRuby really starts to look good against CRuby. So if you can get Active Record to run well, then the next problem is just scaling out the entire application. This is really a classic, difficult problem on CRuby. In fact, whole companies have been founded just to handle this scaling problem, to help you scale out Rails applications. So on MRI we don't have real parallel threads. The only way you can really get two concurrent requests running on the processor is to use two separate processes. They have lots of nice tricks.
There are some cool features, like the copy-on-write stuff that CRuby does, so all these forked processes can share more. But inevitably, having multiple processes, you're duplicating some amount of runtime state somewhere. The application loads data in late. The application makes changes to what's in memory, and you've now got two copies of it that you have to deal with. So this is a lot of effort, maintaining all these instances. This is a lot of resource waste, too. Each one of these VMs has to have its own garbage collector, has to do its own optimizing. It does it all on its own, and you don't get the benefits of one shared heap, one shared memory space. This is where JRuby really starts to shine. We can have a single process with multiple threads run the entire site. You can get rid of all of those processes and probably run faster on an individual-request basis too. So how are we measuring this? This is not a terribly scientific benchmark. It was just done on an EC2 instance. I ran the benchmark and the application and the database all on the same system. But it was a fairly large instance. It was a c4.xlarge, so that's the equivalent of four CPUs or so. I didn't use anywhere near that instance's amount of memory for this benchmark, but it was there. So we warmed up for a little while. I will show what the warmup curve looks like so you can kind of get an idea. JRuby takes a little while to get going. And then we're measuring every 10,000 requests. So after we let this warm up for a little while, here's where we come out for a full-stack scaffolded application on JRuby. Again, about 30% faster than CRuby here, and we'd expect this to vary depending on what your load is. If you've got a lot more Ruby code, or some really CPU-intensive calculation in parts of your application, this will increase. Some of the overhead here is obviously just going back and forth to the database. But we're pretty happy with this result.
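In deployment terms, the single-process model usually means a threaded server config rather than forked workers. A hypothetical Puma sketch (thread counts invented; tune them to your app):

```ruby
# config/puma.rb (sketch)
threads 16, 16   # min, max threads in the one process
workers 0        # no forked workers -- JRuby doesn't support fork
```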
Full disclosure: this is what the warmup curve looks like these days for a JRuby application. So you can see CRuby and the JIT don't really have a lot of warmup at this point. The JIT, when it does kick in, kicks in really early, so you don't see much of a change over time like this. But it takes us 20 to 30K requests on this particular app to really get up to full speed. So folks will sometimes have warmup scripts; when they're deploying to production, they'll hit the app a few times, get everything booted and cached. That's not too unusual for large applications. And I did not get to the graph for this, so I will do that later. You can imagine a graph here. So again, in the effort to fully disclose things: JRuby does use more memory. The JVM, because it has a lot more internals, a generational garbage collector, a lot of its own memory space, is gonna be 400 to 500 megabytes for a single Rails instance, depending on how big it is. You can choke that down to like 200 or 250 if you want, but the GC will run better with the extra space. So obviously, if your site is a single CRuby instance, you're probably not gonna see a whole lot of gain moving to JRuby; you probably don't need JRuby. But when you get to the point of having 10 CRuby processes, or 20 CRuby processes, now it starts to make a whole lot of sense. Memory on any cloud environment, any shared environment, is extremely expensive these days. That's what you're renting, essentially. And 10 JRuby threads can do everything that 10 CRuby processes can, probably more, with about the same amount of memory use. And if it goes up from there, the site continues to scale, you can do the math yourself and see that JRuby really will end up using far less memory than a large CRuby Rails application. And we're kind of excited about this. This is probably the first year we've really been able to say this: in every way that we've been able to measure, JRuby is the fastest way to run Rails applications.
And if the numbers that we have here hold up: 20 to 30% faster in a lot of cases, and database access and CPU-intensive code can be maybe five to ten times faster. So we're really excited that JRuby is the fastest way to run Rails applications these days. All right, so we're gonna shift gears. We're gonna talk about something that we're working on right now, which is method inlining. But before I talk about that, I thought I would describe what method inlining is. So in this little code snippet on the lower left, calculate_cost, we're calling this add method in a couple of places. And we realize that the add method it's calling is always exactly the same. So it becomes a candidate for inlining. When you inline a method, you just take the body of that method, replace the call with it, and substitute the parameters. So it just becomes some simple math here; the body's right there. So one of the obvious benefits of method inlining is that it eliminates the overhead of making the call in the first place. You're not deepening a stack, pushing values, and so forth. But the less obvious and probably more important benefit is that once you bring that body back into the method that inlined it, you get additional information that you can do further optimizations with, because you've learned more. So here is the post-inlined version from the example. At this point, we might look at it and go, why do we have any variables? So then you just have a bunch of primitive math, and you're like, well, why am I doing all that math? Then you have a single value, and this can just keep going on. That could get inlined into something else, and you can dramatically reduce how much work you're doing. So method inlining is very important for performance. And in fact, Java is fantastic at inlining methods. This is probably the biggest reason why the Oj port is so much faster: it all inlines based on how it's actually used and gets optimized. But there's always an asterisk.
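Reconstructing the slide's example in runnable form (method names per the slide, bodies guessed), with a second method showing what the compiler effectively sees once the inline and the follow-up cleanups happen:

```ruby
def add(a, b)
  a + b
end

def calculate_cost(price, tax, fee)
  add(add(price, tax), fee)   # two call sites, always the same `add`
end

# What the optimizer effectively produces once `add` is inlined and the
# temporaries are folded away:
def calculate_cost_inlined(price, tax, fee)
  price + tax + fee
end

puts calculate_cost(100, 8, 2)           # 110
puts calculate_cost_inlined(100, 8, 2)   # 110
```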
The one case where it doesn't help us is when we make a method call and pass a block; that case does not inline for us. So this is a tough slide. On the left side, there are a couple of places where we're calling times. This ends up being hot code. It ends up being two candidates where we wanna inline. So if you imagine the substitution example here, you might take something and replace the yield there, and then you would inline that back to that first call. But that doesn't actually happen. What happens is, because there are two blocks being passed into the times method, it sees that that yield is interacting with two types, two different versions. And it just says, nope, not gonna do it. So we've had this problem for the entirety of JRuby. It's interesting to note that the JVM itself has this problem. JVM lambdas, which are their blocks, actually will not inline through these cases either. So we're fixing stuff the JVM doesn't do for us. If only they would have supported blocks to begin with. So JRuby has its own method inliner. We just treat the method call and the block as one unit. At the call site, the place where the method is invoked, a call site knows how to invoke the method for you. It also keeps track of some information. Is it always the same type? Is it always the same method? Has it been called enough? And if it has, then we inline it. And so, as I said, we treat both the method and the block as a single unit. The process is very simple on a slide and very difficult in practice. You duplicate that original method, then you inline the block into that copied method, and then you inline that back to the call site. There's only one limitation: both the thing you're gonna inline and the thing you're inlining into have to be written in Ruby. So here's a contrived example, but as weird as it is, it represents a fairly common case. So foo is calling inline_me a lot, and it's passing a block, and in that method, it's yielding a lot.
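A contrived-but-runnable version of that shape (names and counts invented, in the spirit of the slide): a method that yields in a hot loop, invoked with a block.

```ruby
# The method-plus-block pair that JRuby's inliner treats as one unit.
def inline_me(count)
  total = 0
  i = 0
  while i < count        # while instead of .times keeps extra blocks out
    total += yield(i)
    i += 1
  end
  total
end

def foo
  inline_me(1_000) { |i| i * 2 }
end

puts foo   # 999000
```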
The while loops are just there to avoid having lots of extra blocks, because we need to simplify things when we implement this stuff. But if we actually look at the difference with the inliner, we're almost five times faster in this case. But again, it's contrived, which is unfortunate, because almost all of our core methods are implemented in Java, so we can't inline those. And of course, we're happy having Java implementations of most of our methods, because they start up very fast and they usually have pretty good performance, but they block us in this particular case. If only there was a way around it. Well, in fact, there is. Let's say, for example, that we wanted to inline that Integer#times method. We go and look up internally to see whether we have a Ruby implementation. If we do, then we use that version of times to go and perform the inline. Excellent. I might not explain this slide fully due to time, but we decide to inline times, there's our special Ruby times implementation, we perform the inline, and that version's on the right. And when we run this version, it's not as good, but it's still excellent. And it's not contrived, in the sense that it's a real core method that you might commonly use. It even gets better. If we take the specialized knowledge we know at the call site, like if we were calling 5.times, we could maybe use a very specialized Ruby implementation that doesn't have any looping in it at all. It's an unrolled loop, and we can get rid of bounds checking. So there's a lot of really, really fun potential here. And you can try it today. You can just pass the -X inliner flag and see if it works. I've been running into fewer and fewer bugs with larger and larger apps, but there are tons of constraints. So give it a try. And if you do get a failure and it's not a big program, I'd love to see those cases. The logging output is just massive. All right, so we'll wrap up here.
Of course, if you're using a Ruby installer like rbenv or RVM or whatever, you can just install JRuby; it'll pull the latest version and you'll have access to it. Otherwise, you can go to the JRuby site. We've got tarballs, we've got installers for Windows. We try to always keep our Windows users happy. JRuby runs Rails and everything else very well on Windows, so if you have that as a deployment environment, definitely take a look at JRuby. If you pull down a current JVM or current JDK, it's probably gonna be a Java 9 or higher JVM. There's some stricter encapsulation there. So some of our tricks to convince the JVM that we need access to real native file descriptors and real native process IDs and whatnot, those things frustrate the JVM a bit. So you will see these warnings about us digging around inside JVM internals. We're slowly getting rid of these and dealing with them in the right way, but if you see them, don't worry about it. It doesn't affect anything in the application. We wanna thank all of our users around the world. This is a small sampling. A couple of years ago, we just tweeted out a request for anybody using JRuby to send us a company logo, and within a couple of days, we got this many. Very exciting, and a lot of these are really fun ones too. Like up there, NASA. I'm not sure if they're actually associated with the Allen Telescope Array out here in California, but that's one of my favorite use cases of JRuby: the Allen Telescope Array, a radio telescope array in Northern California used in the search for extraterrestrial life. Of course, all the little bits and pieces are run by C++ or whatever, but the whole thing is orchestrated by a JRuby application. So JRuby is actually helping to search for extraterrestrial life. That's pretty cool.
I'm not sure if we have them up here, but one of the scariest use cases of JRuby: the Oslo Airport. A friend of ours in Norway wrote a terminal application, a GUI application, that is used for refueling all of the planes that come and go from Oslo International Airport. That is a JRuby application. So every time I fly into Oslo, I'm like, oh God, please, I hope this thing works. But that's the kind of terrifying thing that's out there. So that's about all we have. Thanks very much. Here's our contact information. And I think we are ready for questions. We've got about five, six minutes or so. Thank you.