Okay, Tracy. Hi, everybody. Welcome to our Jenkins online meetup today on what Eclipse OpenJ9 can do for Jenkins. I'm Tracy Miranda. I really love open source, and the best thing is when you get two different open source projects together and see what magic can come out of that. Before we get into the presentation and our guest speaker for today, I've got a short agenda. I'm going to go through a couple of announcements while we give people a chance to join in, then we'll go into a presentation on Eclipse OpenJ9 by Steve Poole from the OpenJ9 team, and following on from that we'll have a Q&A and a bit of an open discussion; I'll talk more about that format. So, a few announcements first, and in particular the events we have coming up where you can see more of the Jenkins community in person and learn more about Jenkins. First up, we've got the DevOps World | Jenkins World conferences, one in San Francisco in September and one in October in Nice, France, the first one in Europe, especially for the open source community. We've got a discount code, or if you want to try winning a free pass, there's a contest that closes today, Thursday. So that's two ways you can join us at that conference. Before those conferences we also have contributor summits, and just to be clear, these summits are free to attend. They're a big gathering of various Jenkins contributors, new and old, so we'd love to welcome anyone who's thinking about becoming a contributor. And by contributors we don't just mean code contributors: we'd love to welcome any folks who get involved in running meetups or in documentation, or who feel part of the Jenkins community in any way, to come along to this. You'll see a link to the meetup at the bottom, which shows you where you can go register. The events are free, but you do have to register
so we know you're coming. One other event worth mentioning is the Day of Jenkins as Code, which is in October in Copenhagen, just before Jenkins World in Nice, so I know some folks in the community are going to go along to both of these back to back. If you can join us at one of these events, that would be really great. Now, the next set of announcements I wanted to highlight is that in the Jenkins community we now have special interest groups, areas where folks can get together around specific topics. So far we've got about four groups running: Chinese Localization, the Cloud Native SIG, a Google Summer of Code SIG, and the Platform SIG. You can check these out on the SIGs page on jenkins.io. In particular, I wanted to highlight the Platform SIG, because this is the special interest group we're going to use as a venue for all kinds of platform support discussions. That could involve versions of Java, operating systems, Docker packaging, web containers, all elements of that. And actually, the meeting we're having today falls under that category, so while it's a Jenkins online meetup, we're going to run half of it in the style of a Platform SIG. What this means is that you can join us in the Gitter chat if you want to ask questions or get involved in some of the discussions. We also have a live document where we'll be capturing questions and anything that gets discussed, so if one of my colleagues could go ahead and stick that in the Gitter chat, then folks can comment on the document, see the progress there, or refer to it later for some of the discussions. In general, the Platform SIG meets monthly, and we'd love to welcome anybody who's interested in driving the future direction of Jenkins in relation to the platforms it runs on, which brings me nicely on to our talk for today.
So I'm going to stop sharing my screen and we're going to welcome Steve Poole for the main presentation now. Right, excellent. Okay, so can you hear me and see my screen? Just watching the light, yeah? Yep, that looks good. Okay, well, thank you for the invite. So I'm going to take you through, quickly, 20 minutes or so if I can keep to time, OpenJ9: what it is, a little bit about where it comes from, and obviously the big benefits that we see, and then we can see how that fits in with Jenkins. Jenkins is something I've had a soft spot for for a very long time. It's probably my favorite bit of open source Java code, because it's just something you can use all the time, so it's really cool to be looking at joining these things together. I work for IBM; I'm a developer advocate, which means I go out and talk about Java. I've been doing Java and JVMs for a very long time, and DevOps and things like that, so I have a big background in the JVM and Java space. One of the things we do a lot when we're building this stuff is use Jenkins and other build tools to create our own builds, and the AdoptOpenJDK team, where OpenJ9 and HotSpot builds can be downloaded from, uses Jenkins as well. So it's quite a core relationship here. The first thing is, if you want OpenJ9, you can go get it from this website; I'll talk about the website in a second. OpenJ9 was contributed by IBM late last year, as you can see, under dual licenses, which are important: you get choice. And if you want to participate and contribute to it, there are various links there where you can go off and join in. This particular talk is focused around cloud, because that's a big driving factor. All the things I talk about, which are about cloud performance, aren't meant to suggest in any way that OpenJ9 is not fit for other activities; cloud just happens to be the thing that's most relevant to us nowadays.
So I'm going to talk through it like this. You know, as a Java developer, the background of Java performance prior to cloud: we ended up driving Java to a particular performance shape, and you've probably seen lots of benchmarks that look very similar to these sorts of shapes. You end up with this traditional performance profile where, as you can see, it takes time to reach maximum throughput, and throughput and memory use are sort of interchangeable because of the tight relationship between them. On many of these occasions you end up with this lag before things get going and a big hump of heap use at the top that eventually disappears; that's the traditional profile. But with cloud we've now got new things coming along. Cloud defines new profiles. Compute on tap, which is the thing we were all looking for, has finally arrived, we're now making full use of it, and it has a significant impact on the shape of the applications we want to run. One of the reasons we have a big challenge is that compute on tap has given us a better understanding of the relationship between compute and money. Whereas before the relationship was not completely obvious, now with cloud, where you buy things by the amount of RAM or the amount of bandwidth or whatever, it becomes really obvious that when you want more compute power, you have to spend money. That relationship between compute and money is now staring everyone in the face, and obviously our accountants are interested in making sure we get value for money. So from a developer's point of view, the equation looks like this. The big figure that we talk about on Amazon or IBM Cloud or wherever is gigabytes per hour; that's what you tend to buy, memory. So from a developer's point of view, you can see that it actually translates to this.
So if you're a Java developer and you've set some heap size, it costs some amount of money, and from now on, if you increase the heap size for whatever reason, somebody's going to say, hey, that's going to cost me another dollar or another $10 or whatever. So people are scrutinizing that. Right, so let's take this traditional profile and change it slightly, because this is what benchmarks look like. Real life looks like this: over time you have some peak of demand which you can't really predict, and the real question is how you get your application to fit into that space. The obvious thing, if you were running this on premise, is you just go buy a big server that deals with the largest amount of demand and you're done; you buy one and it solves everything. If you move that type of application to the cloud and buy the equivalent amount of cloud resource to the machine you have in your data center, then suddenly you realize that for a lot of the time, when this service isn't under peak load, you're still paying for compute power, and your accountants will be looking at it and asking what you can do to reduce your cost, because you're wasting money. So obviously what we've done is look at it and ask: how do you fit the line better? Well, you have smaller compute units that live shorter lives to map the curve, and nowadays it looks quite like that. And to be honest, just from this picture you can see the economic pressure that drives why we're doing things like microservices: to give us the scaling to fit that demand curve. So that's cool. It still doesn't quite come down to what you want the JVM to do. The demands on the JVM from these sorts of environments are a smaller memory footprint, because memory costs money, and smaller deployment sizes, because I've got to shove things up to the cloud on a repeating basis. We know all about those.
I need my application, my microservice, to start really quickly, because if it doesn't start really quickly then I'm not dealing with the workload. And then, by the way, when my application is idle, really idle, I don't want to be spending any money, and I want to be doing what I can to reduce my costs. So let me put that differently with a different picture; let's go back to that original curve. If you've done any sort of profiling, you know that you get these sorts of profiles. Even if we break the application into little microservices, you end up with a shape, even for the microservice, that looks like this, where it takes time to reach maximum throughput and there's a bump. From a cloud point of view that's a big challenge for us now, because the longer it takes to get to maximum throughput, the more it costs you money, because some other service is picking up the workload that you started this one to handle. And if you go up to some peak and then come down again and never go back up to that peak, you've just spent money on something you're not going to use again. In circumstances where you're at the edge of, say, a compute tier, you might end up being pushed into a bigger tier just to get the thing started, and after that you don't use that tier. So that's a challenge. What we're really looking for is a throughput line that looks like that: very quick to start, without peaks and troughs in terms of the VM or the application getting started (the workload is a different thing). This means that from a cloud point of view, you end up with all of these obvious changes. We know that memory is going to cost money, so we've got to reduce memory costs. You're going to be running in smaller RAM sizes, because your accountant isn't going to pay for the bigger ones, and so you're going to be thinking much more about where memory is being used.
And of course, you also know from a microservice point of view that it multiplies really quickly, because we're not talking about one JVM, we're talking loads. If you're running in the cloud or in a Kubernetes cluster, you have the same basic economics, and whatever money you spend, somebody will be looking at you and asking, can you make it cheaper? So all of those were drivers we were aware of as users: we're trying to adapt to cloud, and we know we need to do those sorts of things. But there are more drivers, and those come from what the cloud provider and the cluster provider need, and that might be more relevant to those of us who are running Jenkins servers. The physical change is that though we imagine cloud being lots of small bits of hardware, it's actually going the other way: hardware quality and price points are turning us into a situation where people buy large machines and divide them up into small units. So it's not lots of small machines, it's big machines with lots of memory and lots of CPUs. That gives us a challenge, because when you take a traditional deployment, like the blue boxes on the left, there were lots of natural sharing opportunities between those applications, just traditional Java sharing, whether it's running two apps on the same JVM or two applications on the same app server. That works really well for people who have a big machine, but unfortunately, from a software design point of view, we're going the opposite way: we're now going containerized, and we're going for separate microservices, where there are no natural sharing opportunities in that architecture. So the providers are trying to find ways to solve this problem as well.
How do I reduce my memory usage? Because that saves me on physical hardware. How do I get the machine so that when an application is not busy, whether in CPU or memory, I can reduce its usage, not so the application will notice, but so the provider can free up those physical resources and use them for something else? Obviously, if you can do that, you can get more workload out of the same box. And, as it says at number three, we need to figure out how to get these things to start really quickly, because there are optimizations there too. So that's where we get to OpenJ9. As I said, OpenJ9 is not just a cloud-based VM, it's a long-running VM, and I'll talk about that. It was open sourced in September last year, when IBM contributed it to Eclipse, because we think there are lots of things that need to change in the Java space, and there's no good in us trying to beat people up about it: we need to be contributing and sharing what we've got. We have some cool tech, we want to share it with people so they can make use of it, and we want to start building communities around questions like: where do we need Java in the cloud to go? What do we need to have happen in the VM space? The best way to drive those things is to offer up what we have already and demonstrate that we can actually provide some of these capabilities. So, OpenJ9 started on a small phone. As you may remember these devices, that's partly where it was designed to start from. And that really makes a difference, because we've gone from something that small, which has a set of operational requirements. If you've ever had Java running on your phone, you will know that you don't have much memory, and you want your games and things to start up really fast, because you're not going to wait for things to start.
You're just going to go use a different game or whatever, and you don't want your game to get better as it runs; you don't want to be in a situation where the graphics get better eventually. You need immediate maximum throughput. Those characteristics are quite similar to what we just said about Java in the cloud: the footprint size, the startup time, and the ramp-up are all very similar. So when we look at OpenJ9, we can say it's had those characteristics built into it from the beginning. And though it may have started on one of these small devices, IBM has taken it and run it on the largest machines you can get: large mainframes, 32 terabytes of RAM, loads of CPUs. So this architecture and this code run from small to big. It's got its own specific garbage collectors; they're not the same as HotSpot's, but they're very compatible, and we even have a soft real-time one. And we have caching and sharing technologies, which is going to be the meat of this. So let me talk about how it works. The big thing in terms of sharing is OpenJ9's shared classes technology. In that diagram, the yellow arrows show where we used to have sharing capabilities. Of course, when you move into clustering and into Docker containers, those sharing opportunities disappear, but with shared classes we can add that capability back and allow you to share state, share classes, and share constant data between applications across all the spaces: not just container to container, but also between containers and VMs running on the hardware, and so on. It's really easy to get going; it's just one option. The second option on the screen sets the size of the cache, but the first one is all you need. You just turn it on.
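The one-option setup Steve describes can be sketched as a launch command. This is a hedged illustration, not the configuration from his slides: `-Xshareclasses` is the real OpenJ9 switch, while the cache name, directory, and `-Xscmx` size here are made-up example values.

```shell
# Hedged sketch: starting Jenkins on an OpenJ9 JVM with class sharing on.
# -Xshareclasses enables the shared class cache; the name= and cacheDir=
# sub-option values are illustrative, and -Xscmx caps the cache size.
SHARE_OPTS="-Xshareclasses:name=jenkins,cacheDir=/var/tmp/j9cache -Xscmx100m"

# Print the command rather than launching a real JVM here.
echo "java $SHARE_OPTS -jar jenkins.war"
```

A second JVM started with the same cache name and directory attaches to the existing cache, which is where the footprint and startup wins come from.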
And the reason it works is because of the way we deal with class files; obviously sharing is about what you have in common. What happens is we take the class file when we're loading up classes. From a VM designer's point of view, the class file isn't optimal for what we're trying to do, so as part of loading it gets broken down into two parts. It gets broken down into a ROM class, where we put all the stuff in your class file that can be shared, the static stuff and the things we can figure out you're not going to mutate, and then everything else, the stateful stuff, goes in a separate class. So now we have a real separation built in between stateful and non-stateful. That means when you've got situations like this, where you've got three JVMs running on the same machine, and maybe one's running in a Docker container, you can see that you've got the two types of class now: the ROM classes and the RAM classes. In this environment, you can share all the ROM stuff, because it's non-stateful and it's accessible by all the VMs. That immediately reduces your footprint, and it gives you a faster startup time, because a class you want to load in JVM one may already have been loaded and cached by JVM two. And as I said here, you can share the ROM classes across the boundaries of a container or a VM: if you can get access to the file system in a common way, then you can share this data. Just sharing the ROM classes gives us a 20% footprint reduction. And startup, well, the startup improvement obviously varies depending on your application, but it definitely improves, because simply loading these pre-processed classes makes a big difference. And we can share the JIT code as well, so we can provide this dynamic AOT capability.
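Since the cache crosses container boundaries whenever the file system is reachable "in a common way," one hedged way to wire that up is to mount a single host directory as the cache directory for every container. The image names (`svc-a`, `svc-b`) and paths below are hypothetical, purely for illustration.

```shell
# Sketch: two containers sharing one OpenJ9 class cache through a common
# host directory. svc-a/svc-b, app.jar, and the paths are hypothetical.
CACHE_DIR=/opt/j9cache
for SVC in svc-a svc-b; do
  # Each container mounts the same host directory and points cacheDir at it,
  # so ROM classes cached by one service are visible to the other.
  echo "docker run -v $CACHE_DIR:$CACHE_DIR $SVC java -Xshareclasses:cacheDir=$CACHE_DIR,name=shared -jar app.jar"
done
```

Only the commands are printed here; in practice the second container would attach to the cache the first one populated.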
So once the JIT has kicked in, compiled the code, and produced the optimized JIT code, that can be cached too. The next time you start up a VM, that code can be immediately loaded, so you don't even have to re-JIT, and that gives you immediate startup performance. Now, there's some wordage here about the differences in how you run AOT, but effectively, if you do it right, you get significant improvements in load times, because we're loading up code that's already been optimized and has been structured to fit straight into a JVM, so there's none of the usual heavy lifting that takes place when you're doing a JIT. A couple more pages on options: there's a whole bunch of things you can do to tune OpenJ9 to get the throughput you want; I won't go through them in detail. And then something else, which again is useful if you're a provider; sorry for all the words on the page. When your application goes idle, the JVM, if it knows that's happening, tends to just stop, because there's nothing to be done. In general that means there's no GC, and your application just stops, so nothing happens. That means that if you've got garbage sitting in memory, it doesn't get collected, and that's a shame, because if you collected it, you could in theory give that memory back to the OS. There are capabilities in both HotSpot and OpenJ9 to do things like this. With HotSpot, the developer has to figure out when the application is idle and do the heavy lifting; with J9, you turn on the right option and we figure it out for you, with a whole bunch of heuristics to work out when your application is idle. Okay, I missed the page, so just finishing up on that.
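The "turn on the right option" idle behavior maps onto a pair of OpenJ9 `-XX` flags. A hedged sketch follows; the flags are real OpenJ9 options, while the service jar name is illustrative.

```shell
# Sketch: OpenJ9 idle tuning. -XX:+IdleTuningGcOnIdle asks the VM to run a
# GC when its heuristics decide the application is idle, and
# -XX:+IdleTuningCompactOnIdle additionally compacts the heap so that freed
# pages can be handed back to the OS.
IDLE_OPTS="-XX:+IdleTuningGcOnIdle -XX:+IdleTuningCompactOnIdle"

# Print the command rather than launching a real JVM here.
echo "java $IDLE_OPTS -jar service.jar"
```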
So if we can figure out when you're idle, then we can do a GC, clear up garbage, and give the memory back. Then, at the point where your application becomes non-idle again, you've already been GC'd, so you can kick straight on and not have to worry about running a GC as soon as the workload picks up. So here are the results. First, startup time: we run a whole bunch of different benchmarks and tools just to see what we get, and it's roughly 30% faster startup if you press the right buttons. Footprint is dramatically reduced, and you just have to turn on shared classes and away you go; it really is fantastic. You get an improvement just by using OpenJ9, but turn on shared classes and quickstart and it's just great. Then there are these two charts, which are supposed to explain the performance benefits from a real idle point of view. The top one is with no idle detection turned on: the blue is process memory and the Java heap, and you can see on the bottom of the chart some active/idle states. Every time you're idle, you have an opportunity to give back some of the process memory. In the chart below, you can actually see that happening with it turned on: you can see the little blue cutouts, and that's where J9 has figured out that things are idle and can actually do a GC and give back some of the memory. As I said, the benefit from that, of course, is that you've had a GC happen ready for when you actually go non-idle. Right, so going back to this "more like this, please" diagram, here's some real data. This is OpenJDK 9 with HotSpot, from one particular benchmark, but you'll get similar shapes whatever benchmarks you run. You can see it took OpenJDK with HotSpot some time to reach maximum throughput. Here's OpenJDK 9 with OpenJ9.
You can see no humps, no bumps, and significantly faster startup. All of the gap between the red line and the blue line is new work that has happened: everything below the red line is work that your HotSpot application is doing, and you can imagine that work actually getting done quicker with J9 because you started up faster. And if you really want to go fast, if you turn on the AOT option in a sort of no-optimization mode, which basically means no more JIT, you just use what was AOT-compiled the first time, you get very fast startup, but you tend to pay for it by not getting the full throughput. Okay, so coming to the end, the point we want to make is: this JVM does all these things. We've been able to contribute it to Eclipse because we want to use it as the base for discussions about the future of Java, and if we don't put our stuff out and have those conversations around our technologies, then it looks peculiar. IBM has been contributing code to open source for a very long time, so this is the right thing for us to do: get the technology out for people to try. It's a technology that we've been using for a long time. You pick the finance companies, the insurance companies, the banks: if they're an IBM customer, then they've been using OpenJ9 for years to run their business. So were we going to say it's a new animal? Okay, it's not a new animal, because it's been around for a bit, but it's the new kid on the block, and we think it's got some cool stuff. It's very easy to use; you can go download it from AdoptOpenJDK, it's that easy. You can go and get an OpenJDK 8 or 9 or 10, maybe even 11, I'd have to check, and you can get it with HotSpot or with OpenJ9 and try it.
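The startup-versus-throughput trade Steve describes maps onto a small set of flags. This is a hedged sketch, treating the exact combination as illustrative rather than the benchmark configuration from the slides: `-Xquickstart` is OpenJ9's real fast-startup mode, paired here with the shared cache so previously AOT-compiled code gets reused.

```shell
# Sketch: favoring startup over peak throughput on OpenJ9.
# -Xshareclasses reuses cached ROM classes and dynamic-AOT code from
# earlier runs; -Xquickstart tells the JIT to compile quickly at a low
# optimization level, which speeds startup at some cost to long-run
# throughput. app.jar is a placeholder name.
FAST_START="-Xshareclasses -Xquickstart"
echo "java $FAST_START -jar app.jar"
```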
These things are all fully compliant and available for download, either as zips, tars, and gzips, or as Docker images, so you can go to Docker Hub and pull down the same thing. That's really it, I think. I think there's one more page. Oh, yeah. I'm touting it because I think it is a really good technology. We've always been very proud of it, and now that we've open sourced it, we're still proud, and other people are trying it. You can see some comments there from people; go Google and you'll see other people saying that they just did the switch and got some really good improvements. That's it, so thank you, and I will stop sharing. All right, thank you very much, Steve. So just a reminder to everybody: if you want to ask questions and you're not on the live hangout, please go to our Gitter channel, which is jenkinsci/platform-sig, and you can ask questions there. I'll open it up to discussion, but I wanted to ask a few things first. In terms of operating systems, which operating systems does OpenJ9 support? Well, basically everything that IBM has supported: all the Linuxes, Windows, AIX, z/OS, Mac, basically everything. Okay, and what about versions of Java? Eight, nine, 10, 11, et cetera? Yeah, it's the full spectrum. Excellent. Okay, we've got a question here from Mark Waite. He says: are the license restrictions on using the OpenJDK J9 project any less restrictive than the Oracle licenses? For example, am I allowed to distribute a local copy of the OpenJDK J9 binaries to internal machines without downloading from the project? Yeah, absolutely. It's Eclipse-licensed, it's Apache-licensed, it's EPL. You download from AdoptOpenJDK, and Adopt doesn't have the same restrictions as Oracle. Okay, great. And another question: are there official Docker packages available for OpenJ9? In the sense that they're on Adopt, yes; we're working with the Adopt team to provide the hosting.
As I said really quickly, the Adopt folks are hosting downloads, so they're building and making available OpenJDK with HotSpot and OpenJDK with OpenJ9 as binary downloads, both the traditional sort and as Docker images. I think I can show that; let me see. So this is AdoptOpenJDK. Can you see that on my screen? That's where you can get... Okay, so we've got AdoptOpenJDK 8, 9, 10, and if I click on the GitHub repos... Yeah, so you want the Docker images; the links are on the page before, but at the top of your page you can see, yeah. Right, okay, so: "looking for Docker images, pull them from our repository on Docker Hub." Great. Okay, so you can find those. And look down here, we've got our Jenkins CI. Great. Okay, so let me see if there are more questions. Yeah, I have a question, maybe about the history of OpenJ9. In the Jenkins project we have a number of issues with IBM Java; there are maybe 10 or 20 issues open for this platform specifically, and most of them are pretty old. I wanted to ask whether you know about any real compatibility issues with the recent versions. So let's be clear: the code that you get from Eclipse, the Eclipse OpenJ9 code, is exactly the same code that we use for any of the IBM binary builds. The objective is that there's only one code base. I believe there's a little tiny bit around the edges for Z that they haven't quite got out the door yet, but the code is exactly the same. So if you have compatibility issues and they're still true with the latest IBM Java, then we're going to have the same issues with OpenJ9. It would be worth revisiting, or let me know and we'll go off and have a look. Yeah, I've pasted the link to the filter query in the chat. There are about 10 open issues, but as I said, most of them are pretty old, so most likely they're obsolete and I'll just close them as old; we do that sometimes. Yeah.
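For the Docker route being shown on screen, a minimal hedged sketch: the tag below follows the AdoptOpenJDK naming convention on Docker Hub at the time of this talk (`adoptopenjdk/openjdk8-openj9`), but check Docker Hub for current image names before relying on it.

```shell
# Sketch: pulling and running an AdoptOpenJDK image that bundles the
# OpenJ9 VM. Verify the tag against Docker Hub; naming has changed over
# time. The commands are printed rather than executed here.
IMAGE="adoptopenjdk/openjdk8-openj9"
echo "docker pull $IMAGE"
echo "docker run --rm $IMAGE java -version"
```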
The most recent issue was actually created two weeks ago, and the question was: does anyone know if Jenkins supports IBM J9 plus TLS version 1.2? Because, yeah, there's a report about that. Yeah, I'll follow that up. This is the first time I've seen this list, and obviously we'd like to make sure that if there are challenges, we're aware of them and fix them. So I will go through them, and if there's anything in here we can address, or we know how to address, we can update the issues, if that's all right. Yeah, that sounds great. Just making sure things have moved on. So I was actually going to say, I gave OpenJ9 a quick spin with Jenkins, nothing more dramatic than just starting it up. For what I was looking at, which was just startup, I couldn't really measure anything, because it was around four or five seconds for both HotSpot and OpenJ9. But I did see what you were talking about in terms of memory differences. Just running OpenJ9 with Jenkins straight off the WAR, at startup, I could see it was using 131 megabytes, and if I did the same thing with HotSpot, default, no options passed in, it would come in at 495 megabytes. I did try tuning a setting to get HotSpot's garbage collection to run more frequently, but the best I could get HotSpot down to was around 250 megs. So yeah, definitely still a factor of two out. That was good to see. Yeah. So, where is Jenkins in terms of support for things like Java 9 and onwards? There's an ongoing project for that. We had a hackathon in June looking at adding support for Java 10 and 11, but this support is available only in preview mode, and not all changes have been integrated into the master branches so far. Because performance-wise, HotSpot versus OpenJ9 on 8 is pretty good.
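A rough sketch of the kind of footprint comparison described here: the 131 MB versus 495 MB figures are resident set sizes, which on Linux you can read with `ps`. The helper below is illustrative; how you locate the Jenkins process ID is up to you.

```shell
# Sketch: measuring the resident set size (RSS) of a process, the figure
# behind the 131 MB vs 495 MB comparison above. With this invocation,
# Linux ps reports RSS in kilobytes.
measure_rss_kb() {
  ps -o rss= -p "$1" | tr -d ' '
}

# Illustrative usage: measure this shell's own footprint. For Jenkins you
# would pass the JVM's PID instead (e.g. from pgrep -f jenkins.war).
measure_rss_kb "$$"
```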
But in general, Java 9 is even better in performance across the board: HotSpot on Java 9 is better than HotSpot on Java 8, and the memory footprint and startup are just better. So it's really worthwhile moving to this new modular JVM structure, whether it's 9, 10, or 11, as soon as you can. Given the timing, we'll likely move straight to Java 11. Yeah. Yeah, Java 11 offers even more features in garbage collectors, so I think we'll be able to benefit from it. But of course, for me, regarding the migration, one of the issues I want to address is Docker packaging, because once Oracle stops shipping official distributions for OpenJDK 8, you may have problems like the ones Mark referenced. So yeah, if we have options, I'm really happy to consider any options on the table. One other thing I was going to say that might have been an issue when I tried it with Jenkins: when I went to do a Ctrl-C, I'm not sure it was responding. I know this is sort of native signals, possibly considered unsupported features, but are you seeing much with people adopting it where they're using features that they perhaps get for free but aren't really part of the Java spec? Well, there are always going to be bits around the edges, and we know it could be that sort of thing. But that particular one, I don't know why that would be anything other than built in. So if it's a repeating problem, then we should have a look at it. Yeah, I'll take a closer look; I was only giving it a quick spin, so unsubstantiated, I'll have to say, at the moment. The obvious place where you're going to see differences is where there's a HotSpot-specific class that OpenJ9 hasn't got. We've been talking about what we should do about that, and it's going to come down to people coming around and asking, can you have this version of the HotSpot class, can you create an alternative? Then we'd have a look at it.
We've been trying to create a list of the edge cases. Now, obviously, because of the history of J9, it's a very solid JVM. So we don't have any problem with it running 99.9% of the world's Java, but there are always going to be edges. And so if you've come across any little edge, then obviously let me know, because we're gonna do what we can to fix it. Yeah, no, that's great. Okay, we have another question on the Gitter Chat. Will OpenJ9 continue to support Java 8 after Oracle stops delivering Java 8? That's a good question. Depends on what you mean by support. If you're talking about paid-for support, then the OpenJ9 guys won't do that, but IBM will. If you're talking technical, then, yeah, I don't know. That's not really an open-source conversation. That's more of an AdoptOpenJDK conversation about what they wanna do with Java 8 and J9. The VM is pretty interchangeable, and we'd like to keep it that way, because it makes things easier to have one binary VM working with both Java 8 and Java 9. So I'm gonna hedge and say probably, but I'm not sure for how long. Okay, yeah, that's good to hear. I'm gonna open it up. Anyone on the call, Oleg, or anyone else who has anything else they want to bring up? Yeah, about experimental packaging. So what we have recently done for Java 10+ support is that we introduced experimental packaging being built on Docker Hub. And for example, if you're interested in having something like that for Jenkins and OpenJ9, it would be possible to establish. Cool, yeah, absolutely. So I mean, it requires finding a place to do that, but generally we have the infrastructure ready for such experimental builds. Yeah, it would be cool. Great. One other question I was going to ask.
So someone had brought up that Jenkins does a lot of dynamic loading, so perhaps things like AOT optimizations might not really be able to take advantage of that. Do you have any feeling on things with lots of dynamic loading? No, as I said, from a VM performance point of view, J9 is just as performant as HotSpot in every way that we can make it. So there's nothing in particular where I go, oh, J9 doesn't work as well as HotSpot in some particular circumstance. So I would be surprised to hear if there was that much of a difference. Basically, as far as I can tell, having used HotSpot and OpenJ9, they're pretty much completely compatible. Okay, that sounds good. So yeah, if we have these experimental images, I think that gives us a lot of scope as well to try some of the settings you mentioned, the -Xshareclasses, -Xquickstart, and just really start to see how much we can tune it and, as I say, take advantage of smaller images running in the cloud. Yeah, I mean, obviously, because it's got an IBM legacy, we have more documentation than you can shake a stick at. All of the command line options, of which there are many, there are lots and lots of options for tuning. So what I'd say is get going, give it a go, and at any point where you go, it's not working the way you expected or not working at all, then just let me know and we'll help, because I don't really have any expectation that you should hit any roadblocks. Yeah, that's great to hear. Yeah. Okay, do we have any other questions, comments? Okay, I'm gonna take that as a no. So yeah, thank you very much, Steve, for coming in and giving us a rundown on OpenJ9. It looks really good. I think we'll be keen to try it. And in particular, get users in the community to go ahead and try it. It sounds like it's good to go, so give it a try and see how it might improve your performance.
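For anyone wanting to experiment with the tuning settings mentioned above, here is a hedged sketch of launching Jenkins on OpenJ9 with those flags. `-Xshareclasses` and `-Xquickstart` are documented OpenJ9 options; the cache name, cache directory, and war path are illustrative assumptions:

```shell
# Sketch only: the flag names are real OpenJ9 options, but the cache
# location and jenkins.war path are assumptions for illustration.
java \
  -Xshareclasses:name=jenkins,cacheDir=/var/jenkins_home/.scc \
  -Xquickstart \
  -jar /usr/share/jenkins/jenkins.war
```

The shared-classes cache persists JIT and class data between runs, which is where the startup and footprint gains discussed earlier would come from; the cache directory needs to survive container restarts (e.g. on a mounted volume) for that to help in Docker.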
And yeah, it sounds like going forward, there's lots of scope for our two communities to work together and just make things work even smoother. Yeah, it sounds really good. If anybody has any challenges, issues, or just questions, then reach out to me on Twitter or any other way you can find me, LinkedIn, et cetera. I will be more than happy to help and answer questions. So thank you very much for hosting. It was great. Great, thank you. Thanks, everybody, for joining, and we'll be signing out now. Goodbye. Bye-bye.