Okay, the state of OpenJDK. The biggest change, as I think many of us in this room know by now, but I'll just review it quickly: the biggest change in this community in the last two years has been the transition from the old, majestic, slow-moving, and unpredictable release model, where we shipped a release every two or three or five or seven years or whatever it was, to the rapid-cadence model, in which we ship a new feature release every six months, no matter what.

The last big elephant was Java 9, JDK 9, shipped in September of 2017, which seems like it was just yesterday but is actually quite a while ago by now. There were 90 JEPs in that release. There were two major slips to the schedule; it took three years and six months once those slips were accounted for. That's the last big one. After we shipped 9, we shipped JDK 10 in March 2018, six months later; 11 in September 2018; 12 in March 2019; 13 in September 2019. We'll ship 14 in the middle of next month, and 15 this September, and so on, every six months, like clockwork.

A feature release can contain any kind of feature. These are not just the old update releases of the past, right? A feature release can contain a language feature, a VM feature, a library feature, and so forth. (Somehow I have skipped ahead, sorry.) The important thing that makes this work, the reason we've been able to be this successful so far with this model, is that we no longer put features in before they're finished. A feature can only go in when it's nearly done, because we can't afford to slip a release to fix some broken feature. So it's a new level of discipline, but it has a lot of benefit. And with another release just six months away, if you're working on a feature and it doesn't make this one, that's okay. You're not rushing to get something in that's going to be broken just because the next release is three or five years out. So that actually all works out fairly well.

There is the question, though, of how long these things are supported. In the OpenJDK community, we update the current feature release for at least six months; that's two quarterly update releases, three months apart. And then every three years we declare a long-term support release. So 11 is the first long-term support release, and 17 is the next one. If you like, you can think of 8 up there as the previous long-term support release. Each of these LTS releases will be updated well past the beginning of the next LTS release, and possibly even longer, depending upon what the maintainers in OpenJDK decide to do.

Now, you might think that the non-LTS releases are in some way experimental, that they're fancy beta releases, just early access. They're not. Every one of these releases is production-ready. What differs is only the support timeline.

Speaking of updates, where do you get them? You can get updates from a variety of providers, including Oracle. Oracle ships OpenJDK builds under the GPL for the first six months of each feature release, the GA release plus two updates, whether it's LTS or not. After that, Oracle offers long-term support builds, but unlike in the past they're not free: they're available under a commercial license that allows free use in development and testing but requires payment for use in production. However, that doesn't mean you have to pay Oracle for Java update releases, because Java is still free.
All the code is still under the GPL, and even though Oracle engineers aren't contributing to the OpenJDK long-term support releases anymore, other contributors, under the leadership of Andrew Haley, standing back there, are continuing the fine LTS work that they've been doing for many years in the OpenJDK community. As a result, you can get carefully built, well-tested JDK LTS builds in almost any Linux distro. If you're not using Linux, you can get builds from a variety of providers.

So we've transitioned to this new release model, big change, but what have we actually delivered since Java 9? Well, it turns out we've delivered quite a lot.

JDK 10, Java 10, contained 12 JEPs. We actually worried a little bit going into it: okay, it was only six months after 9; what if the release has no significant features in it? Well, it turned out to be actually pretty rich. We had a new garbage-collector interface. We had an actual language feature, local-variable type inference, var for Java (but it's not dynamic typing, don't worry). And there was a massive refactoring of the source code into a single Mercurial repository, a long-term play that's paying additional benefits now. As I go through these lists, the items in orange, just for reference, are from non-Oracle contributors. Okay, that's 10.

11 contained 17 JEPs. There's a new HTTP client API, the ability to launch single-file source-code programs, TLS 1.3, the Epsilon garbage collector, low-overhead heap profiling, and a bunch of other stuff.

Java 12, JDK 12, had eight JEPs, including the Shenandoah garbage collector, a microbenchmark suite for the JDK, and the first preview language feature, switch expressions. I'll talk a bit about these later on, and also about what a preview feature is.

JDK 13 was a little thin, only five JEPs, but it did contain another preview language feature, text blocks. A little on the small side, but 14 is going to make up for it with 16 JEPs. A whole bunch of good stuff in here: pattern matching, another language feature in preview; a packaging tool; JFR event streaming; non-volatile mapped byte buffers from Andrew Dinn; helpful NullPointerExceptions from Goetz Lindenmaier; records; switch expressions again; deprecate the Solaris and SPARC ports (bye-bye); remove the Concurrent Mark Sweep garbage collector (bye-bye); ZGC improvements; remove the Pack200 tools and API (bye); and the foreign-memory access API from Maurizio and company. All this is still in development, but we are really winding down: we're in ramp-down phase two, and we have the first release-candidate build next week. If you'd like to help out, you can get builds here. Please download them, test them, and if something is really wrong, let us know. Okay, that's 14.

And 15? The repo is open, with zero JEPs targeted so far. There will be some. So far it's routine bug fixes, small enhancements that don't deserve a JEP, and so forth, but builds for this are available too, and have been since December. If you really want to be on the bleeding edge, this is the place to go.

So one of the main questions people had about the new release model is: will the non-LTS releases be adopted? Are people just going to move from 8 to 11 to 17 and so forth, and skip all these non-LTS releases? In the interest of gathering data, I would like to do some polls. This will also help you wake up if you're still jet-lagged, as I am. We often see people do polls on Twitter and elsewhere on the web about usage, and one of the things that I always find disappointing is that they'll just ask, well, are you using version X?
But they don't distinguish between production and development, which are very different things. So I'm going to ask pairs of questions.

How many people are using a release earlier than 8 in production? Okay, he's a proxy for a lot of people. Well, I guess it's nice that there aren't too many hands. How many people are using a version earlier than 8 in development? Oh, and not just because you're maintaining it? Okay. How many people are using 8 in production? Okay. 8 in development? Okay, that's a fair number of people, but fewer than are using it in production, so take that as a positive sign. How many people are using 11 in production? Nice, that's maybe a third of the room. How many people are using 11 in development, for future stuff? That's maybe half of the room, almost? Excellent. How many people are using 13 in production? Okay, I don't know, six, eight. Excellent. Good on you. How many people are using 13 in development? Okay, a few more, maybe a couple dozen. How many people are using 14 in production? One, two. It's not GA yet, so you're using your own builds. How many are using 14 in development? Yeah, well, of course. Okay. How many are using 15 in production? This man has many scars on his back. Okay. Thank you.

That's useful and actually encouraging information, and it's consistent with some anecdata that we've seen at Oracle. Another thing I'm always skeptical and a little bit disappointed about is when people quote download numbers as a measure of popularity. But here, I'll go ahead and do it. Understand it's anecdata; take it as you will; maybe it doesn't mean much. In the download statistics for the builds that Oracle publishes, GPL or otherwise, we have seen an uptick in downloads for 12 and 13. So it could be a sign that there's a bit of a takeoff here: people are realizing that once you get from 8 to 11, once you get past 9 basically, it is easier to move forward onto these six-month feature releases.

All right. Let's shift gears a little bit and take a closer look at a few of the features delivered since 11, and at some of the features still in the pipeline. I would like to try to tempt you to move past 8: at least get to 11, and after that, keep moving forward.

But before we look at specific features, let's first consider for a moment: how is it that we decide what features to add to Java? Is this a popularity contest? What drives Java forward? It really comes down to two things. For almost 25 years (it'll be 25 years this May, by the way), Java has been driven by two big goals. One is developer productivity: making developers more productive, helping developers build and maintain large and reliable programs. Java is not about one-off scripts; it's really about building things for the long haul and being able to maintain them for the long haul. The other big goal for 25 years has been program performance. Performance we measure in many ways: startup time, latency, throughput, and so on. We measure space, both static and dynamic, and we measure scalability, from iPhones to big iron. Java is meant to span all of these.

We pursue these two goals in the face of constantly changing factors, including new programming paradigms, such as mixed functional and object-oriented programming, and evolving applications, such as big data and machine learning. Who was thinking about that in Java 25 years ago? Certainly nobody.
Evolving deployment styles, to clouds and to app stores: it's no longer the era of Java Web Start, really, to say nothing of the Java plug-in, may it rest in peace. And, of course, there's also evolving hardware. We have machines these days with terabyte memories and deeper memory hierarchies, vector and SIMD instruction sets, and deeper and more numerous processor pipelines. So: goals, and challenges. In that context, let's look at some of the major active projects in the OpenJDK community.

We start with Amber, right-sizing language ceremony, a project being led by Brian Goetz that has already delivered a number of features. Loom, virtual threads and scalable concurrency, is being led by Ron Pressler; we'll talk about that a bit more later on. Panama, a new foreign-function and foreign-data interface, is being led by Maurizio Cimadamore, who's here somewhere. Hi! Sorry. And Valhalla, a big project to bring new types and specialized generics to the platform, is being led by Brian Goetz and John Rose.

So let's start with Amber. Amber is squarely aimed at the pain point of Java requiring too much ceremony: you have to write too much boilerplate to get things done. The solution Amber proposes is not to reduce boilerplate exactly, but rather to introduce a series of language features, delivered over time, that work synergistically with each other and let you express more clearly what you mean. That, in the end, will wind up reducing the boilerplate. Okay, let's go through a few of these. Brian covered some of them last year, so I'm going to go through them pretty quickly.

Suppose you have an enum for the days of the week: Monday, Tuesday, Wednesday, and so on. And suppose you want to compute the length of each day's name. Now, you could cheat and do that by invoking toString and String.length, but we're not going to do that in this example. We're going to write a switch statement. You switch on the day. You assign to the numLetters variable. You have a default. All kinds of stuff can go wrong here: you can forget a break statement, you can forget to assign, you can forget the default, which in this case you don't actually need, because the cases are complete. As Brian likes to say, this is the language daring you to make a stupid mistake.

When you think about this conceptually, we're just computing an expression, so let's write it that way, for heaven's sake. With switch expressions, you can. There's an additional benefit: in this case, the compiler can deduce from the enum declaration that you've covered every case, so you don't have to put in a default. If it can't deduce that, it will insist that you put in a default. In this case, you don't need one.
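[A sketch of that before-and-after, reconstructed from JEP 361; the enum and the numLetters name are taken from the JEP's example, and on JDK 12 or 13 this needs --enable-preview, a flag explained just below:]

```java
public class DayLengths {
    enum Day { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

    // The old way: a statement with fallthrough and breaks, daring you
    // to forget one of them or to forget an assignment
    static int numLettersOld(Day day) {
        int numLetters;
        switch (day) {
            case MONDAY: case FRIDAY: case SUNDAY:
                numLetters = 6; break;
            case TUESDAY:
                numLetters = 7; break;
            case THURSDAY: case SATURDAY:
                numLetters = 8; break;
            default:
                numLetters = 9; // WEDNESDAY
        }
        return numLetters;
    }

    // The new way: an expression; the compiler sees that the cases cover
    // the whole enum, so no default is required
    static int numLettersNew(Day day) {
        return switch (day) {
            case MONDAY, FRIDAY, SUNDAY -> 6;
            case TUESDAY                -> 7;
            case THURSDAY, SATURDAY     -> 8;
            case WEDNESDAY              -> 9;
        };
    }

    public static void main(String[] args) {
        for (Day d : Day.values())
            System.out.println(d + ": " + numLettersNew(d));
    }
}
```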
Switch expressions were first introduced in Java 12 as what we call a preview feature. A preview feature is a feature that's 98, 99 percent done, but we want to take one, or possibly two, of the six-month release cycles to make sure it's really baked before we totally commit to it. To use a preview feature, you have to specify --enable-preview on both the javac command line and the java launcher command line. If you don't do that, you will get a helpful error message explaining: hey, this is a preview feature; you need to be aware that you're using it, because we don't want you to commit to something that could change in the future. So switch expressions first previewed in 12, and they previewed again in 13, because we got some feedback from the 12 preview that suggested we should make one small change. We made that small change in 13, and in 14 the feature is now final; you no longer need the --enable-preview option in order to use it.

If you're curious about the history of all this, you can actually read the JEPs. Here's the first preview, JEP 325; the second preview, JEP 354; and then the final standard, JEP 361, which has a section about the history. It explains what has changed, what hasn't changed, and so forth.

Okay, let's look at another Amber feature: text blocks, also known as multi-line string literals, a feature people have been asking about for many, many years. We've all had to write code like this. Maybe you weren't emitting HTML, but you were doing something where you needed a string with multiple lines in it. So you do the double-quote, backslash-n, and plus thing; if you've got quotes in your string, you need to escape the quotes themselves with backslashes; and it's all kind of clunky. With text blocks, you can just write the text, between triple quotes borrowed from Python and other languages.

The details of this actually turned out to be quite sophisticated, in order to make it intuitive to use. The obvious interpretation would be: okay, this is a string, it's got multiple lines in it, and there will be all these spaces before each line, from the indentation of the source code. Of course, that's probably not what you want. You probably want this angle bracket to be in the first column. So the feature is specified very carefully to let you ignore that: it finds the common prefix of white space ahead of all of your lines and removes it, so that you get what you almost certainly intended. Now, if you really do need to indent, there's a new convenience method on the String class that will let you indent.

Text blocks previewed first in 13. We got some feedback and changed them a little bit in 14, where they're still in preview mode. They will almost certainly be final in 15. So if you're curious about them, check them out, and let us know if we missed something in this feature.
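[A sketch of the difference; the HTML snippet is illustrative, and on JDK 13 or 14 this needs --enable-preview:]

```java
public class TextBlocks {
    public static void main(String[] args) {
        // The old double-quote, backslash-n, and plus dance
        String oldWay = "<html>\n" +
                        "  <body>\n" +
                        "    <p>\"Hello\", world</p>\n" +
                        "  </body>\n" +
                        "</html>\n";

        // A text block: no escaped quotes, no concatenation, and the
        // common leading white space, the incidental indentation of the
        // source code, is stripped automatically
        String newWay = """
                <html>
                  <body>
                    <p>"Hello", world</p>
                  </body>
                </html>
                """;

        System.out.println(oldWay.equals(newWay)); // true

        // And if you really do want indentation, String::indent (a
        // regular API method since JDK 12) will add it back
        System.out.print(newWay.indent(4));
    }
}
```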
Moving right along, to a feature that Brian didn't mention last year. We've all written plain old Java objects. You have some fields. You have a constructor, with the obvious parameters, that does the assignments into the fields. You have some accessors. And what did I forget on this slide? Equals and hashCode. And toString. Yes. Right, okay, I've got to write those: type, type, type, type. Or tell your IDE to write them for you, and hope the IDE gets them right. The IDE can make this easier to write, but it can't make it easier to read: if you come back a year later to code that your IDE generated, you have to go make sure it's still actually right, because, well, maybe something changed. And it's easy to cut corners. Plenty of people do. I will confess: I will occasionally write a POJO in some quickie code and not write equals, hashCode, and toString, because I'm just lazy.

And all the encapsulation machinery of a class here is just not necessary, right? It's data. We just need a way to say that. And now we can, with records. A record declaration replaces all that stuff that's now in gray with one line: you write record, Point, x, y, curly braces, and you're done. Because sometimes data is just data; we don't need a whole class.

Now, you can customize records. Maybe you want your own hashCode because you have a different favorite prime number; okay, fine. Maybe you want to add additional methods: it's a point, so maybe we want to compute its norm, and we can do that. You can write a constructor if you want; the field initialization is generated for you, and you can check whether your incoming invariants hold and throw an exception if they don't. What you can't do is add more state. If you try to add a new field, you will get a compile-time error, because adding more state would violate the invariant that a record is meant to be a transparent holder for its data.

Records are immutable, which is sort of a nudge towards functional programming, and that's generally a pretty good thing. But if that doesn't work for you, that's okay: you can still write a POJO class. Those still work; we're never going to take those away. Records are in preview now in Java 14, JDK 14, so you can check them out and play with them.
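[A sketch of the Point record with the customizations just described; the invariant check and the norm method are illustrative, and on JDK 14 this needs --enable-preview:]

```java
public record Point(int x, int y) {
    // A compact constructor: check the incoming invariants; the actual
    // field assignments are generated for you
    public Point {
        if (x < 0 || y < 0)
            throw new IllegalArgumentException("negative coordinate");
    }

    // You can still add methods...
    public double norm() {
        return Math.sqrt((double) x * x + (double) y * y);
    }

    // ...but not more state: an extra instance field here would be a
    // compile-time error. The accessors x() and y(), plus equals,
    // hashCode, and toString, all come for free.

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p + " has norm " + p.norm()); // Point[x=3, y=4] has norm 5.0
    }
}
```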
All right, the last Amber feature, and this one Brian did mention. Here's another pile of code, the sort of thing many of us have written many times. You've got some object of unknown type, typed as Object, and you want to test whether it's an Integer, a Double, or a Point, and create a nice string describing what exactly it is. And each one of these tests is this really clunky thing. You test whether obj is an instance of Integer. Oh, it is. Okay: you declare Integer i and you cast obj to Integer. This is tedious.

With pattern matching, you can write this instead. This is an enhancement to instanceof: you ask whether obj is an instance of Integer, so it does a type test, a pattern test. If that's true, then it binds i, of type Integer, to the object, and does the cast implicitly. So now i is an Integer, you can pass it to String.format, and your code just got much more readable. Another cool thing is destructuring: given a Point, it will actually destructure it into x and y, so you can use x and y here rather than having to dissect the point through its accessor methods.

So patterns are in preview in Java 14. Unfortunately, there's no destructuring yet; it's just simple type tests. But there's a long-term plan to enhance this with destructuring and so forth. All of the Amber features are designed to synergize with each other. Eventually, patterns will play well with switch expressions, so you'll be able to write something like this, which is actually quite nice.
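[A sketch of the before-and-after for type-test patterns, with a stand-in Point record; on JDK 14 both features need --enable-preview. The pattern-matching switch at the end is a reconstruction of the direction sketched on the slide, not something in any release:]

```java
public class Describe {
    record Point(int x, int y) { }   // stand-in for the earlier example

    static String describe(Object obj) {
        // The old way: test, declare, cast; three chances to slip up
        if (obj instanceof Integer) {
            Integer i = (Integer) obj;
            return String.format("an int: %d", i);
        }
        // The new way (JDK 14 preview): the pattern binds d, cast included
        if (obj instanceof Double d)
            return String.format("a double: %f", d);
        if (obj instanceof Point p)
            return String.format("a point: (%d, %d)", p.x(), p.y());
        return "something else";
    }

    public static void main(String[] args) {
        for (Object o : new Object[] { 42, 3.14, new Point(1, 2), "hmm" })
            System.out.println(describe(o));
    }

    // The longer-term direction, patterns in switch with destructuring,
    // might look something like this:
    //
    //   String s = switch (obj) {
    //       case Integer i           -> String.format("an int: %d", i);
    //       case Double d            -> String.format("a double: %f", d);
    //       case Point(var x, var y) -> String.format("a point: (%d, %d)", x, y);
    //       default                  -> "something else";
    //   };
}
```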
Okay, that's a selection of features from Amber. Let's move ahead. I could say whatever I wanted about Amber because Brian isn't in the room; for the rest of the things I'm talking about, there are experts in the room who will please correct me if I get something wrong or stupid. So let's move on to Loom: virtual threads and structured concurrency, being led by Ron. And at this point I will actually switch computers, and hopefully the AV system will keep up, because I am going to do some demos. Always exciting. Thank you for the sound effects, George. Cool. I love it when things work; all too rare in this business. And for this I'm going to sit down, but I'm still in camera range, just at the edge.

Okay, so let's look at Loom. Whoops. Here I've got a very recent build of the Loom repo, in this case from GitHub. Ooh, fancy. Let's go into JShell and poke around a bit. So, we've all written thread code: t equals a new thread. Come on, Mark, use a little lambda here. Got a thread. Start it. It prints "hi", and I'm done. And this thread is still sitting around.

Threads in Java were kind of an interesting feature at the time. Twenty-five years ago, not that many popular platforms actually had threads, but a lot of people needed threads. And one of James Gosling's observations was: oh, well, we should give them threads, but we'll do it in a nicer language than, say, C or something. So a thread in Java has, for maybe not 25 years, maybe 23 years, corresponded to an operating system thread. That has benefits: it's a nice clean abstraction, it's easy to understand, and it's easy to relate to what's going on in the underlying system. But OS threads are expensive to create, and they take a lot of memory. A thread, by default, comes with, like, a megabyte stack outside of the Java heap, plus some significant space in the Java heap. And it's expensive to switch context: you can spend a couple of thousand instruction cycles, on the order of a microsecond or two on a modern processor, switching between threads.

Because threads are expensive, and because it's slow to switch across them, people who really want the best performance for something like a web server will turn to non-blocking I/O APIs, or async frameworks, or reactive frameworks, which can indeed get very high performance. But they're difficult: it's difficult to write programs that use such frameworks, it's difficult to debug such programs, and it's difficult to profile such programs. As a result, a lot of people don't bother, and servers wind up being underutilized.

So the principal feature of Loom, I would say, is virtual threads. For a long time we called these fibers, and we weren't sure: are fibers going to be threads? Are threads going to be fibers? What's the relationship between these two? Ron and Alan and other folks went through a long series of prototyping efforts and a long series of design analyses to try to figure this out, and finally decided last fall: okay, there's a certain attraction to making fibers be something new and shiny and clean, not saddled with all the baggage of Thread, but that would not accommodate all the existing code out there that uses Thread. So we decided fibers are no more. I mean, the mechanism is still there, but now they're called virtual threads. They use the java.lang.Thread API, but they don't have all the baggage and they don't take all the space. So that's where we are.

Let's create a virtual thread and see how that works. The Thread class in this build now has a builder. You can request a virtual thread, and I'm going to give it a task, which is the same task I had before. Let me just go ahead and start that. It does its thing, it prints "hi", and then I get back this: t is now a virtual thread. It's now terminated, and it shows no carrier thread. A virtual thread is virtual because it's not always associated with an operating system thread. It is scheduled by the runtime environment, by the Java libraries, in actual Java code, and context-switched there as well. When a virtual thread needs to run, it's assigned to an OS thread; when it's done, or when it needs to block, it's unassigned, unmounted from that OS thread, and the OS thread is used for some other virtual thread.

Now, the terminology is maybe a little weird, because all of this stuff is virtual, right? An OS thread is not a real thing either; it's just bits in memory. But anyway, a virtual thread is more virtual than a kernel thread. There you go. You can switch amongst virtual threads very quickly, in nanoseconds rather than a microsecond or two. They're smaller, only a few hundred bytes, and their stacks grow and shrink as needed, so they don't take up space unnecessarily. As a result, you can write simple synchronous code that is just as efficient as asynchronous code. And of course, since it's synchronous, it's far easier to read. You don't wind up in continuation hell, or in those reactive frameworks where one thing happens over here and eventually something happens over there and you can't tell what's going on at runtime.
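[A sketch of both creation styles from the JShell session. The Loom builder API was in flux at the time; the method names below follow the early-access builds and may well differ in other builds:]

```java
public class Threads {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println("hi");

        // A classic thread: backed one-to-one by an OS thread, with a
        // roughly megabyte-sized stack outside the Java heap
        Thread t1 = new Thread(task);
        t1.start();
        t1.join();

        // A virtual thread, via the builder in the Loom EA builds:
        // scheduled by the Java runtime, and mounted on a carrier OS
        // thread only while it's actually running
        Thread t2 = Thread.builder().virtual().task(task).build();
        t2.start();
        t2.join();
    }
}
```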
All right, a quickie demo here. Let's take a look, using my favorite IDE, at a little Java sleep service. This is a little demo that Alan wrote: a trivial little REST service using JAX-RS. It responds at this endpoint, called sleep. What does it do? Well, if nothing is specified, it sleeps for 100 milliseconds; it just does its sleep and returns. Pretend that the sleep is an actual blocking I/O operation or something. So let's run that with kernel threads, using the Vegeta load generator to pound it a little bit: it warms up for five seconds and then actually tests for ten. We'll do that, and it's going to generate a graph for us.

Here are the response times. Now, that's pretty ugly. We expect no response time under 100 milliseconds, right, because it sleeps; it's simulating some blocking operation. But these response times are all over the map. We've got some that are over a second long, because kernel threads are expensive and it's expensive to switch amongst them. So that's not very good performance.

Let's run this again, but this time we will use virtual threads, which turns out to be really easy, because the Jetty web server is configured by actual Java code rather than a pile of XML. All we had to do was go down to the definition of its thread pool and use a virtual-thread factory to create its threads. So: make them virtual. We'll let that run. Dun, dun, dun. Okay. Let's reload the graph. It's actually kind of hard to see, so I'll zoom in a bit. The orange is the response time with virtual threads. It starts out a little high, probably because HotSpot is compiling stuff, and there are a couple of blips, probably for GCs, but as you can see, the response time is almost always just barely above 100 milliseconds. A huge improvement. So that's virtual threads and performance.

I said that virtual threads are also easier to deal with. A good example of that: just a couple of days ago, the team got some Java Flight Recorder events working, showing the activity going on inside a JVM with Loom in it. Let me zoom in a bit. The important thing to note here is that you can actually see this code running along in this test; it gets to a read of a socket input stream, and it blocks. That's a virtual thread blocking, but in effect it's doing non-blocking I/O under the covers, because the virtual thread gets parked, the OS thread is used for something else, and the scheduler will come back to it when the read completes. You get a nice stack trace, because what you wrote was sequential code, so now you can debug it, and now you can get statistics such as the total amount of time spent on I/O. If you were using an async or reactive framework, you would not be able to get any of this information, because it doesn't really exist. So that's a big improvement.

All right, moving right along: Panama, the foreign-function and foreign-data interface. We all know, I suspect, the pain of JNI. It's just painful. You have to write C code. You have to compile it into a shared object. You have to tell the Java runtime where that shared object wound up, and so on and so forth. It's just a pain, so we're going to make it better.

Panama is a new Java foreign bridge. There are essentially three parts to it. There's a Java API for low-level memory access, which Maurizio will cover in detail later today, so I'm not going to talk much about that. There's an extraction tool, called jextract, that transforms C and C++ header files into Java interfaces that use that API. And then there's a runtime binding mechanism that synthesizes implementations of those interfaces.

Okay, so let's look at a quick example. We are probably all familiar with getpid, the simplest possible Unix system call: you invoke it, and it returns your process identifier as an int. Suppose we want to invoke this from Java. Okay, I know the Java API already has essentially a getpid method, but we're going to do this anyway. What we can do is run jextract on unistd.h, which contains the definition of getpid. It's using incubator modules, which are kind of like preview features, but not quite. We get some source files out of it, and if we look at the generated file, it actually looks kind of familiar: a bunch of the stuff that you find in unistd.h, including getpid, right here. For getpid it defines a method handle, which can be useful; more importantly, it defines a static getpid method that invokes some runtime machinery to actually cause the call to take place.

All right, let's make sure we have the right build here. Yeah, built a couple of days ago, with a last-minute fix from Maurizio. Thank you. First I need to compile, and since this is an incubator module, I have to add it explicitly; it's not included by default. So I compile the stuff in the generated-source directory and get some classes out of it. And now I will use the source-file launcher, one of the unsung features of the platform these days, to run it. But first I should show you GetPid.java; that would be brilliant. It's really simple: it invokes the getpid method on the class generated from unistd.h, boom, done. And for reference, it uses the actual Java API to print out the process ID as well. You run this, and what happens? Boom, it prints the process ID. That was so much easier than JNI. Yay.
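[The demo program, roughly. The generated class name, unistd_h here, and the incubator module name follow the Panama early-access builds of the time, and both may differ between builds:]

```java
// GetPid.java, run with something like:
//   jextract <options> /usr/include/unistd.h    (generates unistd_h)
//   java --add-modules jdk.incubator.foreign GetPid.java
public class GetPid {
    public static void main(String[] args) {
        // The static binding synthesized by Panama's runtime machinery
        System.out.println("getpid() says:      " + unistd_h.getpid());

        // For reference, the plain Java API gives the same answer
        System.out.println("ProcessHandle says: " + ProcessHandle.current().pid());
    }
}
```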
Okay, so that's Panama. How much time do I have left? About eight minutes. Thank you. Let's move on, finally, to Valhalla: value types and specialized generics.

Valhalla is motivated by this. Twenty-five years ago, the processors common at the time had maybe one level of cache, and if you missed in that cache, the penalty might be a hundred, a couple hundred, instructions. No big deal, right? Modern processors, including the processor in this laptop, have three levels of cache, and if you miss in the third-level cache, your processor is going to sit around for a couple of thousand instruction cycles, waiting for that cache miss to be serviced. So the cost of a cache miss has increased, significantly.

Why does that matter to Java? Well, Java, as we all know, is an object-oriented language, right? And what do you get in an object-oriented language? You get pointers. You get objects, and objects contain pointers to other objects, and you follow a pointer from one object to another, and another pointer to another object, and if you do that enough times, eventually you will blow out your cache. Cache misses lead to slow performance, slow performance leads to pain, and pain leads to suffering.

So chasing all those pointers is costly. One of the reasons there are so many pointers in Java is that every Java object has identity, right? You can always distinguish one object from another, which is generally a good thing. Every object has state. Every object potentially has a synchronization monitor; you can synchronize on any object. Why? Well, because. How useful is that? Far from clear. And this gets really painful, especially when you've got tons of data, as in a big-data app or something.

So we think the missing abstraction here is what we call value types, or inline types: the ability to declare pure data aggregates that don't need to have identity, but are still defined by a Java class. As John Rose likes to say, if we get this right, you'll be able to code like a class, but it'll work like an int. And that enables cool stuff, like allocating an object in processor registers: if it never needs to hit the heap, it'll never hit the heap, but it'll still work mostly like an object. Furthermore, data structures can be flattened, and that reduces pointers.

So, a quick example. Let's multiply some matrices, as one does. All right: we've got a Complex class. Woo-hoo. I wanted to write this as a record, the obvious thing, right? But records aren't yet in the Valhalla repo; they'll probably be merged in sometime soon. The Complex class has a real part and an imaginary part; you can add them, multiply them, and so forth. The actual benchmark is the standard cubic algorithm for matrix multiplication: you go down the rows and the columns, doing pointwise multiplications and additions, and you get the result matrix. And it's structured as a JMH benchmark, using Aleksey Shipilëv's very fine Java Microbenchmark Harness, so there are some annotations telling it what to do.

Let me run this now with just normal Java code; nothing special going on here. I'm going to multiply some matrices together, and JMH will measure that. We do some warm-up iterations. So it's multiplying matrices here, but my laptop's not working very hard. You can't really hear a fan or anything. Silence, right? There's a reason for that, which we'll see in a moment.

Okay, let's go back into Emacs and look at that log. At the top of the log we have the usual warning from Aleksey: think hard about your numbers; don't draw stupid conclusions. So, multiplying, we are clocking in at 1,133 microseconds per multiply. Each one of those multiplication operations is allocating almost two gigabytes, because it's making a lot of complex numbers, and those are read-only objects, and there are arrays pointing to arrays, and those arrays are pointing to Complex objects. And instructions per cycle, a very, very interesting metric, is barely over one. The processor in this machine is an Intel Skylake. It's theoretically capable of retiring four instructions per cycle, but the processor is spending most of its time waiting on the memory subsystem, so it's only retiring a little over one instruction per cycle.

Now let's go back to the Complex class and make one very simple change: public inline class Complex. So this is now an inline class, what was formerly called a value type, which means that when you have an array of them, rather than an array of pointers to Complex objects, you have an array of complex numbers, all represented in the array as their real and imaginary pairs, right there, with no pointer to chase.
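[A sketch of what the Complex class plausibly looks like after that change. The field and method names are a reconstruction; inline classes exist only in the Valhalla early-access builds, and the syntax may still change:]

```java
// One-word change: "class" becomes "inline class". A Complex[] now
// holds the re/im pairs flattened in place: no per-element object
// header and no pointer to chase.
public inline class Complex {
    private final double re;
    private final double im;

    public Complex(double re, double im) { this.re = re; this.im = im; }

    public Complex add(Complex that) {
        return new Complex(re + that.re, im + that.im);
    }

    public Complex mul(Complex that) {
        return new Complex(re * that.re - im * that.im,
                           re * that.im + im * that.re);
    }
}
```

[The benchmark itself, the triply nested multiply loop, doesn't change at all; only the declaration of Complex does.]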
We'll run this again. Now, remember, the number from last time was a bit over 1,000 microseconds per multiply. Now we're down to about a tenth of that. I mean, these are the warm-up iterations, but you can see it's quite a bit faster. And as this progresses, the CPU is working harder, as we can tell by listening to the fan. You hear that? It's not wasting as much time in the memory subsystem. And I am now... oh, I have one minute. Let's just do the quickie log thing and go to the end. Here's our time: 132 microseconds per multiply. Instructions per cycle: we're getting two and a half, quite a bit better. And if we do a quick division here, 1,133 over 132: eight and a half times faster. Not bad.

So, that's features. You can get builds for many of these things at jdk.java.net, so please do test them out. Don't believe a word I've said. And thank you very much.