 All right. So I prepared too much material, so I'm going to talk too fast. So the topic of this talk is Java Futures 2019 Edition. Thank you for catching the typo, Remi. And this is about where is the Java language going in the next few years. So I'm going to be talking about some things that are about to release and some things that are coming a little bit farther down the road. I work for Oracle, everything I say is a lie. Okay. So why do we bother evolving the Java language at all? Java has been around for more than 20 years. And as Mark said earlier, it's been declared dead over and over and over again. And a lot of people are kind of rooting for Java to be dead. And we plan to confound these expectations as we have done in the past. And the way we plan to do that is really very straightforward. Stay relevant. Stay relevant to the problems people want to solve. Stay relevant to the hardware people want to run on. And keep the promises we've made to our users. So there's no big secret to that. It's just make sure people want to keep using Java because it's the best way to solve their problem. So as you heard, we've switched to a more rapid release cadence. And that affects the way we evolve the language in some ways that kind of surprised us. So we obviously have more opportunities to deliver functionality and that's great. But it's also changed the kind of features that we've been working on. It's given us permission to work on some smaller features. When you had a three or four-year release cycle, you tended to focus all of your energy on the big stuff, lambdas, generics, modules, and the little stuff just got crowded out. And there are a lot of good little features that are worth doing, but somehow always took second place behind the bigger stuff. And so we found that the six-month cadence has allowed us to balance out working on smaller features and bigger features, I think, in a better way. 
It's also encouraged us to learn how to, and we're still in the process of learning, break up bigger features into smaller features and do things like lay the groundwork for future features in a current release, like issuing warnings on something that we think may change in the future so that it's less disruptive when the change gets there. That said, with more opportunities to release, you have more opportunities to release something too early that you're then stuck with for the rest of time. And so there's also a risk that the new release cadence gives us. And one of the ways that we want to mitigate that risk is to lengthen the pipeline out a little bit. We've shortened the pipeline a lot; we're sort of backing off a little bit by having each language feature go through a round of what we're calling preview features. So this is a feature that is fully complete, specified, implemented, but the paint's not quite dry. And we want to give ourselves one cycle, maybe two cycles in some cases, to gather feedback from real users, to spot things that maybe we missed in our analysis and our initial feedback, while we still have a chance to make small changes. And so we're calling these preview features. They're not really just beta features. I mean, the bar for a preview feature is very, very high. It has to be complete. It has to be fully specified. And we have to have a high degree of confidence that it really is ready. So in 12, we actually shipped our first preview feature. There are a lot of projects going on in the pipeline. Mark talked about some of these earlier. There are others that aren't even on the slide. I can't talk about everything that's going on, but one of the things the more rapid cadence has done for us is that our pipeline is better than it's ever been. When we were doing multi-year big-bang releases, by the time we got to the end of a Java 8, we had kind of spent what we had been working on.
And then it was a slow startup process to figure out what we were going to do next. With the more rapid cadence, we're able to balance between short-term and long-term work. And as a result, the pipeline is really fantastic. So Project Amber is the umbrella project for the small, productivity-oriented language features, a lot of the things that had gotten left behind by the bigger features we used to work on. I'm going to talk about a couple of those. The first preview feature we're delivering is enhancements to the switch statement. This is also an example of a smaller feature that sedimented out of a bigger feature. We started looking at the problems of switch when we started looking at pattern matching, which is a bigger feature we'll be working on for a while. And then we realized that some of the pieces could be factored out and delivered earlier and were generally useful, not just in the context they were originally designed for, but in everybody's code. So I think this is a success in a couple of ways, and I'll run through a quick example of it. It's not earth-shattering, but it does address a pain that we all live with every day. So here's a typical switch statement in Java. Switches are statements, which means that if you want to use a switch to effectively compute a function, you have to cheat by sticking a value in a variable in each case, and you'd better hope that you did that for every case. So this is a typical use of a switch to simulate an expression. It stinks in a lot of ways. It's an overly general control construct for the problem, which means it's more error-prone. There's this annoying need to break here. There's this annoying need to say default when we know for a fact that these are the only seven days in the week. It's not the code you really wanted to write. So there's this operation by mutation, yuck. There's this strange control flow. There's having to bake in exhaustiveness yourself.
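To make that concrete, here's a minimal sketch of the kind of statement switch being described; the Day enum and letter counts are illustrative, not taken from the slides:

```java
// Statement switch simulating an expression: assign in every case,
// remember every break, and write a default you know is unreachable.
enum Day { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

class LettersStatement {
    static int numLetters(Day day) {
        int numLetters;
        switch (day) {
            case MONDAY:
            case FRIDAY:
            case SUNDAY:
                numLetters = 6;
                break;
            case TUESDAY:
                numLetters = 7;
                break;
            case THURSDAY:
            case SATURDAY:
                numLetters = 8;
                break;
            case WEDNESDAY:
                numLetters = 9;
                break;
            default:
                // We know this can't happen -- but the compiler doesn't.
                throw new IllegalStateException("Unreachable: " + day);
        }
        return numLetters;
    }
}
```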
And this is what it looks like as an expression switch, which is kind of the code you had in your head when you sat down to write this thing in the first place. You wanted an expression. You didn't want a statement. And you wanted to be able to say: if it's Monday, Friday, or Sunday, then the number of letters is six. And you'd like the compiler to know that, well, Day is an enum, you've covered all of them, so why make me write a default clause that throws an I-can't-find-my-hat exception when the compiler can darn well put that in for you? So this is a simplification in a lot of ways. It's less typing, yay. But the real benefit is it's less error-prone. It's clearer. It's closer to the code you had in your head when you started. And we've actually done this as two separate enhancements to switch. One is that a switch can be either a statement or an expression. The other is a streamlined control flow, where in the very common case where each case has one action or one value associated with it, you can just say case, value, arrow, expression, or arrow statement, and not have to write the break yourself. OK. And you can mix and match these. You could use one or the other or both. The example I just showed you used both, but you could get the benefit of the streamlined labels with an ordinary switch statement, or use an expression switch with the old-style labels and fall-through, if you wanted. OK. So the switch expression feature sedimented out of this bigger feature, which is called pattern matching. Pattern matching is a pretty deep feature. I'm not going to be able to do justice to it in the time I have, so I'm just going to try to give you the flavor of it. An example of something we do all the time is test and extract. Does this object have this characteristic?
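As a sketch, again with an illustrative Day enum, the same computation written as an expression switch with arrow labels looks like this:

```java
enum Day { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

class LettersExpression {
    static int numLetters(Day day) {
        // Expression switch with arrow labels: no breaks, no mutation,
        // and because Day is an enum with all constants covered,
        // the compiler needs no default clause and checks exhaustiveness.
        return switch (day) {
            case MONDAY, FRIDAY, SUNDAY -> 6;
            case TUESDAY -> 7;
            case THURSDAY, SATURDAY -> 8;
            case WEDNESDAY -> 9;
        };
    }
}
```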
If so, do something to extract a certain value: cast it to something, pluck out its fields, something like that, and put them in variables so I can use them to do something with that. We do these things all the time when we program, and we do them together. And it would be nice to fuse these into one operation, because they are logically one operation. So when we say if object instanceof Integer, and the next thing we do is cast it to an Integer, that's really disappointing, because what else would we do next, right? The only thing we could do next is make a mistake by cutting and pasting from somewhere else and casting it to the wrong type, right? So this is not the language helping you write error-free code. It's the language daring you to make a stupid mistake, right? And so that's not a great way to do things, so let's make that better. There are a lot of ways that this particular problem can be solved, but I think pattern matching is more powerful than most of the others. And basically what a pattern match does is fuse those three things, the test, a conditional extraction, and a binding, into one operation. And so this is what instanceof looks like with a pattern on the right-hand side. Instead of saying instanceof Integer, we say instanceof Integer and then a variable name. And that fuses the: are you an Integer? If you're an Integer, cast it to Integer and stick the result in this fresh variable so I can just use it, right? And this is just a very simple kind of pattern. There are other kinds of patterns, and there are other constructs, like switch, that can use patterns. So it's a feature that goes pretty deep, but even this simple thing will eliminate almost 100% of the casts in Java code. So that's pretty nice in and of itself. And it interacts very nicely with Boolean expressions, for example.
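A minimal sketch of the instanceof pattern described above (the helper method and names are mine, not from the slides):

```java
class PatternInstanceof {
    // instanceof with a type pattern: the test, the cast, and the binding
    // are fused into one operation. No separate cast, no chance to cast
    // to the wrong type.
    static int intValueOrZero(Object obj) {
        if (obj instanceof Integer i) {
            return i;   // i is in scope only where the test has succeeded
        }
        return 0;
    }
}
```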
So if you're writing an equals method, I can say: if o instanceof this class, binding a variable; if that succeeds, it binds the variable, and then I can use it in the remainder of that expression. If you look at the control flow of an equals method as generated by a typical IDE, it's all over the place. If this condition is true, then return false. Otherwise, return true. Otherwise, do this complicated thing and return that. It's kind of hard to follow. This is a lot easier to follow: they're equal if the other thing is this class, and its size matches my size, and its name matches my name. Much clearer what's going on. So here's another example, using pattern matching in the switch statement. This is the kind of code that we often find ourselves writing: if something's an instance of Integer, cast it to an Integer and do this; otherwise, is it a Byte? Cast it to a Byte and do something else. We've all written this code. And it has all kinds of repetition. It has the repetition of the test and the cast. It has the repetition of how many times can I say instanceof. It has the repetition of assigning to the same variable, and I hope I'm assigning to the same variable in every one of those arms, but the compiler doesn't necessarily check that for me. If I turn my case labels into patterns, some of the boilerplate goes away immediately, which is great; the redundancy of the test and cast goes away. And the code is starting to look a little bit clearer. But if I combine it with what I showed you before, the switch as an expression, I can write it like this, which is again the code you probably had in your head when you sat down to write it. So why not let you actually write that code? When you say case pattern, it combines: does the thing match the pattern? If so, extract the relevant stuff from it and bind it to variables that have a scope that makes sense. So this is a pretty neat feature. And the patterns rabbit hole actually goes pretty deep.
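Here's a hedged sketch of such an equals method, using an illustrative class with the size and name fields mentioned above; the whole comparison reads as one Boolean expression:

```java
class Labeled {
    final int size;
    final String name;

    Labeled(int size, String name) {
        this.size = size;
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        // They're equal if: the other thing is this class, and its size
        // matches my size, and its name matches my name. The binding
        // "other" is only in scope where the instanceof test succeeded.
        return o instanceof Labeled other
            && other.size == size
            && other.name.equals(name);
    }

    @Override
    public int hashCode() {
        return 31 * size + name.hashCode();
    }
}
```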
I'm not going to dive into it because I would definitely run out of time. But it's definitely something that we can deliver in little bits over time. So we'll do type patterns in instanceof first, and then we'll probably do type patterns in switch, and then we'll probably do deconstruction patterns and move on from there. So, OK, I'm going to switch gears and talk about a much bigger project, one that some of you will know I've been talking about, and not delivering, for a very long time. And this is the nature of the kind of big research and development projects that are reboots of big chunks of the platform. We've been working on Valhalla for almost five years. We've done five rounds of prototypes. We're finally getting close enough to understanding the problem that we think we can transition from the research phase of the project to the development phase of the project. And that's actually pretty good, although some of you are tired of hearing me talk about it. So why is Project Valhalla so important? The goal is rebooting the way the JVM lays out data in memory. And this is important because in the last 25 or 30 years, hardware has changed drastically. The relative cost of an arithmetic operation and a memory operation was one to one 30 years ago. And now a full cache miss can cost you 1,000 instruction issue slots. So with the reality changing out from under us by such a degree, it stands to reason that the way we were laying things out in memory probably isn't optimal for today's hardware. And if you look at the data structures we put in the heap, there are a lot of little nodes with pointers to other nodes. And those pointers mean indirections, and indirections mean cache misses. And that is something that has the potential to hurt performance across the board. And the root of this is the philosophy of everything as an object, which made perfect sense in 1990. But the result is that a lot of programs are paying for a benefit they're not getting.
So if I have a Point class and I have an array of points, this is what it looks like in memory. Each one of those elements of my array is really a pointer to an object with a header and a small payload. And so if you look at memory efficiency, I'm losing here, because I'm using up a lot of space for headers and arrows compared to the amount of space I'm using for the actual x, y numbers. And I'm paying in time, because as I walk through this array, I'm risking a cache miss on every element I look at. Now, sometimes what developers will do when they figure this out is hand-shred their code, breaking their objects apart into arrays of their fields, which is exactly what we don't want people doing, because this kind of code is much harder to maintain. It's less readable. It's more error-prone. But it's our fault. It's our fault because we gave developers a choice of either maintainable code or fast code. And developers will always choose the fast code, even when they don't have performance requirements or tests or anything like that. So this is the problem that we get. And this fundamentally goes back to the fact that every object has an identity. So this is the data layout we want most of the time: I have an array of x, y points, I should lay it out in memory x, y, x, y, x, y. So the question is, what kind of code do I want to write to get this layout? And our claim is: say that Point is a value. A value is an aggregate like a class, but it doesn't have identity. It's just its data. It's just a wrapper for its data. Two points are equal if they have the same x, y values. That's the whole story. And when you tell the VM that, that you don't care about the identity, you're never going to lock on it, you're not going to extend it, you're not going to mutate it, the VM can repay you by saying, aha, I can give you this data layout. In the previous case, the VM was always guessing: well, you haven't locked on it yet, but you might lock on it later.
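As a sketch of the idea, and only that: the declaration below uses prospective Valhalla-style syntax, which has varied across prototypes and does not compile on shipping JDKs.

```java
// Prospective Valhalla-style syntax (illustrative only; not valid Java today).
// Declaring Point as a value class gives up identity: no locking on it,
// no subclassing it, no mutating it. It's just a wrapper for its data.
value class Point {
    final int x;
    final int y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

// The payoff: an array of such points can be laid out flat in memory as
// x, y, x, y, x, y -- no per-element headers, no pointer chasing.
```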
And so it had to pessimistically lay things out in a less than optimal way. And so there's a trade that you're making as a programmer. You're saying, I don't need as much from my object model for this particular class. It's just a dumb numeric class. It's a complex number. And in return for giving up that flexibility, you can be repaid with better performance. These can be flattened into other objects, other values, into arrays: denser, flatter memory, better performance. So value types have some of the behavior of classes, in that they have fields and methods and constructors and type variables and a lot of these things that classes have. But when the rubber hits the road, they behave more like primitives. And this is deliberate. We're trying to get the best of both worlds. And our mantra here is: codes like a class, works like an int. Now, it codes like a restricted form of class. There are things you can't do. But if you can fit within those trade-offs, you can get the benefits. So, OK, who cares? Who's this good for? Well, my claim is this is good for everybody. If you're writing an application that's working with large data sets, you get, first of all, much better memory density, and second of all, better locality. So application writers can more directly control the layout of their data in memory. Library writers can do really cool things with this. We can make HashMap faster by using values in the implementation instead of linked nodes that are full-blown objects. And that means every application that uses HashMap, which is every application, will just get faster. So that's cool. Compiler writers love this. Think of all the stuff that the Scala compiler has to do because it's not exactly like Java, and it has to simulate things with objects. Ruby has the same problem. Ruby Fixnums have to be represented as objects. So compiler writers can use this as a compilation target.
And languages other than Java on the JVM are probably going to see an even bigger boost from this, because right now they're paying an enormous simulation penalty to make their non-Java language work on the JVM. And so I think this is something that's going to make everybody happy, either directly or indirectly. So like I said, we've been running this project for almost five years. And in that time, we've built five different prototypes, each aimed at answering a different aspect of the question: let's hold the problem constant except for this one aspect, we'll do a prototype, and we'll see what we can learn from that. And with the latest prototype, which we're calling L-World 1, I think we've turned the corner: we've validated the VM underpinnings that get us flattened layout, the JIT optimizations, calling convention optimization, scalarization, and enough language support that you can actually write a program that uses these things. So we're hoping that the next prototype, which is coming in the next year-ish time frame, is something that people in this room could actually try out, to write programs with value types and give us feedback. So here's an example of how this pays off. Let's say you want to do matrix multiplication over complex values. So you do it the obvious way. You have a class that represents complex numbers. You have arrays of complex and two-dimensional arrays of complex to represent matrices. And you write addition and multiplication in the obvious way. And the only thing that's not good about it is: look at all that allocation. You're going to be spending more time allocating than actually multiplying things. And similarly, if you want to implement a matrix multiply, you do it in the obvious way, and you pay the obvious penalties.
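For flavor, here's the "obvious way" as a minimal sketch (the class and method names are mine); note that every arithmetic operation allocates a fresh Complex:

```java
// The obvious identity-class implementation of complex arithmetic.
// Each add or mul allocates a new object, so a matrix multiply over these
// spends much of its time allocating and chasing pointers, not computing.
final class Complex {
    final double re, im;

    Complex(double re, double im) { this.re = re; this.im = im; }

    Complex add(Complex that) {
        return new Complex(re + that.re, im + that.im);      // allocation
    }

    Complex mul(Complex that) {
        return new Complex(re * that.re - im * that.im,      // allocation
                           re * that.im + im * that.re);
    }
}
```

The claim in the talk is that the value-type version changes only the class declaration, not these method bodies: the VM can then flatten arrays of Complex and scalarize the arithmetic.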
So we ran this, both the version I just showed you and a version that was modified by adding one word, value, in front of the declaration of the class Complex, and we saw a factor of 12 performance difference. Now, where did we think that performance difference was going to come from? Well, some of it was not doing boxing at all, and some of it was not doing as much indirection, so you could keep your arithmetic pipelines fed with data instead of waiting for data from the cache. And if you look at the instructions-per-cycle metric, you see exactly that. Instructions per cycle in the boxed version was about one on a four-issue machine, and it was almost three with the value version, which meant that, yes, we were keeping those arithmetic units fed with data, keeping them busy, and not just having them sit and wait for data coming through the pipeline. So we think this is good validation that we're moving in the right direction, and there's lots more to come here. So, summing up: the only thing I'm sad about our pipeline is that I can't deliver it fast enough. We have all this great stuff we're working on in so many categories, language productivity features, fundamental VM performance features, native interop, concurrency models. These are all starting to bear fruit, and there's lots of really good stuff coming. So next year I hope to be talking about different stuff, some of the same, but a lot different, so come back next year, keep me honest, or better yet, come get involved. Thank you very much. Did I actually talk fast enough to fit that in my budget, or were you just taking pity on me? Yeah, that was really, really remarkable performance. Thank you very much. Do we have a few minutes for questions? Go on, let's have a couple of minutes for questions, and then we'll go into the GB meeting. All right. Looks like we have time for one. I'm Joe Java. How do I best help, considering I'm not a VM expert? You're not Joe Java.
If you were Joe Java, the question is, how can you best help? So the kind of feedback we need is: show up on amber-dev or valhalla-dev, try out the prototypes, write a toy program, and say, this is what I was able to get working, and this is what I wasn't able to get working. Write tests, identify things that we might have missed, try using these features in your programs, I think, is the best way to do it. Because we can think about it and bash our heads against the whiteboard for as long as we want, but we can't see 100 percent of the implications. So we need help from people to point out, here's something I noticed when I actually tried to migrate my code to use this feature. Okay. What about Project Metropolis? Project Metropolis. Okay. So that was one of the features that we didn't have time to even talk about. So Project Metropolis is about adopting components of the Graal project, in particular the Graal JIT compiler and Graal as an AOT compiler, into OpenJDK. So we've got an experimental version of AOT compilation in the JDK. We have an experimental version of Graal as a JIT compiler in the JDK, and Project Metropolis is about turning these into something that isn't experimental, putting us on a path where it would be credible to replace the C2 compiler with the Graal compiler. We're not there yet, but we hope we'll get there someday. I work in a very big company where Java is very popular, and also the use of Lombok, which is a hack to do basic metaprogramming in Java, is very popular and widespread. Is there any plan to avoid having to use Lombok? Yeah. So one of the features I didn't talk about here was, for lack of a better term, algebraic data types. Sum and product types, records and sealed types, there are a lot of different things you can call them. There are a lot of things that we do when we're declaring data wrapper classes that are unnecessary boilerplate, and we care a lot about limiting that boilerplate.
Not because we think you should spend less time typing. I mean, you should, but that's not why we do it. It's because having to write all this stuff out longhand that the compiler could figure out on its own is an opportunity for you to make a mistake, right? So we look at boilerplate reduction not as make my code smaller, but as make it more obvious what my code does, make my code less error-prone. And we have a couple of things along those lines in the pipeline. One more. I was looking at your slide. Would we consider a policy of no preview features in LTS releases? Preview features and LTS are completely orthogonal. LTS is a support mechanism. Oracle happened to decide that we're going to make certain support commitments. Other companies can make their own support commitments. Azul could decide that they're going to support Java 9 for the next 137 years, right? That's a choice they can make. Probably will. Oracle hasn't made that choice, but LTS is about commercial support. Preview features are about not pushing features out before we've gotten enough feedback from people who have used the feature in anger. They're just orthogonal. We don't want to make feature selection decisions based on what the support model might be for a given version. Thank you very much. That was justly popular. Thank you.