So, I think I'll give you — in the 25 minutes I have, ha-ha. Really, in 25 minutes? Yeah. Oh, wow. I'll give you more now. Say, seven lightning talks, how about that? The slides have about 40 more minutes of material than I have time for. But luckily, you don't have to catch every word, because I'm not telling you anything you can't take away from the slides afterward. So, the first lightning talk: what's been going on in the VM? This is a slide I recycle at these events, from JVMLS and the like. What should the VM look like? What has it looked like over the past 20 years? What will it look like 20 years from now? These are my speculations, and also aspirations. It should continue to have a uniform model that covers objects and interfaces and arrays and primitives in a way that lets you work with them all together and even write generic algorithms over them. That was very imperfect in the beginning of the JVM; the JVM still has this hard division between primitives and objects. But we're working on making that better. We also, of course, want efficient use of whatever hardware resources are standard, which today means cache lines of approximately 64 bytes that are prefetched and are weakly ordered relative to each other. So we want to be able to have flatter data. That's one of the things I'll return to a little later: we want naturally localized data structures that work well in that world and that don't waste a lot of space on housekeeping. Of course, we want not only the data to be nice and tight but the code to be nice and tight. We don't want to run some kind of bytecode engine with lots of simulation overheads all the way down. We want the VM to drop your code right down close to the metal and run it as fast as possible. One thing that will be very different, I think, 20 years from now versus 20 years ago is that threads will no longer be the cool thing.
Java was an early adopter of threads, and now threads are showing their age. We want something finer grained. We also want interoperability. Java started out as a set-top box language, a safe execution environment with security and integrity guarantees. It still is that, but it has a more porous boundary nowadays, and the ceremony required to tie in your native code and data with JNI is too much. We want to do better than that. We want the VM to continue to be broadly useful. We want it to run whatever the significant modern languages are, today and 20 years from now; we want the JVM to run them. That will probably include C and C++ someday, if they're still modern 20 years from now. And then finally, looking back, we want the JVM to continue to be compatible. We want it to run the dusty decks, and make them perform by getting, like I said, lower down in the hardware. Whatever the new CPUs are 20 years from now, we want the JVM to run well on them. So this is our top-level goal: to keep the JVM relevant, applicable, and useful — in a word, vibrant. How do we make things vibrant? Well, today's workloads are pretty cloudy, so we're paying a lot of attention to cloud execution models. We also, of course, want to stay compatible with what has come before. One of the reasons I think Java has done so well in the long haul is a commitment, a sort of culture of bias toward compatibility. We want programmers' code to continue running, and we don't want to make them re-engineer their code every time, the way you do when you recompile with a C++ toolchain. I think that gives us a certain amount of trust in the community, and you saw the flip side of that with JDK 9, with people worrying about adoption problems. I will say we tried really hard, and I think we succeeded well, in keeping compatibility at a high level in JDK 9, where we did have some breaking changes. The JVM has always been fairly reliable. We want it to be more reliable.
We want you to trust it not only with your code but also with your data. We want it to have good tools, and of course we want open source collaboration. The JVM and Java deserve to be something that's developed out in the open and sunshine. And what's developed out in the open and sunshine should be the best toolchain on the planet for running code. It should give you the best access to your metal, so we can basically run the world on Java. I would guess we already carry a fair fraction of the world on Java, so let's keep going and double down.

Now, in order to make this work, the VM in particular has a whole ton of technical initiatives in flight. This slide just gives you a snapshot of it, and I'm not going to explain it; I'm going to move on, trusting that you'll be able to see the slides after the fact. I'm relying on that. I want to give six lightning talks, in the very short time I have remaining, about the trends that I think are interesting and relatively new. Java on Java, as exemplified in Project Metropolis, which we've already heard about. Valhalla — I'll explain that in a second, but that's value types. Richer language support: some really interesting little hooks that we're doing in collaboration with the language nerds that Brian leads — we have fruitful conversations between Project Amber and Valhalla. And then of course the hardware access in Panama that Mark alluded to is another interesting trend to watch. Concurrency getting finer-grained and less full of race conditions — that's trend number five; I think we're going to get better at that over time. And also, let's scale up to bigger workloads: there's some really exciting stuff happening in recent months and coming months on scaling.
And all of this, I emphasize, we want to happen in the sunshine, in the open, with the community; this is something we want to do collaboratively.

So, trend number one: Java hosts more of itself. As Mark pointed out, the Graal JIT is coming as an experimental feature to a release near you. This is just the beginning of a series of experiments where we think we will eventually be able to tear ourselves away from that C++ code we love so much and code more in Java — starting with the Graal code generator, which is an amazing piece of technology. It actually has quite a long lineage: much of it has been fostered at Oracle Labs, and much of it at Johannes Kepler University in Austria. Deep, deep research — the scales here are really quite impressive. We're looking at ten-ish years of calendar time, and over that period probably hundreds of staff-years of engineers working away, many of them grad students who are now in the community as professors or researchers. It's been an amazing run, and now we've used Graal as an engine for AOT. That was our first use of Graal in anger in OpenJDK, for the JDK itself, and it's also a candidate JIT you can opt into, as Mark mentioned. So those of you who have experimented with this, who have tried out Graal as a JIT of the future — there's more to come; join us. Graal is also good with interpreters and scripting languages, but you'll have to look elsewhere for information about that side of it. Metropolis is the experimental clone of JDK 11, or whatever the current JDK is, where we try to make Java implement itself. We bootstrap Java on Java: we nudge out the C++ code and the assembly code that we've been depending on and replace it with up-level Java code. This requires a certain kind of systems Java to be able to replace the C++.
We are experimenting with an engine called SVM, SubstrateVM, that will give us the ability to shrink down closed Java subsystems into code similar to what a C++ compiler creates. That will lead us into a world where we can take discrete modules of HotSpot — maybe even the verifier — write them in systems Java, and replace the C++ code. The big one, of course, is replacing C2, but C2 is just the most optimizing code generator; we also want to be able to replace C1 and the interpreter and various stubs, so that we have a system running on one code generator. And of course, last month we discovered what happens when you have too many code generators: you get a hardware bug that requires them all to be upgraded. Better to depend on just one code generator. And the right thing for the Java ecosystem is to own our own code generation, rather than get it from our excellent colleagues at LLVM. So: tomorrow's reference implementation — more Java, Java on Java. And here's a roadmap. I won't go into the details, but we think we can make this work, and we're getting step-by-step closer.

Next, let's talk about how the primitives get classier. Of course, they're very good as they are, but maybe we can add some class to them. The big idea is value types, of course. What is a value type? Well, it's a thing which is like an object, but it's also like an int. It has the good features of both, mixed together in a new way that you can't get from today's objects or today's ints. So we want to heal the rift between objects and primitives. Primitives have always felt like a sort of bolted-on part of Java; we want them to be more classlike. So, the trick here is to distinguish a new kind of class, called a value class, which is distinct from an object class. Value classes define things like ints, which have interesting properties I'll tell you about very quickly now. The basic slogan is that it codes like a class, but it works like an int.
In other words, a value class has methods, fields, and all that stuff. But in the end, it packages up and scrunches down into a unit of stuff which behaves like an int. I'll tell you in a minute how it also sort of behaves like an object. This project also requires something that's really hard. What seems even harder is doing parametric polymorphism — full polymorphism, generics — over not only objects but also values, including primitives. So, List of int: yes, that's coming. And List of Complex, where Complex is a user-defined value type. Good stuff. This will require — oh my gosh — the biggest change we've ever made of this kind, and I hope the last big change of this size. In any case, it will require huge changes to tools and VMs. The effect on our source code will probably be similar to the effect that generics had, or that lambdas had: in other words, we will have to learn some new things — here's how I code value types — but the benefits, I think, will be comparable. Basically, classes are really great for correct encapsulation, and the system of interfaces that Java has is a really good way to talk about classes without knowing much about their implementations. We believe we can extend that uniformly to value types as well. And once we grandfather in the primitives as value types, we will have a unified type system. If we bring the two sides together, they both get stronger. Primitives, from this point of view, are really just a distinctive kind of class. I hope someday we will actually write class declarations for int, float, boolean, etc. that show you what their intrinsic state and methods are, and those will just be value classes. And yes, they'll have special treatment in the VM, but that won't be what you care about. Here's a bit of theory, of which I'll give you just a taste.
Here's what's different about value classes compared to object classes. The JVM is encouraged, and able, to flatten value types. In other words, if I have a containing type with a field of type Complex — maybe even Complex of float — then that Complex payload gets flattened right into the containing object. That's called flattening, and arrays of them will be flattened in a similar way. The VM isn't required to do this, but it's allowed to and encouraged to. That's what gets us the cache-friendly data structures we're after. In order to do that, value types must be non-nullable. Null doesn't fit with int: you'd need 33 bits if you wanted int-or-null as a type, right? So 32 bits means a non-nullable int, and the same goes for value types. They also have to be identity-free. You can't say which 42 — "I want to lock this 42" — that's not going to be allowed. So even though the VM does have little pointers to buffers holding value payloads, those have to be completely isolated and made invisible to the user. Interesting challenge. So they're flattened, and they can be buffered: the VM can refer to them internally by pointers, but pointers are never significant to the user, and therefore the VM can re-buffer them. It can hold the same value buffered in two places at once; or, if it sees the same value in two different places, it can maybe de-duplicate them on a GC pass. All those things are allowed. Once identity is removed, interfaces apply equally to value classes and object classes, which means interface code can work equally well on values and objects. And Object itself becomes an honorary interface. That's a piece of forced fit we have to do in order to make Object a common supertype of values and objects. Basically, Object becomes sort of like Comparable or Runnable: an abstract thing. And that gives us the ability to work conveniently with value types in today's erased generics. Although we want to do more: we want parametric polymorphism that works across all values.
But at the same time, the new generics will let you compile specialized code that is optimized for the particulars of a specific value type parameter. So if I say List of Complex of float, I want the internal layout of that list not to box each individual Complex or each individual float; I want it to be some sort of flat array with real floats in memory, all laid out in a small set of cache lines. And that's just the starting point — there's a lot more to say about it — but what we need is something like C++ templates, only more dynamic. This mechanism will give us a lot of interesting plays once we get it online. Okay. One way to think about this: remember that when you're coding generics, your generic types T are in some sense universal. A T is not the same thing as an object type or an interface type; it's a type-parameter type, and there are tricks you can do with a T that you can't do with a plain old interface. The same will be true for these new enhanced generics. This is hard. I won't tell you why, but that's the slide. Okay. The point of full parametric polymorphism is that you can write generic algorithms over all of your types, not just the object types — all the values, primitives, and reference types. Have you noticed those places, like in java.util.Arrays, where there are seven or eight overloads of each method, one per primitive type, and the overloading hides the shame of it? If you look at the source code — we're going to fix that, right? We're going to be able to genericize over all those types and have one true generic sort algorithm, for example. I think that will lead us to a better kind of generics and better algorithms over arrays. One important thing to note is that we do not intend to expand these things statically. We intend to expand them dynamically, in a sense only when needed, although AOT will be able to pre-provision the template species that are necessary. So, I'm going to skip that last one.
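Since I keep saying "codes like a class, works like an int," here is a minimal sketch of what I mean. To be clear, this is my illustration, not shipping syntax: under Valhalla you would write something like `value class Complex`, and the VM could flatten it; in today's Java the closest approximation is a final, identity-free class whose equality is by state.

```java
// Sketch only: "value class" is a Valhalla proposal, not current Java.
// A final, immutable class with state-based equality approximates the idea.
final class Complex {
    final double re, im;                        // intrinsic state

    Complex(double re, double im) { this.re = re; this.im = im; }

    // codes like a class: ordinary methods
    Complex plus(Complex c)  { return new Complex(re + c.re, im + c.im); }
    Complex times(Complex c) {
        return new Complex(re * c.re - im * c.im, re * c.im + im * c.re);
    }

    // works like an int: two Complex values with equal state are the same value,
    // so identity (==, locking) should never matter
    @Override public boolean equals(Object o) {
        if (!(o instanceof Complex)) return false;
        Complex c = (Complex) o;
        return re == c.re && im == c.im;
    }
    @Override public int hashCode() {
        return 31 * Double.hashCode(re) + Double.hashCode(im);
    }
}
```

Once the VM knows no one can ask "which 42," it is free to flatten a `Complex` field into its container or an array, exactly as described above.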
Okay, what about species? A species is the combination of a template, which has formal holes, and concrete fillings for those holes; filling them causes a particular template to be expanded into a species. That's the basic idea. The VM has to be deep in the game on this, so it requires deep cuts in the VM. The hardest problem is to arrange things so that you don't need a million copies of the code, one for every different species. You want shared code where possible, but when it gets hot, you want to specialize it, just like the JIT does today. Interesting thing number three, in my last six minutes. Amber is, as I said, a sort of handshake between the language designers and the VM. There are a number of little toolkitty things there that are really cool. One of them is just sailing in toward JDK 11, and it's part of a trend that I call "bootstrap methods everywhere." Bootstrap methods turn out to be a good trick, so we're going to use them like crazy. We have bootstrap methods at call sites; a bootstrap method at a constant-pool entry — that's condy. A bootstrap method in a method definition doesn't have a cute name yet — maybe "mindy" — but what it really is is a method recipe that expands on use and produces the body of the method. Can you imagine writing your two screens of boilerplate that way? Then you don't have to load it; it just expands when you use it. Other pieces are nestmates and sealing. You can watch the mailing lists to see the smaller features we're doing in support of language work. What this leads to is a situation where, when the language designers tell the VM designers what they need, the VM designers can give them the right generic mechanisms for new kinds of translation strategies; then the language designers can do really wild new features without increasing the bytecode size of their classes.
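To make the bootstrap-method handshake concrete: the standard condy bootstraps live in `java.lang.invoke.ConstantBootstraps` (JDK 11+). Normally the JVM invokes one of these lazily, the first time a dynamic constant-pool entry is resolved; as a sketch, we can also just call two of them directly to see the shape of the protocol (the class name `CondyDemo` is mine).

```java
import java.lang.invoke.ConstantBootstraps;
import java.lang.invoke.MethodHandles;
import java.util.concurrent.TimeUnit;

// Each bootstrap takes (Lookup, name, type) -- the same triple the JVM passes
// when it resolves a dynamic constant -- and returns the constant's value.
final class CondyDemo {
    static Class<?> intType() {
        // "I" is the field descriptor for int; the bootstrap resolves int.class
        return ConstantBootstraps.primitiveClass(MethodHandles.lookup(), "I", Class.class);
    }
    static TimeUnit seconds() {
        // resolves an enum constant by name, much like Enum.valueOf
        return ConstantBootstraps.enumConstant(MethodHandles.lookup(), "SECONDS", TimeUnit.class);
    }
}
```

The point of the lazy version is that the work (and the constant) simply never materializes for code paths you never execute.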
You might have noticed, if you're a fan of Scala, that one of the downsides of Scala is that it can't use weird VM features under the covers; it has to use standard bytecodes, and so one Scala file can turn into 20 different class files in your jar, each one of significant size because of all the little adapter classes that are needed. We think that with the bootstrap-method trick we can avoid that and make it much more dynamic. Panama has already been discussed, and there's some interesting information on the slide deck for you, so I'll let you look at that. But the basic idea is better JNI: you want your native code brought in and basically linked right in there with the Java code, so that you get inline calls to printf and whatever other Unix primitives you want to call. The idea that makes this work is the binder pattern, which knows how to wire up foreign calls to whatever the low-level facilities are in the JVM; and again, you don't have to pre-compile the glue, because you just let the binder do the wiring when it's needed. In essence it will give you more direct access to your code and data at the C programming level, and even at the assembly level. Speaking of the assembly level, Intel has worked with us to produce a really wonderful proof of concept called the Vector API, where basically you write a Java loop — with Java objects, Java interfaces, Java generics — and it compiles down to AVX code you would have been proud to write yourself as an assembly programmer. But it comes through Java, and it's great. And we think we can expand that story to platforms beyond Intel, including GPUs.
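I can't show real Vector API code here — at the time of this talk it's an incubating prototype whose names may still change — but here is the kind of kernel it is designed for, in plain Java, with the hypothetical vector shape sketched in a comment (those names are illustrative, not final).

```java
// A fused multiply-add over float arrays: the kind of loop the Vector API
// expresses lane-by-lane so the JIT can reliably emit AVX code for it.
final class FmaDemo {
    static float[] fma(float[] a, float[] b, float[] c) {
        float[] r = new float[a.length];
        for (int i = 0; i < a.length; i++) {
            r[i] = a[i] * b[i] + c[i];   // one scalar "lane" per iteration
        }
        return r;
        // Hypothetical incubator shape (illustrative only):
        //   var s = FloatVector.SPECIES_256;
        //   for (int i = 0; i < a.length; i += s.length()) {
        //       FloatVector.fromArray(s, a, i)
        //                  .mul(FloatVector.fromArray(s, b, i))
        //                  .add(FloatVector.fromArray(s, c, i))
        //                  .intoArray(r, i);
        //   }
    }
}
```

The explicit-lanes version makes the vectorization a guarantee of the API rather than a hope about the auto-vectorizer.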
So there are some references for Panama. Loom — awesome — fibers: we need more fibers in our diet, and fewer dinosaur threads to ride on. We want to break up the live parts of our computation into small independent bits that mount on a dinosaur thread, gallop along for a millisecond, then dismount, while millions of other fibers are also mounting on that same dinosaur thread over time. The ultimate goal is to have millions of little concurrent units on just a small number of carrier threads. Again, this requires tricky cuts in the JVM interpreter, because we need to be able to quickly mount and dismount our computations on threads. It's like a user-level scheduler for very tiny threadlets. I already explained this, so I'll go on. Ron Pressler has been working on this since last year, extending his work on Quasar, which was a user-mode library above the level of the VM. Now he's naturalizing the idea of these interruptible continuations into the inside of the JVM itself. It uses a low-level concept, continuations, on which you build a higher-level concept called fibers; but you could also build generators and other constructs in the future. Finally, our scale will get scalier — and I'm telling you, my time is almost out, but this is an easy point to make. We have new pointer-based load barriers in Project ZGC. Really cool, really low latency. And guess what? Here we are with another project, Shenandoah, that has floated itself, and it uses a different kind of read barrier. So now we have a fight of the read barriers: you can use Brooks pointers or you can use bit-tested branches. Oh no, how are we going to decide? There are too many new technologies floating around — but we can handle this; we're a community. We have some short-term tasks to work on here, besides the individual project work.
Make sure that we refactor everything that's common to the two projects into one source base, and upstream that stuff, so that when you eventually dock these two different algorithms in, it's not a huge amount of work to have one or the other — or even both — live in the same system. This enables not only coexistence but also the stealing of good ideas from each other, which is a great thing in a community. The long-term result, as Mark said, is that we want to unify low-latency GC technology, and this is a good bench for experimenting. Fibers are also a scaling technique, which is obvious to all of us; and another technique for scaling is the fast provisioning you get from AppCDS and AOT — that's for container scaling. And finally, I can't talk about racy data because I'm out of time; look at my JVM Language Summit talk from last year. I think we are in for a new way of looking at managing races in the Java memory model. So, lots of crazy stuff I can't get to — but let's go there as a community.
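One last sketch, back on the fibers trend: fibers aren't in any shipping JDK as I speak, but the scheduling shape — many tiny units of work multiplexed over a few carrier threads — can be approximated today with a plain executor. This is an analogy only (real fibers can also dismount their carrier thread when they block, which an ordinary task cannot), and the class name is mine.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Analogy for fibers: many small tasks riding a handful of "dinosaur"
// carrier threads, each mounting briefly and then dismounting.
final class CarrierDemo {
    static int runTasks(int nTasks) {
        ExecutorService carriers = Executors.newFixedThreadPool(4); // few carriers
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < nTasks; i++) {
            carriers.execute(done::incrementAndGet);  // one tiny unit of work
        }
        carriers.shutdown();                          // no new tasks accepted
        try {
            carriers.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return done.get();
    }
}
```

With real fibers, the "tasks" could block on I/O without tying up a carrier thread, which is what makes millions of concurrent units feasible.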