to have David tonight. So David actually came to Singapore one year ago. He was visiting and speaking for a conference — I think it was Oracle OpenWorld, if I'm correct. And yeah, it was pretty awesome, a really good meet-up with him, and we thought to invite him again. So today we'll be talking more about core features of Java: what to expect in the future and what we have in the newest versions of Java. So we are very happy to have David virtually today and hope to have him in person again in the future. Thank you, David. Thank you. Well, in fact I was supposed to come back, I think in June. But yeah, anyway, thanks for having me. So despite the situation I'm very happy to be able to give these remote presentations. So the title of the session is "Java and the 40 version". The thing is that, more and more, I realize that given we have accelerated the cadence of Java releases, people are somehow confused about what's in Java. So today what I want to share with you is basically what features will be added to the Java platform in 2020. Why 40? Well, I'm French-speaking and I always confuse 40 and 14. And given that we have just released version 14 of Java, that is really what I'm going to talk about today. Having said that, I will also discuss Java 15, which will come later this year, in 2020. So this is a standard disclaimer from Oracle: don't make any purchase decision based on what I will say today. Having said that, everything is open source, so we're good on that side. The only thing that you should keep in mind is that anything I say about Java 15 can in theory still change. Java 15 will be released in September, so between now and then there might be changes. That's the only thing you have to keep in mind about this disclaimer. Okay, May is a very important month because we are about to celebrate the 25th anniversary of Java. The first release of Java came out 25 years ago.
So we are just about to celebrate that anniversary. The thing is that Java has kept evolving for 25 years based on two core principles. The first one: developer productivity. The second core principle: application performance. This has been done over the last 25 years in the face of constantly evolving things such as programming paradigms. For example, 20 or 25 years ago we were not really talking about any kind of functional programming when we were talking about Java; that's something that has since become more and more important. Something else that evolved is application style. In the beginning we were mainly talking about client/server applications, then about monolithic applications; these days, obviously, it's more and more about microservices. So this is yet another evolution that Java has had to cope with. Deployment styles: in the early days we were deploying in our own data centers on large servers; these days we tend to deploy using containers in the cloud. So that's another big shift when it comes to the way we deploy our applications, and again, Java has to cope with that. And last but not least, obviously the hardware is evolving. These days, for example, we have more and more cores in our machines, we have more and more memory, we have vector support directly built into general-purpose CPUs, we have multiple levels of cache when it comes to memory, and so on and so on. So this is basically how Java has evolved for the last 25 years, and this is how it will continue to evolve. So this is a pretty busy slide; I'm not going to spend time on it. It basically lists all the features that were added in Java 9. And Java 9 was a special release in the sense that it was the last large release of Java. And there's a big issue with that: every two to three years we were releasing one Java version with a bunch of features.
So when it comes to adopting those features, it was very difficult, because developers all of a sudden had access to a whole bunch of features, and getting familiar with them was very difficult. So we decided to change the way Java evolves. This is something that we put in place at the end of 2017: every six months there is a new Java release, called a feature release. So 11, 12, 13. The current Java release is 14, released in March. And in September 2020 we're going to release 15; that's a given. Six months later, 16, and so on. Now, all those feature releases are open source and are supported until the next release comes out. So 9 was supported at least until 10 came out, and so on. That means that today, 14 will be supported at least until 15 comes out, in September; then the current release will be 15, and if you're on the open source side, that's the version you should use, because that's the version that is supported. Now, we also acknowledge that some users, typically enterprises, are not able to move that rapidly. Moving from 14 to 15 is in itself not a lot of work, given that there are not that many features between those releases, but still, there are some types of users that prefer to stick to one release for many, many years. That's why we at Oracle have decided to have long-term support releases. Basically, a long-term support release is nothing more than a given feature release that we take and maintain for a very long time. 11 is the current LTS; the next one will be 17. So those releases will be supported for many, many years, despite the fact that we will obviously still have a new feature release every six months. So basically, it provides choice. Either you use the OpenJDK builds that Oracle is providing, which are free.
The only thing is that if you are using those builds, well, you'd better keep up with the Java release cadence. So right now, ideally, you want to be on 14, because this is the release that is supported, and that's also the release that is getting the security updates. If you are not able to move that quickly, Oracle also sells support for Java: that's the Oracle JDK. So when it comes to buying Oracle support, there are two things that you should look at. First, the price of that support: honestly, the price of the Oracle support for Java is pretty cheap, but I will let you judge that. The other thing that you need to look at, when you decide where you want to get your support for Java, is whether the organization you are looking at is actually able to support you. And this slide shows the number of issues that were fixed, in this particular example, in JDK 14. And we clearly see that Oracle is the company that contributes the most to Java. So the takeaway here: Java is still free. There has been a lot of confusion regarding that, but Java is and remains free. So now let's quickly discuss how we can enable faster innovation within the platform. The first thing that we put in place two or three years ago was this release cadence, where every six months we have a new release. We also have the JEP mechanism. JEP stands for JDK Enhancement Proposal. It's basically a mechanism that we use to introduce new Java language features or new JDK features into the platform, or even to remove things from the platform. We are also using that process, for example, to evolve how the OpenJDK project is managed. So it's a lightweight mechanism that is clearly documented and tells the community how things are supposed to work when it comes to doing something non-trivial in the platform.
Next to that, we have also put in place multiple feedback mechanisms that we use to get feedback on non-final features. The thing is that whenever we put something into the platform, as soon as it's final, it's there forever; it becomes permanent. So we'd better get it right before we turn something into a permanent feature of the platform. For that, we have multiple mechanisms that we can use to give developers non-final features. We encourage developers to use those non-final features, and based on the feedback, we can still make adjustments before we make them permanent. So we have the preview features mechanism, which is used mostly for Java language features, and we have experimental features, which we use mainly for HotSpot VM features. And then we have additional mechanisms, such as incubator modules and early-access JDK builds, that we use to give access to prototypes of new capabilities that we are thinking of adding to the platform. And last but not least, we have an ongoing OpenJDK project named Skara. The goal of Skara is to investigate alternatives to Mercurial. If you know OpenJDK, you know that for many years — in fact, since the beginning — OpenJDK has used Mercurial as its source code management solution. It worked for many years, but honestly, Mercurial is a bit tough to learn. So if you want to encourage more contributions, well, we'd better look at alternatives. That was the goal of that project: look at alternatives. And the outcome is that Skara selected Git as the alternative. That means that all the OpenJDK development is moving to Git. Skara has also looked at hosted Git providers, and GitHub has been selected. But clearly Skara and OpenJDK are not tied to GitHub.
So if something goes wrong with GitHub, we can easily switch, despite the size of the project, to a different Git provider. And last but not least, Skara has also looked at how we can improve the complete development life cycle of OpenJDK by adding some additional tooling on top of Git. So a bunch of OpenJDK projects have already moved to GitHub — we have the list here: Amber, Skara, JMC, Loom, and so on — and all the rest are obviously planned to move. In fact, we plan to move the JDK itself either around, I think, the end of JDK 15 — so that would still be in 2020 — or around early 16, so still in 2020. But already, all the projects have read-only mirrors on GitHub. So basically, all those bullet points give us the ability to enable faster innovation within the platform, something that we have already used, and we clearly see the benefit of all of those tools. Sorry. So, delivering faster. We have enabled the ability to deliver faster, so let's look at what we have delivered recently. When I say we, it's really the OpenJDK community. Obviously, Oracle is a big player in that community, but it's not just Oracle, right? So, Java 10, delivered in March 2018. Those are all the features. I'm not going to spend any time on those releases, because we already have enough to cover with 14 and 15. The only thing you might notice is that there are two JEPs in a kind of yellow-orange color. Those have been delivered by someone other than Oracle: the ones in black are coming from Oracle, and the others are coming from other OpenJDK members. I believe those two are coming from Red Hat. Then we had 11, which was a pretty big release in terms of capabilities. The thing to keep in mind is that every feature release is driven by the dates — it's either March or September. They are not driven by features.
So if a feature is not ready to be included in a given feature release, well, it's not an issue: that feature will just have to wait for the next feature release. 12, March 2019; 13, September 2019 — 13 was clearly a relatively small release, but yeah. Anyway, 14, which was released two months ago, basically made up for the fact that 13 was really modest. But again, the only thing that drives those releases is the dates, not the content. So today we're going to discuss some of those JEPs that were added in Java 14. Again, we see that there are two JEPs that are not coming from Oracle: Non-Volatile Mapped Byte Buffers, which I think is coming from Red Hat, and Helpful NullPointerExceptions, which we're going to discuss later, coming from SAP. Okay, so Java 14 was released in March 2020. Everything is open source. At jdk.java.net/14 you can get the OpenJDK builds of 14, and you can also access all the technical content regarding that release. Now, quickly, Java 15: what do we know about Java 15? Well, first and foremost, we know that Java 15 will be released in September 2020. We also know the schedule: Rampdown Phase One — that's basically when we have the feature freeze — is one month from now, the 11th of June. So today, based on the information that is available in the OpenJDK project, we can already discuss what's being planned for 15. That's what I'm going to do today. Now, keep in mind that things can obviously still change. We can add things at the very last minute, or we can even drop things at the very last minute, depending on some stability issue or something else; things might still evolve. So this tooling basically gives you the content that is planned for 15. Well, it's a very interesting time for Java, because we clearly have a very rich feature pipeline. Why?
Because, well — I wouldn't say many years ago, but four or five years ago — we decided to work on multiple very ambitious, long-term R&D projects, and each of them had a goal to either fundamentally improve certain aspects of the Java platform or even revamp a given aspect of the platform. So I'm going to discuss some of those today: ZGC, Amber, Panama, Valhalla, Metropolis, Loom, and so on. So, very quickly, the first one that I want to discuss is ZGC. It's a low-latency, scalable garbage collector that we started to work on a few years ago. It was introduced as an experimental feature in Java 11. The main goal of ZGC is basically to give you the lowest latency possible. It's a concurrent GC, meaning that all the heavy lifting work of the GC is done while your Java threads are being executed. So there are a few pauses, but they are reduced to, well, the smallest time possible. We claim that the pause times should stay below 10 milliseconds with ZGC, but what we observe is that most of the time the pauses are more around two milliseconds, which as you can see is very low. It's scalable in the sense that the pause times will not increase as you grow your heap or your live set. So the 10-millisecond pause time is something that you would get typically on a one-gigabyte heap, but also on a one-terabyte heap; there's no change on that front. ZGC in the early days was designed for large heaps, multi-terabyte heaps, but it turns out that there are use cases where it also makes sense to use ZGC with smaller heaps. So one of the features that we added recently is support for heaps of a few megabytes; I think the lowest we can go is eight megabytes. So how do you use ZGC today? I mentioned that ZGC was added as an experimental feature in Java 11, so you need to explicitly unlock ZGC.
So there is a specific HotSpot flag, -XX:+UnlockExperimentalVMOptions, to unlock any experimental feature of the VM. So you do that, and then you use the specific -XX:+UseZGC flag to enable ZGC — basically to tell the VM to switch from G1, which is the default GC, to ZGC. And there you go. Then the thing that you might want to do is tune ZGC. To tune ZGC, the only thing that you really need to do is set the heap size. One of the design goals of ZGC was to provide a default behavior that avoids any tuning. Obviously, you still have the ability to do more tuning than just setting the heap size, but by default, ZGC should give you good results with just the heap size set. So what is the history behind ZGC? ZGC was initially introduced in 11 as an experimental feature on Linux. We added additional capabilities in 12. In 13, ZGC support was added for ARM64. And finally, in 14 — the release that we did two months ago — we added support for macOS and Windows. And the plan is to make ZGC a production feature, so basically we are removing that experimental flag from the feature in JDK 15, so this year. So this is a picture that I took earlier this year in Sweden, and on stage was Monica Beckwith from Microsoft. And well, that's her claim, not mine: ZGC shines when it comes to responsiveness. I encourage you to check her presentation, which is now online, where she goes over the benefits of ZGC. Now let's quickly talk about G1. G1 is the default GC. Obviously, we have made a lot of investment in ZGC, but that doesn't mean we are not looking at improving G1. For example, in 14 we added support for NUMA. NUMA stands for non-uniform memory access; basically, that means that the distance between the memory and the cores is not always equal.
So from one core, accessing a given piece of memory might be more expensive because it is more distant than it would be when accessed from a different core. Parallel GC has been NUMA-aware for a long time, and in 14 we added NUMA support for G1. And that's not all. If we look at the number of enhancements that we have made in and around G1 since JDK 8, it's over 700 enhancements that together greatly improve G1. So the chart below shows, for example, the native memory overhead caused by G1 for a heap size of 16 gigabytes. And what we can see is that with JDK 8, the extra native memory was around four gigabytes — so 8 needed an additional four gigabytes to GC that large heap. In JDK 11, it was reduced to, I think, 2.7 gigabytes. And in 14, it has been reduced to 1.7 gigabytes. So you see that by switching from 8 to 14, we have greatly reduced the memory footprint of G1. Not only that, we also improved the performance. Basically, when you put them all together, those 700 enhancements improve G1 a lot across all areas: throughput, footprint, latency, and so on. So that's something that you need to consider: if those GC characteristics are important to you, you need to think about moving to a newer version of the platform. This is another chart that shows some of the enhancements that have been made to G1. Let's see. This is using the standard SPECjbb benchmark, with a fixed heap set to four gigabytes. All the results are normalized, and higher is better. So we have max-jOPS and critical-jOPS. Both look at throughput; the thing is that critical-jOPS, in addition to throughput, also looks at latencies. So we can see that the performance of Parallel GC has increased.
So the performance of Parallel GC has increased between 8 and 14, but we can also see that there is a huge boost in terms of latency improvement when it comes to G1 in 14. Having said that, if we look at the next slide, we see in this particular case — this is with a heap of 16 gigabytes — that there had been a regression since 8. Well, I don't have the result of G1 here, but we can clearly see that there is a drop; we had a regression, basically. I don't remember the exact issue — if you want to know more about the given bug, you just need to check the link at the bottom of the slide — but that issue has been solved. So we can clearly see that G1 in 15 will improve the latencies. We see that there is a slight drop in terms of throughput, only 97% versus 100%. Well, it's basically a small tradeoff, but when we see the benefit that it gives in terms of latency improvement, I think it's fair to say that it's okay to pay that small price. So, quickly, startup time is something that we always look to improve in every Java release, and obviously 14 is not an exception. We can see in this case that for a given small application — basically a Hello World application — there is a small improvement in startup time. Obviously, the faster we get, the more difficult it is to find large improvements. But still, between 13 and 14, the startup time has been improved, and I can already tell you that between 14 and 15 it will still slightly improve. Now, this is basically the same benchmark but with different scenarios: a Hello World application, a Hello World application that uses a lambda expression, and then a Hello World application that uses string concatenation. And again, you see that across all the releases, we are improving the startup time for those different scenarios.
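As a quick aside on the GC discussion above: you can check from inside a running application which collector flags such as -XX:+UseZGC actually enabled. This is a small sketch using the standard java.lang.management API; the exact bean names it prints (for example "G1 Young Generation" or ZGC-specific names) vary per collector and JVM, so treat the printed values as illustrative.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;
import java.util.stream.Collectors;

// Prints the garbage collectors the running JVM has actually enabled.
// Useful for confirming that a flag such as -XX:+UseZGC took effect.
public class GcInfo {

    public static List<String> collectorNames() {
        return ManagementFactory.getGarbageCollectorMXBeans().stream()
                .map(GarbageCollectorMXBean::getName)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // With the default collector this typically lists the G1 beans,
        // e.g. "G1 Young Generation" and "G1 Old Generation".
        collectorNames().forEach(System.out::println);
    }
}
```

Running the same class with different -XX:+Use... flags is an easy way to compare what each collector registers.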
So that's something that, again, pays off over time, whenever you switch to a newer version of Java. So now let's talk about the next one. We have discussed ZGC, which is one of those ambitious projects — basically adding a new garbage collector that provides low latency. Another one that deals with memory is Project Valhalla, and that is clearly a very ambitious project. The goal of Valhalla is basically to reboot the relationship that the JVM has with data in memory. If you know Java, you know that Java is very good at optimizing code; we have, for example, a JIT compiler that will improve your code over time as it runs. So on that side, we're good. But the next step is really to look at how we can optimize data in memory. Now, there's an issue. We have the Java type system, something that is obviously very powerful, but there is a price to pay, and sometimes we miss a bit of flexibility. That's basically due to the fact that each object has an identity. It's something that is obviously needed — we're not going to get rid of object identity; it enables mutability, polymorphism, and so on. On the other hand, there are some use cases where objects might not need identity, but today they still have to pay the price for that feature, even though they might not benefit from it. So basically, Project Valhalla is looking at how we can improve the density of information in memory. And the thing that the team is looking at is how we can declaratively say: okay, for that type of object, I don't need the object to have an identity. It's not something that can be done automatically, so it will involve some help from the developers: the developer will have to specifically say, okay, for that type of object, I don't need identity. And then the VM will be able to improve how those objects are stored in memory.
And the VM will be able to increase the in-memory density for those types of objects. Another project is Project Loom. If you look today — well, I will simplify a little bit — there are two types of programming approach. You have the traditional blocking approach, which is very easy to program and develop with, and also very easy to debug. The thing is that that approach doesn't scale: as soon as your code blocks, your code is waiting for something to happen, so you are blocking resources. That's not very efficient. On the other hand, you can go for a model that is more geared towards a reactive approach. The thing is that developing reactive applications is, on the one hand, a very difficult model to program with, and more importantly, the code you write is very difficult to debug and hence to maintain. Typically, if you try to debug a reactive application, you see that you have an issue here, but in fact the issue is not really happening in that region of the application; it's happening somewhere else. It's basically very difficult to correlate an issue with where it happened within the flow of the code. But still, if you want to scale, if you want efficient use of resources, you need to go toward that approach. So Project Loom is basically trying to solve that by making concurrency simple again. Right now, the JVM is using native threads, kernel threads. Loom introduces the notion of virtual threads, which are basically threads that are managed by the JVM — some kind of software threads that are scheduled and handled by the JVM. And obviously, the JVM will have to do the mapping between those virtual threads and some underlying kernel carrier threads, but that is handled by the JVM.
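To make that model concrete, here is a minimal sketch using the virtual-thread API as it eventually stabilized (Thread.startVirtualThread, which became final in JDK 21) — at the time of this talk, the equivalent was only available in Loom early-access builds, so treat this as an illustration of the idea rather than the API discussed on stage.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of Loom's model: each task blocks, but blocking only parks the
// virtual thread; the underlying carrier (kernel) thread stays free.
public class VirtualThreadsSketch {

    public static int runBlockingTasks(int count) {
        AtomicInteger done = new AtomicInteger();
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            // Starting a virtual thread is cheap, so "one thread per task"
            // is fine even for tens of thousands of tasks.
            threads.add(Thread.startVirtualThread(() -> {
                try {
                    Thread.sleep(10); // blocking call: parks only the virtual thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.incrementAndGet();
            }));
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runBlockingTasks(10_000));
    }
}
```

With platform threads, ten thousand simultaneously blocked threads would be a serious resource problem; with virtual threads it is unremarkable — which is exactly the point the talk makes next.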
And the thing is that those virtual threads are very, very cheap. So it's not really an issue to write code that blocks, because the virtual thread that is blocking is not blocking an actual underlying physical thread. Basically, you can write applications using virtual threads, your code can block, but you don't pay the price of resource underutilization. Those threads are so cheap that you don't even have to pool them: you just block the thread, and you start a new thread. It's not an issue, because those threads are very cheap. So that's basically what Loom is trying to solve. These days, we have multiple early-access builds of Loom. If we look in the platform, there are already a few JEPs that have been added for Loom. More specifically, in JDK 14 we re-implemented the legacy Socket API. That was done in preparation for something that is coming in JDK 15, which is re-implementing the legacy DatagramSocket API — also done in preparation for Loom. Another large project is Panama. The goal of Panama is basically to enrich the interaction between the Java virtual machine and foreign, native code. Historically, we had JNI for that — the Java Native Interface — but JNI was, I would say, specifically designed in the early days to be not friendly to use. We want to basically provide an alternative to JNI where it is easy, safe, and efficient to use native code from Java. If we look at the deliverables of Panama, there are three main ones. The foreign-memory access API, which is an incubator module in 14 — so that's something you can use already today. It basically allows you to efficiently and safely use memory that is not on the Java heap; you can access that memory from Java code. Then there is the extraction part: basically the ability to extract interfaces from native C header files and generate bindings that you can use directly from Java code.
There are two parts to the extraction story. There is a tool that mechanically does the extraction, but there's also an API that a more advanced developer can use for more advanced scenarios. And last but not least, there's the Vector API, which allows you to easily express vector computations that will be compiled at runtime and executed on CPUs that support vector extensions, such as SSE or AVX on x86, or the Arm Scalable Vector Extension. The Vector API is right now a candidate incubator module, so we don't know yet in which JDK release it will be added. So today, what we have for Panama is the foreign-memory access API in incubator — that's something you can use — and we also have early-access builds for the extraction part. I also put this in the Panama section, even though this specific JEP is not really part of Panama: given that it's very close to the hardware, well, I put it here. So JEP 352 basically adds the ability to manage non-volatile memory via byte buffers. That's something very specific, so I'm not going to discuss it any further. So what I'm going to do now is a very small Panama demo. It takes a bit of time to switch. So let's see. I hope that it's big enough. What I have here is a very simple Java application, but first I'm going to go... let's see, where is it? No. So I'm on macOS. So, as on any Unix... no, I don't want to do that. I forgot the name, sorry, let me check. Okay, this — oh, sorry, too small. No, it's readline. So this is a header file that is part of macOS. This is the readline library, which basically gives us readline support, something very basic. There are multiple functions, and what I want to do here is use one of the readline functions from Java. So the first thing that I need to do is use the extraction tool, jextract, to parse the readline header file, extract all the information, and then generate the binding interfaces for it. So I'm going to use this jextract tool.
So, I specify here the path of the library — the readline library — and the path of the header file that we want to extract. And I want the outcome to go into that jar file. Obviously, I'm not on the right version. So let's see: I'm on 14, and I need to switch to a specific JDK build that supports Panama. And it's 14, I think — oops, Panama, yes. So let's invoke the extraction again. Okay, a bunch of warnings, but this is an early-access build. Okay, now I have this jar that has been generated. So what I want to do is basically... I have this small Java application. We can very quickly go over the code of that application. A scope is something that is provided by the foreign-memory API of Panama, and scopes are used to enforce liveness checks on scoped resources. It's basically some sort of memory that we will allocate on the other side — on the native side — and we need to enforce liveness because whatever has been allocated on the other side of the fence, on the native side, obviously at some point in time needs to be deallocated. That's why we need scopes. So I create a scope, then from that scope I allocate a string. I pass in the string name — so we're on the C side. And then I'm just invoking that function that is coming from the readline library, using this pointer that is defined here. So basically I'm just passing this string to the native function. And what we get in return is a p object, which is a pointer. Then I'm just displaying that object — this is a toString method invocation on the pointer object — and then I use this static method to get the content behind that pointer. So let's compile that. Let's see: a class path, I specify the jar, readline, and then the source. Okay, now I need to run that guy. So, name — this is basically this line here, where we invoke readline.
So this is the macOS readline function that is invoked. I pass it something: test. And then the result is the following. So the type is the pointer type — this is this line, in fact, the toString on that p object, which is a pointer. And then this is all the information that we get via the toString. And the last thing is this test, which is where we ask the foreign-memory API to give us the content that is pointed at on the other side of the fence by that given pointer. So this is, in a nutshell, how Panama works. If you have any question, you can use the chat, or we can discuss at the end — it's up to you. So, moving on, another big project is Project Amber. The goal of Amber is basically to continuously improve developer productivity through the evolution of the Java language. It's not something that happens in one single release; it's something that we started around Java 10, and since Java 10 we have added new features which basically emerged from Project Amber. Var is a big one: local variable type inference was added in 10. In 14, we're adding switch expressions — they become a standard feature after two rounds of preview. We're doing another round of preview for text blocks. We are introducing records, and we're also introducing pattern matching for instanceof, in preview, too. So I'm going to discuss most of those features. The first one is records. Records basically give you... so, they provide a very compact syntax for declaring classes which are plain data holders — basically a tuple, an immutable tuple, in which you can store data and that you can pass around. I'm going to show you how it works. You will see that it's something that is on one hand very simple, but on the other hand very efficient. Something else that is coming from Amber is text blocks. Text blocks are basically multi-line string literals.
So take the following example. You want to store the HTML that we have on the left side. Typically you would do something like this in your code. With text blocks, you can now do that instead. You basically keep the syntax as it is; you don't have to escape anything. This is very convenient for dealing with XML, JSON, YAML, SQL, and so on. But again, that's something we will see in a demo in a minute. Then we have pattern matching. That's something we will see in the demo; it will be clearer there. And finally we have the new switch expression. So let's go to the demo, that will be more concrete. So let's see. Well, IntelliJ has already started. So I'm going to very quickly create a new project. I need to configure my compiler because, I don't know why, but by default it generates Java 5 bytecode. And then I also need to configure my project. And I'm going to increase the font for the code, don't worry. So here, given that I'm going to use, oh, sorry, it's not here, it's here. Given that I'm going to use some preview features, I need to specifically tell IntelliJ that I want to use preview features, because preview features are not enabled by default. And I also need to do it, let's see, here. So again, I'm going to use 14 with preview features. Okay. So let's just run that and see if it works. Yeah, it works. So the first thing that I'm going to show you is records. So let's create a record. Let's say that we want to create a record for a person; a person has a last name and a person has a first name. And that's it. It's all we have to do. Now what I can do in my code is the following, for example. So I can create a record, and today's speaker is Delabassee, David. Oops, oops, sorry. It's obviously not a record, but a person. Okay, so now I have the speaker object that I can use. So for example, a speaker. So let's run that. So this is the result. This is basically the toString method that is invoked on that speaker object.
So let's have a look at what we have here. In the target classes folder, we should have two classes. We have the test class and we have this Person class. So this is our record. If we look at the record itself, just from this simple declaration, we see that a few methods have been generated. The class Person is final. It extends java.lang.Record. It has a few methods: it has a constructor, it has a toString, it has a default hashCode method, it has an equals, and then it has two accessors, last and first. So that means that I can, for example, invoke one of those methods, and you see that this time we have Delabassee instead of the toString output. What we can do, if we look at the record itself, obviously this is the default behavior; I haven't specified anything. I can define my own constructor. So for Person, I can say, for example, let's see, that we want to uppercase the last name, right? So let's run that. You see that, oops, Delabassee is now in uppercase. And you see that the only thing I had to do was specify what needs to be done with one of the fields. I didn't have to specify anything for the first field, so it keeps the default behavior. But now if I'm doing something like this, for example: if last is blank, then this.last equals "anonymous". Here the compiler will complain, because last might not have been initialized. If we go through this branch, of course, last is initialized, but that means that if we have an else branch, it won't be; so if the last name is not blank, it won't be initialized. To solve that, we basically need to add, in this case, an else: else this.last equals last. Now we have a second branch for the default case, and it works. So if I pass a blank, let's run that. And you see that this time, oops, sorry, the first name is "anonymous" because it is blank. So that's basically how a record works.
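The record walkthrough above can be condensed into one compilable sketch; the names and the blank-name handling are assumptions reconstructed from the demo:

```java
// Sketch of the Person record from the demo
public class RecordDemo {
    record Person(String first, String last) {
        // Compact constructor: runs before the fields are assigned,
        // so we can normalize or validate the components
        Person {
            if (last.isBlank()) {
                last = "anonymous";
            } else {
                last = last.toUpperCase();
            }
        }
    }

    public static void main(String[] args) {
        Person speaker = new Person("David", "Delabassee");
        System.out.println(speaker);        // generated toString
        System.out.println(speaker.last()); // generated accessor
    }
}
```

Note that the compact constructor has no parameter list: it operates on the record components and assigns all fields implicitly at the end, which is why the compiler can flag a branch that leaves a component unhandled in an explicit canonical constructor.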
The next thing: text blocks. So let's see; for that, I'm gonna, yes. So this is the POM that has been generated. It's just some XML; it's the POM from my project. And I want to create a string for that. So for that, I'm going to use a text block. I use the triple quotes, and that's all it takes. Now I can do: pom, sout. If I run that, well, you see that the text block is correct, including the indentation, because here I basically copied the XML as it was. What I've done here is I kept the indentation, but obviously we probably don't want all those spaces in front. That's something that text blocks handle for us. Something else that we can do with text blocks is, for example, the following. They are not able to evaluate expressions, but, well, let's just do that. So I added a person test here and now I can use formatted, and let's see. What I want here is the speaker's first name, for example. So let's run that. Yep, sorry, typo, it's a method. So if we look at the output, we see that here, well, it's not class, it's the first name, but you get the idea. So basically we can do some kind of cheap expression interpolation using formatted with text blocks. That's basically how text blocks work. Now let's move on to another feature of Amber, and that's pattern matching. So pattern matching. You see we have this speaker object, which in fact is a Person. That means that we can do something like speaker.first(), right? So we would get David. But it might happen that the type of that speaker object is Object. In that case, obviously, we cannot directly invoke the first method. So what we would typically do is something like this: if speaker instanceof Person, so if it's a Person, then we create a new object. It's a Person object, x, equal to speaker, but we have to specifically cast it to the Person type. And here, obviously, we want to use the x object.
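The text-block part of the demo, as a compilable sketch (the XML content is a made-up stand-in for the POM shown on screen):

```java
public class TextBlockDemo {
    public static void main(String[] args) {
        String first = "David"; // stand-in for speaker.first()
        // Triple quotes open a text block; the incidental indentation
        // shared by all lines is stripped automatically, and nothing
        // inside needs escaping
        String xml = """
                <project>
                    <name>%s</name>
                </project>
                """.formatted(first);
        System.out.print(xml);
    }
}
```

`formatted` is just `String.format` as an instance method, which is what makes the "cheap interpolation" trick convenient at the end of a text block.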
So that works. The thing is that here, you see, we have this type, we create an object of that same type, and we cast from the other object using that same type: there is a lot of repetition going on. What we can do now with pattern matching for instanceof in JDK 14 is the following. We declare the new variable here, and that's it. If you run this, we get the exact same behavior. So hello, David, this is the first one; and hello, David, this is the second one. And you see that this one is more succinct, so it's nicer to use. The last one, and I'm probably going to skip the demo for that one because it takes a bit of time, is the switch expression. It's a new type of switch that works like this. So this is on the left side; this is how switch traditionally works. The thing is that, for example, here there's a bug, in the sense that there is no break here. So if I do a switch on, let's say, Friday, well, the number of characters will be six and then it will be seven, so the end result would be seven. In this case, I'm doing a switch over an enumeration, so all the values are known, because it's an enumeration. But still, if I miss, for example, one of the days, the compiler will not be able to tell me: okay, you are not evaluating one of the cases. That's why we have a default. But given that we know all the values, it's a bit unfortunate to have to use a default. So what we can do, oh, there's a question. Will we be able to use records as JPA entities? Not directly. Well, you can use records with JPA, but the thing that you have to keep in mind is that records are immutable, so you cannot change any field once the record has been created. So let's go back to the switch expression. This is the new switch expression. It basically returns a value directly, something we were not able to do with the former switch statement.
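The day-of-week example described above can be sketched as a switch expression; the letter counts are assumptions matching the classic JEP 361 example:

```java
public class SwitchDemo {
    enum Day { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

    static int letters(Day day) {
        // Arrow labels never fall through, and the expression must be
        // exhaustive: cover every enum constant or the compiler complains,
        // so no default branch is needed here
        return switch (day) {
            case MONDAY, FRIDAY, SUNDAY -> 6;
            case TUESDAY                -> 7;
            case THURSDAY, SATURDAY     -> 8;
            case WEDNESDAY              -> 9;
        };
    }

    public static void main(String[] args) {
        System.out.println(letters(Day.FRIDAY));
    }
}
```

With the old statement form, a missing `break` after the FRIDAY case would have fallen through to the next label and produced 7, which is exactly the bug the talk points out.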
So that's why, on the left side, this is the switch statement, and this is an expression, because it returns a value. So we have all the cases. Given that, again, we're switching on an enumeration, the compiler will tell us if, for example, we're missing a day. If we're missing a day, either we add this day or we have to deal with a default value. If we're dealing with all the days and we also have a default, the default will never be reached. That's something the compiler can also infer, that the default branch is basically a dead branch, those kinds of things. So that's, in a nutshell, the switch expression. I'm going to move along a little bit because I'm a bit ahead of time. Oh, sorry, my bad, I switched the slide; I shouldn't have moved out of the slides. So the Amber demo: we have seen the Amber demo. Those are some of the more ambitious, long-term projects that we're working on. What we see is that gradually some of the features emerging from those projects are added to Java. We've seen the foreign memory API coming from Panama, for example, and we've seen how Amber has added multiple features to the Java platform since Java 10. But obviously the Java platform is not all about huge and ambitious projects. In 14, we have the new helpful NullPointerException, JEP 358. We have all seen that kind of code where we basically get a NullPointerException. It's not a big deal, because we know where the NullPointerException happened, line 666. So we just have to look at that line, right? The thing is, that line might look like the following. We know that it happened here, but we really have no idea where exactly. That's what the helpful NullPointerException option gives us. It's something that you have to explicitly enable in 14, and now you will have something like this: it cannot invoke City.getDistrict because the return value of getCity is null.
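A small sketch of the kind of chained call the talk describes (the City/Address names are assumptions based on the slide's message); on JDK 15 and later the detailed message is on by default, on 14 it needs `-XX:+ShowCodeDetailsInExceptionMessages`:

```java
public class NpeDemo {
    record District(String name) {}
    record City(District district) {}
    record Address(City city) {}

    static City getCity(Address a) { return a.city(); }

    public static void main(String[] args) {
        Address a = new Address(null); // city is null
        try {
            // Which part of the chain is null? The plain line number
            // can't tell us, the helpful message can.
            System.out.println(getCity(a).district().name());
        } catch (NullPointerException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The printed message names the exact method whose return value was null, rather than just the line of the whole chained expression.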
So it basically gives us more information to help us pinpoint the exact issue that raised that NullPointerException. That's available as a standard feature in 14. It's not enabled by default, and I've seen some discussion that in 15 it might be enabled by default. It's a small feature, but a very convenient one. JDK Flight Recorder is something that has been available, I think, since Java 11. The basic idea of JDK Flight Recorder, JFR, is that it's a black box that keeps track of events emitted by different components within the JDK itself. It can be the JVM, it can be your code. So a bunch of events are raised and JFR keeps track of them, and that's something you can use after the fact to do some kind of analysis. The thing is that JFR has very low overhead, so it's something you can use in production to basically detect and pinpoint specific issues. Something that we're adding in JDK 14 is event streaming. Until now, the way you used JFR was the following: you start the recording of your application, you use your application, then you stop the recording, you dump the content of the events to a repository, and then you process those events. Now, an application that is running has the ability to stream out events as they happen. So you can have some sort of sidecar application doing analysis of the events as they happen. There is a specific API to do that in JDK 14. And obviously we're improving JFR with every release. In 14, we had 145 different event types for JFR. In 15, I've checked — well, it's not the latest build, because build 22 was done yesterday, but in the build of last week — we had 157 JFR event types. So JFR itself keeps gaining more metrics within the platform that you can use. And not only that: you can also write your own custom event types for your application. Something else that is part of JDK 14 is the new packaging tool, JEP 343.
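Before moving on to the packaging tool, here is the event-streaming idea above as a minimal sketch: a custom event plus an in-process `RecordingStream` consumer playing the sidecar role. The event name and message are made up for illustration:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;
import jdk.jfr.consumer.RecordingStream;

public class JfrStreamingDemo {
    @Name("demo.Hello")      // hypothetical custom event type
    @Label("Hello Event")
    static class HelloEvent extends Event {
        @Label("Message")
        String message;
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch seen = new CountDownLatch(1);
        try (RecordingStream rs = new RecordingStream()) {
            // React to events as they are flushed, instead of dumping
            // a recording to disk and processing it after the fact
            rs.onEvent("demo.Hello", ev -> {
                System.out.println("streamed: " + ev.getString("message"));
                seen.countDown();
            });
            rs.startAsync();

            HelloEvent e = new HelloEvent();
            e.message = "hi from the meetup";
            e.commit(); // delivered to the consumer at the next flush

            seen.await(10, TimeUnit.SECONDS);
        }
    }
}
```

In a real deployment the consumer would typically run out of process against the repository, but the in-process form shows the same JDK 14 API (JEP 349).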
Right now, it's still in an incubation phase. The idea is that it's a tool that gives you the ability to create a native installer specific to a given platform. So on Windows, you will have either an MSI or an EXE file. On macOS, you will have a package file or a DMG, and so on and so on. It has a lot of additional, I would say, native features, such as the ability to pass parameters to the native executable that will do the installation of your Java application. It obviously works with jlink and so on. So that's something you can use today to basically create native installers for your Java code. Now let's quickly look at Java 15. That's the schedule; nothing will change on that front. This is a table that I did — well, I did it yesterday, and I checked this morning that it was still up to date. Those are the multiple JEPs that will be part of 15. I'm going to discuss two JEPs specifically, that is, hidden classes, and where is the other one? Sealed types, sealed classes, it's not here. Yeah, I don't, oh, yeah, I missed one. Well, anyway, it's on my slides later on. Oh, yeah, it's here, sorry, that's it. The other one is sealed classes. So this slide shows the JEPs that are either integrated, so already in the early builds of JDK 15, or they are targeted. Sorry — targeted means that we intend to add them to 15, or we propose to target them. That's basically the first step before we integrate them: we tell the community, okay, we want to add that to 15, is there any objection? If not, it moves to targeted, and then when the work is done, it moves from targeted to integrated. So basically, there is a very high chance that all the JEPs here will be part of JDK 15. Just keep in mind the small disclaimer that we might find a big issue in one of these JEPs and decide to remove it at the very last minute. There's always a risk.
And then there are a few other JEPs that are currently being worked on, and we don't know yet if they will be part of 15. One is sealed classes. Well, there are a bunch of JEPs, but I just took three where clearly there is a lot of activity going on right now. So maybe they will be part of 15; we will see in a few weeks, and at worst, by early June, we will know for sure whether they are part of 15 or not. So: sealed classes; the foreign memory access API, second incubator, which is part of Project Panama; and then the Vector API, which is also part of Project Panama. So I'm going to very quickly discuss hidden classes and sealed classes, which are often confused. A hidden class is something that has been done specifically for framework developers. If we look at frameworks, they have this habit of dynamically generating classes and using those classes through reflection. The thing is that those classes, given that they are generated, can potentially be used by other, external bytecode, something that we clearly don't want. So hidden classes now give framework developers the ability to still dynamically generate classes on the fly, but those classes are hidden from the rest of the world. Only the framework that generates those classes will be able to use them. So it targets framework developers. But if we look at the JDK itself, the same technique is used, for example, for lambda expressions. So it's something that is also useful for Java itself. And one of the things that will be done, in addition to creating this new facility, is deprecating sun.misc.Unsafe::defineAnonymousClass, which is used exactly for that: dynamically generating classes. So if we look at the properties of those classes: in terms of discoverability, they shouldn't be discoverable by classes outside of the class that created them.
In terms of lifecycle, those classes should be able to be aggressively unloaded, to give frameworks the ability to generate a lot of classes; as soon as those classes are not needed, they can be garbage collected. They can also be collected through a more traditional approach — the GC behavior is configurable for hidden classes — but what we know for sure is that aggressive unloading is something that is needed. And then access control: that gives a class that dynamically creates another class the ability to access it, while preventing other, external classes from accessing that newly created class. So that's, in a nutshell, hidden classes. It's not something that most developers like you and me will use; it's clearly something for framework developers. And then there are sealed classes, and some people tend to confuse the two. They're completely different. Before we talk about sealed classes, we need to quickly talk about inheritance. Inheritance is something that encourages code reuse. We basically have a class hierarchy, and a class that extends another class can reuse features from the class it extends. Now, the thing is that the class hierarchy is often used for code reuse, but sometimes it's used for something completely different: the class hierarchy is sometimes also used to model the different possibilities of a given domain. For example, we want to model the different shapes that are supported by a graphics application. We would have a Shape class or interface, and then subclasses that extend that Shape class: a Square extends Shape, a Hexagon extends Shape, and so on. Or we can have a class that represents the types of vehicles that we sell, and then we would have, I don't know, an SUV, a coupe, a sedan, and so on, extending that superclass.
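A sketch of such a closed shape hierarchy, using the sealed syntax as it was eventually finalized in Java 17 (the shape names follow the talk; the area logic is made up for illustration):

```java
public class SealedSketch {
    // Closed hierarchy: only the permitted classes may implement Shape;
    // any other class trying to do so is rejected at compile time
    sealed interface Shape permits Circle, Rectangle, Square {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double width, double height) implements Shape {}
    record Square(double side) implements Shape {}

    static double area(Shape s) {
        // Pattern matching for instanceof keeps the dispatch compact;
        // because Shape is sealed, these three cases cover everything
        if (s instanceof Circle c) return Math.PI * c.radius() * c.radius();
        if (s instanceof Rectangle r) return r.width() * r.height();
        Square q = (Square) s;
        return q.side() * q.side();
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3)));
    }
}
```

Because all the subtypes live in the same source file here, the `permits` clause could even be omitted, but it is spelled out to show the keyword.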
So that's a use that is completely different. And the problem is that right now, if you have this Shape superclass that is extended by Triangle, Circle, Hexagon, and so on, you cannot prevent any other class from also extending it. That's something we would like to address with sealed classes. Sealed classes basically allow you to have a class hierarchy that is bounded: the hierarchy can only be extended by classes within that limited, closed class hierarchy, and not by any external class. And obviously code reuse would still be possible, but only within the boundary of the closed class hierarchy. So how does that work? Let's have a look at an example; that will be more concrete. I have this Shape superclass and we have two new keywords. We have sealed, which is used to basically say, okay, the class Shape is sealed. And then we have the permits keyword, which tells us which classes or interfaces can extend that superclass. So Shape is sealed, and in this example only Circle, Rectangle, and Square are allowed to extend Shape. And then there are a few variations, like the fact that if you are within the same package or the same module, you can just use the class name; you don't have to use the fully qualified name. If you have nested classes, so all the classes within a single source file, you just seal the superclass, and by default all the classes present in the same source file are permitted to extend that superclass. So that's basically how it works. Now, it's not clear today whether sealed classes will be part of JDK 15. If not, that's not really an issue; it just means we'll have to wait another six months, and it will be in JDK 16. So I think it's time to wrap up. Today we've discussed what new features will be added to Java in 2020. JDK 14 was released two months ago, and JDK 15 will be released four months from now.
I've also discussed very quickly how those large, ambitious, long-term projects are gradually adding capabilities to the platform, with the ultimate goal of revamping the platform completely. Now, I need to quickly mention another new project that we announced just two weeks ago, and that is Leyden. Leyden basically tries to tackle some of the pain points of Java, namely slow startup time, slow time-to-peak-performance, and the large footprint of Java applications. And everything is relative: when I say slow, it's slow compared to, typically, native Go applications, for example. Footprint, again, is relative to native applications. So Leyden tries to tackle those pain points by introducing the concept of a static image to Java, something similar to a GraalVM native image. Leyden aims to leverage existing components of the JDK, such as HotSpot; the AOT compiler that we have had in experimental form since JDK 9; but also CDS and jlink. Clearly, these are just the early days of Leyden. We have just started to gather interest around the project, but we think that over time it will be important. So basically, we are trying to bring capabilities similar to what GraalVM native image offers directly into the Java platform. These slides depict the content of JDK 14. I think we have discussed most of the JEPs that we have here. The only thing I need to mention, and I think I mentioned it at the beginning, is that JEPs are also used to remove older things from the platform. You see that, for example, JEPs 362 and 363 are deprecating or removing things from the platform. When we remove something from the platform, we first deprecate it, to tell the world that the feature will be removed in the future, and then later on it is actually removed. So it's a phased removal approach. And JDK 14 is available.
So you should download it right now and give it a try. In terms of conclusion: Java is still free. We have put in place everything to deliver faster, we are delivering faster, and we have the richest feature pipeline for the Java platform ever. Also keep in mind that this year marks the 25th anniversary of Java, and given the pipeline that we have, we can clearly say there's a bright future for Java developers. And with that, I'd like to thank you for your time, and I don't know if we still have time for questions. Probably we have time for one question; someone has a question. You can also ask questions on Twitter — there's my Twitter handle there. So if you have any question and you don't feel like asking now, you can also ping me on Twitter. And thanks, Rini. — I have a question. Yeah, so I just had a question regarding Project Valhalla. The last thing I was looking into is the problem of type erasure in generics. That's a more complicated problem in the project. So what is the state of it? Has that mountain been climbed, and when is it expected to be delivered? — So you know Brian Goetz, the Java language architect. Brian is the architect of the Java platform, and whenever someone asks when it will be delivered, his answer is pretty easy: it will be available when it's ready. The fact that, well, it's not there yet means that things are still being worked on. So I cannot give you any more precise answer than that. But yeah, you're right, Valhalla is really a fundamental change to the platform. The nice thing is that with the new release cadence, we don't have the issue that we had in the past, where we basically had a time window to add new features to the platform every three years; if we missed that window, we had to wait another three years to get a feature into the platform.
These days we can gradually add features every six months, so we can expect future releases to include some features emerging from Valhalla. But I can't give you any more precise answer than that, I'm sorry. — Thank you, David, thanks a lot for your presentation. — Thank you, thank you, thank you for your time. — Okay, thanks everyone for joining. See you next time. Thank you. Thank you, bye bye.