Okay, thank you everyone for joining us. Welcome to today's CNCF live webinar, GraalVM Native Image: low-footprint Java in the cloud. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Chris Foster, principal product manager at Oracle, and Eli Schilling, head of developer relations content at Oracle. A few housekeeping items before we get started. During the webinar, you're not able to speak as an attendee. There's a chat box on the right-hand side of your screen; please feel free to drop your questions there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under online programs. They are also available via the registration link you used today, and the recording is available on our online programs YouTube playlist. With that, I will hand it over to Chris and Eli to kick things off and take us through. Yeah, thank you very much. Hi, my name is Chris Foster. I work at Oracle Labs on the GraalVM team, in product management, so I create a lot of content and do a lot of talks about GraalVM Native Image. It's a technology some of you may have heard of, but hopefully in this session we'll go into what GraalVM Native Image is and what some of its benefits are. Okay, so what is GraalVM and why should I care? This is going to be a mix of slides and some live stuff: some live coding and some deploying things to the cloud. So I'm hoping there won't be too many mess-ups along the way, but bear with me if there are. So what is GraalVM? It's a number of things. It's a Java runtime: we have JDK 11 and JDK 17.
You could just think of it as a drop-in replacement for whatever Java runtime you're using at the moment. And the benefit of it, certainly in the Enterprise Edition, is that we have an entirely new JIT compiler. It's written entirely in Java, and it's very fast and very efficient. It can make your application significantly faster if you're using the paid enterprise version. We're not going to talk about that today. Another feature, which is the one you may have heard the most about, is Native Image. This is where we take your application and generate a native executable from it. And by native executable, I mean a binary that will run natively on whatever platform you're building on. So if you're using a Mac, you end up with a Mac binary executable file: a single file that you run that has no dependencies on the JVM. So it's very easy to distribute, and it's ideal for containers. If you're building on Linux and you package this binary up in a container, you can get a very small Docker container, for instance, with your application in it. That's going to be the main focus of this talk, but we'll cover some of the other things that GraalVM is before we move on to that. So it's multi-language. We've added an API to the Java runtime, to the JDK, that lets you run many languages on top of the GraalVM Java runtime. You can currently run Python, Node.js and JavaScript, R, C, C++, lots of different languages. You can run them natively; there's no cross-compilation, so you don't end up using some kind of compiler that generates bytecode that is then run. Instead, there's an interpreter for the language that you can install into the JVM. Very interesting, but we won't talk about that today. And finally, it's an open source project. We have a totally free Community Edition, and we have an Enterprise Edition, which is paid, for support basically.
There are a number of performance optimizations you get with that, but basically it's the performance plus support. The Community Edition, as I said, is free to use and is developed in the open. So why should I care? I've covered some of these points already, but really, if you're thinking about GraalVM as a Java runtime, it's improved performance. If you're thinking about native executables, and you want to be able to build these fast-starting, low-footprint native executables, then the Native Image part of GraalVM is the thing that's interesting to you, and that's what we're focusing on today. The best way to explain this, really, is if I try and show you some examples. So we're going to iteratively build up some applications, and then look at how we turn them into native images, how we deploy them, and compare and contrast the performance we get from our Java version and our native executable. So we need a basic app, a model plaything, to demonstrate these ideas. I've got a Java application that uses something called a Markov model to generate random text. I've taken the Jabberwocky poem and wrapped it up in a Java class, and I'm using a library, which we'll see in a minute when we look at the code, called RiTa, that lets you generate random text from an original piece of text. So it's a relatively simple application, but it's kind of fun and it has various moving parts, which is great. We're going to build it into a Spring application with a REST API that will allow us to call a URL and get back nonsense verse that resembles the Jabberwocky poem. And as I said, we're going to build initially a Java version of the application, and we're going to look at that now. A little note on metrics: in order to compare and contrast our native application and our Java application, we need to get some figures about performance.
And in order to do that, I'm using the Spring Boot actuator, which gathers statistics and facts about your running application and is able to serve them up on a URL. Then I'm using Prometheus, which is a time-series data engine that calls those URLs, scrapes the information from them, and stores it as time series. And at the very end of that, I'm using Grafana to pull this data together into nice visual dashboards, which we'll see in a second. So let's look at the code. I'm going to jump to my code editor. And here you are: I've got my Spring Boot project. Let's take a look at the Java application. It's very, very simple, like I said. We've got a basic Spring Boot starter class, a main class. We have a utility class, Jabberwocky.java, which contains the main logic of our application. Here's the text for the poem, the Jabberwocky I mentioned. We build this Markov model, which is basically just a model of the text, and then we store it in this utility object. This object is created as a singleton, so there should only be one of them running inside our application. And whenever we want some nonsense verse, we're going to hit some methods in this class, generate, or generate with a number of lines, that will query that model and, using a string builder, build up a number of lines of text for us and then return that to us. So that's the functional part of the application. Then we need to wrap it in a controller. We have a very simple Spring controller here. It's a REST controller. We've given it a URL to listen on, /jibber, to bind to. And we have some methods that basically serve the content for that. So if we just call /jibber, we get our model, generate some text, and return that, but we can also optionally ask for a number of lines of text: 10, 100, 200, whatever. So that's our Java application, very simple.
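To make the moving parts concrete, here is a minimal, hand-rolled sketch of an order-1 word-level Markov text generator, a simplified stand-in for what the RiTa library does in the demo. All class and method names here are illustrative, not the demo's actual code.

```java
import java.util.*;

// A hand-rolled, order-1 word-level Markov chain: for each word in the
// source text, remember which words can follow it, then generate new
// text by walking those transitions at random.
public class MarkovSketch {
    private final Map<String, List<String>> transitions = new HashMap<>();
    private final List<String> starts = new ArrayList<>();
    private final Random random;

    public MarkovSketch(String sourceText, long seed) {
        this.random = new Random(seed);
        String[] words = sourceText.trim().split("\\s+");
        starts.add(words[0]);
        for (int i = 0; i < words.length - 1; i++) {
            transitions.computeIfAbsent(words[i], k -> new ArrayList<>())
                       .add(words[i + 1]);
        }
    }

    // Generate one "line" of up to maxWords words by walking the chain.
    public String generateLine(int maxWords) {
        StringBuilder sb = new StringBuilder();
        String word = starts.get(random.nextInt(starts.size()));
        for (int i = 0; i < maxWords; i++) {
            sb.append(word);
            List<String> next = transitions.get(word);
            if (next == null || next.isEmpty()) break;  // dead end in the chain
            if (i < maxWords - 1) sb.append(' ');
            word = next.get(random.nextInt(next.size()));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String poem = "Twas brillig and the slithy toves did gyre and gimble in the wabe";
        MarkovSketch model = new MarkovSketch(poem, 42);
        System.out.println(model.generateLine(8));
    }
}
```

Because transitions are chosen at random, repeated calls produce different lines from the same source text, which is exactly the behavior the /jibber endpoint exposes.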
We can build it from the command line. So I'm in my terminal on my server. If I do mvn clean package, it's going to build the application. And you'll notice in a second that it also builds a Docker container with the application in it. I'm using the Spotify Docker Maven plugin to automatically generate a Docker image for me from a Dockerfile. So anyway, we've built our application and we've built our container. And if we look in our target directory, surprise, we've got a jar file. So let's run this; we're going to put it into the background, and then we'll call the URL and benchmark from that. So the application is starting up, and you can see the standard Spring output. And yeah, it started up in about two seconds. Now that I've got my application running, I can use curl to hit the endpoint. And with any luck, if I can type, I should get some nonsense verse back. Brilliant. So we saw some logging there from Spring, but here we have some text that was generated by our application. And if I call it again, I get slightly different text back: this time it starts with "and as if", whereas before it started with "and has now". So every time I call my URL endpoint, I get a new piece of nonsense verse that's modeled on the Jabberwocky poem. So my application's running, we've got an idea of what it does, and we can see that it generates the text that we want it to. So I'm going to bring it back into the foreground and kill it. I also said that I'm using Maven and the Spotify plugin to generate a container for me automatically. So if I call docker images, yeah, I can see that I created an image, and I can run that container and it will do exactly the same thing. Let's run the container just to show that the app is now successfully packaged into my Docker image. I've got to map some ports to make it available locally.
I'm just going to copy the name of the repository. It's got a very long repository name; the reason for that will become clear in a second. And then I've tagged it to describe which version of the app it is, because as we go through this, we're going to build different versions of the application. Ah, am I already running containers? Okay, no, I know what I'll do: we'll call it jibber-jdk-1. I'll sort that out in a second. So my application started, and now if I hit it with curl, I get nonsense text back. Ah, typos. Okay, so we've built our application. We've tested it from the command line. We can run our jar using Java, and we see it starts in about two seconds and returns our text. We've seen that it's packaged up in a Docker container. Perfect. And we can also quickly look at the size of the jar, et cetera. This is a little script I've written that generates a bar chart showing how big the jar is and how big the JDK container is. So my jar is 21 MB in size, but my Docker container, containing my Java runtime, my jar and everything else that Java needs, is about 600 MB in size. So that's not very slimmed down yet. We could make that smaller using a slimmer base container and using jlink to build a slimmed-down JVM. But to my eye, that's not a ridiculous size of container for a Java application packaged as a Docker image. So about 600 MB for the container, about 20 MB for the application code. So we're going to switch back to my slides. We've seen the application, we've seen that it runs, and now we're going to get to the topic of native executables and containers. We talked about some of the benefits of Native Image and these native executables previously, but just to recap: why do we want to do this? We want our applications to start faster in containers, and we want them to have a smaller footprint.
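To recap the terminal steps so far, the flow looks roughly like this. The jar name, image name, and port are assumptions based on the demo, not verbatim from the screen:

```shell
mvn clean package                          # builds the jar and, via the Spotify
                                           # Docker Maven plugin, a Docker image
java -jar target/jibber-0.0.1-SNAPSHOT.jar &   # starts in roughly two seconds
curl http://localhost:8080/jibber          # returns a verse of nonsense text
docker images | grep jibber                # shows the ~600 MB JDK-based image
docker run --rm -d -p 8080:8080 <registry>/jibber:jdk   # same app, containerized
```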
By smaller footprint, we can mean a number of different things. One would be the container size: the less in the container, the less you have to store, right? And the less that's in the container, arguably the more secure the container is. Faster starting means you may be able to scale to zero and then bring instances up to handle incoming requests; it means it's easier to scale the application dynamically. Another aspect of the footprint is the memory consumption of the application. Obviously, in the cloud you pay for everything. So the more memory you use to run an instance of your application, the more expensive it's going to be. If you can use less memory for the same, or very similar, throughput and performance, then that might be interesting. So I've got a little graphic I built earlier today to help people visualize what Native Image does. On the left, we can see we've got some Java files. What typically happens is you compile those with javac, you get some class files, and then you run those class files. You might package them up in a Docker container, et cetera, but at some point you're going to say "java -cp this" or "java -jar this" and actually call the main entry point for that application. That's what we would call the JIT way of running your Java code: you take a Java runtime and your class files, and you run the class files on the Java runtime. What we're looking at here is ahead-of-time compilation, which adds another build phase. We take the class files, we run a tool called native-image against them, and that generates our single native executable, our single binary. So that's AOT, or ahead-of-time, compilation.
You can do this by hand; you can use the native-image tool directly from the command line. But you don't need to: support for it has been available inside Maven and Gradle for quite some time now. The GraalVM team has a Native Build Tools project that provides support for Maven and Gradle. There's a plugin for Maven, for instance, which we will look at today. These help you automate the process of building your native image, so you don't need to think about it: you add some configuration to your Maven POM file, and from that you can automatically generate a native executable of your Java application. So, two things to think about. You've got the JIT approach to running your application, with Java in a container along with your class files. And then we have this extra build step for ahead-of-time compilation, where we build a native binary, and that is the thing that we deploy, run, and package. I'm using a Spring app, so it's probably important to talk about the Spring Native project now. Spring Native is an experimental project that's working to make turning Spring apps into native executables very easy, so that you don't have to do anything. If you've got a Spring app, you can add this dependency and add the AOT plugin to your POM file, and these will automatically solve any issues related to making the Spring parts of the application work with GraalVM Native Image. For every Spring component you use, the AOT plugin is going to generate the required config and a few other things to make it work seamlessly with GraalVM Native Image. This is going to be part of Spring Boot 3 very soon. I haven't updated my application to work with that yet, but when Spring Boot 3 is released, you won't need to add these at all, I don't believe.
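Concretely, the additions described here are the spring-native dependency and the AOT plugin. In a Spring Native 0.11.x project they look roughly like this; the version numbers are assumptions, so check the Spring Native documentation for your Spring Boot version:

```xml
<dependency>
  <groupId>org.springframework.experimental</groupId>
  <artifactId>spring-native</artifactId>
  <version>0.11.5</version>
</dependency>

<plugin>
  <groupId>org.springframework.experimental</groupId>
  <artifactId>spring-aot-maven-plugin</artifactId>
  <version>0.11.5</version>
  <executions>
    <execution>
      <id>generate</id>
      <goals>
        <goal>generate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```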
I think you'll be able to just have a Spring Boot 3 application and ask for it to be packaged as a native executable, certainly in Maven using profiles. So we've looked at what Native Image is. We're going to jump back to the code and have a look at how we take that same application that we just wrote and turn it into a native executable. I'm not going to need to change the application in any way. My Java application is going to stay the same; I'm just going to change how we build and deploy it. So if I switch back to my code editor, we'll look at the pom.xml. I'm just going to resize this window. I've added a profile to the pom.xml; Maven profiles allow you to have specific configurations for a build. In this case, when I type mvn package, I want it to build my jar file, and I want to package that as a Docker container with Java in it. But if I invoke the native profile, I want it to package my application as a binary executable for Linux, and I want to package that binary executable into a Docker container. If we take a quick look at this Maven profile, we've added a new plugin here. This is the native-maven-plugin that I talked about; it simplifies using native-image from inside Maven. There's a Gradle plugin as well. It's very easy to fit in, and you can pass in some configuration. For instance, I'm using a property in my Maven file to say what the output executable name should be. And we can also, and this is quite important, pass in extra flags to native-image with this buildArgs parameter. For instance, I'm asking it to give full exception stack traces if anything goes wrong, always a handy thing when you're trying to debug something. And I'm also passing in a flag to tell it to create a mostly statically linked executable. Because we're generating a binary executable, we have the ability to statically link it. What does that mean?
It means that any system libraries, any libraries that the application needs beyond your own code, can be statically linked into the executable; they become part of the executable. So theoretically, if you build a fully statically linked executable, you can deploy it in a from-scratch container, an empty container: the single executable will contain everything you need for it to run. You can do that today using the musl libc toolchain, but I haven't got that set up on this machine. A nice halfway house is that you can statically link all of the other libraries apart from glibc. In most Linux distros, glibc is the standard C library, and it isn't well suited to being statically linked; certain parts of it rely on being dynamically linked into the application, and there are some bugs open on glibc relating to this. So if you build a mostly statically linked executable, you get a lot of the benefits, and most of the libraries are baked in. That also means you can use a very small container image. In my Dockerfile, I'm going to base it on distroless; I'm going to use a distroless container to package my native executable in, basically because distroless contains glibc, some configuration, et cetera, but not a lot else. It's a very minimalist container. The less in the container, the smaller it is and, arguably, the more secure: the fewer things you have in there that can be hacked. So that's the first thing: we're using the native-maven-plugin to make building the binary executable easy. And again, I'm using the Spotify plugin to package that native executable up as a Docker container. So I've written a script to do this, but really all it's doing is calling Maven, passing in a profile and a few properties. I want to tag my image differently; I want this one tagged "native" so I know it's my native container. That's useful.
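Pulling the last few points together, the native profile described above might look roughly like this in the pom.xml. This is a sketch: the property name is hypothetical, and the two build flags shown are the ones mentioned in the talk (full exception stack traces and a mostly static executable), though flag spellings can differ between GraalVM versions:

```xml
<profile>
  <id>native</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.graalvm.buildtools</groupId>
        <artifactId>native-maven-plugin</artifactId>
        <configuration>
          <!-- Output executable name, taken from a Maven property -->
          <imageName>${binary-name}</imageName>
          <buildArgs>
            <buildArg>-H:+ReportExceptionStackTraces</buildArg>
            <!-- Mostly static: link everything except glibc -->
            <buildArg>-H:+StaticExecutableWithDynamicLibC</buildArg>
          </buildArgs>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```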
So when I push to Kubernetes, I can start up an instance of the app that contains the native executable, and another instance of the app that's based on the Java container. I can have them running concurrently in my cluster, hit them with requests, and then query them to see how they're performing. And I'm also able to specify a Dockerfile, so it's worth having a quick look whilst this is building. You can see the output from the native image build shown below; we're almost down to stage five of seven. My native Dockerfile is very simple. I'm basing it on the distroless base image. I've got an argument that lets me pass in the name of the executable I want to load, I expose a port, and then I just copy, and rename, the executable into the root. And then my entry point is /app. So I could use this Dockerfile for running almost any binary application that I want to pass in. So we're just creating the image now; native-image is creating the image below. It's finished. It's been packaged up next into its Docker container. And when that's run, we'll just take a look to see if the Docker container has been built. Great. Okay, so that completed without any error messages. I do docker images: yeah, so there's my native container, jibber-native. So now I've got a native version of the application, packaged as a Docker container, and a Java version. I said the first thing it does is build the native executable, so if we look in our target directory, we can see here the jibber application. Oh, sorry, such bad typing. There we go. So it is just a binary application; it has no dependency on the JVM at all. So let's run it, just to see how it performs and to show that it can be run. I'm going to run that binary executable and put it into the background. It starts really fast: 0.04 seconds. Okay, so it's finished, starting up in 0.04 seconds. And I can just hit the endpoint again. Oh, damn.
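Based on the description, the native Dockerfile is roughly the following. This is a reconstruction; the distroless image tag and the argument name are assumptions:

```dockerfile
# Minimal distroless base: glibc and little else
FROM gcr.io/distroless/base
# Name of the native executable to package, passed in at build time
ARG APP_FILE
EXPOSE 8080
# Copy and rename the executable into the root of the image
COPY target/${APP_FILE} /app
ENTRYPOINT ["/app"]
```

Because the entry point is just /app, this same Dockerfile works for almost any statically-or-mostly-statically linked binary you pass in.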
My typing is really poor. Yeah, so we've got our application running as a native executable and returning gibberish text, which is what we want. It also wasn't very difficult. We didn't need to change the application; we basically added some configuration to our POM file. Through adding the Spring Native plugins and the GraalVM plugins to our POM file, we were very easily able to take the Spring Boot application and turn it into a native executable. Last time, when we built the Java application, we took a look at the size of the jar versus the size of the container, and we can do the same now with our native executable. This script returns the size of the jar, 21 MB; the native executable size, which is 82 MB; the size of our container with the native executable in it, that's the third row down, which is 106 MB; and the size of our JDK container. So we can see our native container is already significantly smaller. That's one way of looking at the footprint of your application: how big is its container? Okay, so we're going to switch back to the slides. I hope you're all following along and no one's getting seasick with this switching from the editor to the slides. I thought it was probably worth having a bit of a recap. Again: we've built a Java application and containerized it; that was easy. We've built a native app and containerized that; that actually wasn't much more difficult. We've seen that the native application starts really quickly, and we've seen that the containers are smaller, with very little in them. So now we're going to think about how we might deploy this. It's interesting to deploy it to Kubernetes so that we can look at these metrics that I talked about at the start, and so we can have a dashboard that compares and contrasts the applications' performance. Then we can see whether using a native executable instead of the Java runtime to run our application is a sensible choice.
If we look at this diagram, over on the left we've got two streams, basically. At the top, we've got our native executable, and the diagram shows that it's turned into a container. Underneath that, we've got our class files, and we turn those into a container too. Then we push both to a container registry. I'm using a container registry in OCI, the Oracle Cloud. I've also pre-set-up an Oracle Container Engine for Kubernetes cluster, an OKE cluster; that's just a managed Kubernetes cluster. That Kubernetes cluster is going to pull my containers from my container registry. I've pre-deployed Prometheus and Grafana to this Kubernetes cluster and wired them all up correctly, so that we're able to go to the dashboards, and those dashboards know how to extract reporting information from the applications deployed to the Kubernetes cluster. And then finally, on the right-hand side, we've got services fronting the native version and the Java version of the REST application. So we can call those and get our nonsense text returned from the native version of the containerized application, and the same for the Java version. Into that cluster, I'm going to deploy some stress-testing cron jobs that are going to hit both of those endpoints continuously, so that the applications will be running under load. When they're running under load, that's going to help us look at the metrics we get on our Grafana dashboard at the end, and that will help us understand what the trade-offs are: whether it makes sense to run the application as a native executable or on the Java runtime. A quick note on stress testing the apps: it's important to try and keep things fair. So for the Java application, in my Kubernetes description for deploying the application, I'm constraining the memory: I'm giving it 256 MB, but I'm allowing it to creep up to 512.
The native application, I'm fixing its memory at 128 MB and not allowing it to increase. And I'm giving both the Java application and the native application two cores to work with. So I'm trying to keep things fair, while giving the native application a handicap: I'm giving it much less memory, because the native image can run with a smaller memory footprint. And finally, I'm using a tool called hey to do the stress testing of both of the applications. So we're going to jump back to my code editor and terminal, and we're going to look at the scripts that push the containers and deploy the application to my Kubernetes cluster. I'm going to kick off the script that pushes the containers. I've done this earlier, so it's probably going to say that they're already there. If I look at the push script, it's not really doing a great deal: a docker push to my OCI repository. And then let's look at the deploy script. The way I've written this application, I wanted people to be able to clone it and use their own repository. So the solution I've used is to put a template in for the container registry repo name. That allows you to set an environment variable for your own container registry, because your container registry isn't going to be the same as mine, whether you're deploying to Oracle Cloud, AWS, whatever. And then I've used the envsubst command to replace those templated repository names with the real name, which exists as an environment variable, and then pipe that into kubectl. So let me just check before I run this that my repo path variable exists. Yeah, okay, cool. So I can deploy my application to the cloud. This is going to deploy the Java application. First it creates a namespace to deploy everything to, which already exists. It then deploys the Java version along with the service to front it, then the native version, and the same for that.
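The resource constraints described above might look something like this inside the two Deployment manifests. This is a sketch: the memory and CPU values are from the talk's description, everything else is assumed:

```yaml
# Java (OpenJDK) deployment:
resources:
  requests:
    memory: "256Mi"
    cpu: "2"
  limits:
    memory: "512Mi"    # allowed to creep up to 512
    cpu: "2"

# Native-image deployment (the deliberate handicap):
#   resources:
#     requests:
#       memory: "128Mi"
#       cpu: "2"
#     limits:
#       memory: "128Mi"   # fixed: not allowed to increase
#       cpu: "2"
```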
In my Kubernetes description, I can quickly show you, there's nothing very exciting in here. I've basically created a specification for the deployment. I've given it a name and my container, so this is going to use my OpenJDK version of the app container. I've constrained the memory like we talked about, and I'm opening up a port. Then I'm also opening up the 8080 endpoint to the outside world, and I'm using the load balancer annotations from OCI to automatically spin up a load balancer for me, which will handle incoming traffic. And I've done exactly the same for the native image. The only difference here is how I've constrained the memory: I've put it at 128, but I'm allowing it to creep up to 256. So we've deployed the application, and we really want to see if my apps are up and running. I can use kubectl to get the services in my namespace. This namespace contains just the versions of my applications: my Java application and my native image application. You can see that there are three rows there rather than the two I've been talking about, because I've also deployed a GraalVM Enterprise version of the application. If I was going to talk about JIT performance, that would be interesting, but I'm not. So let's hit the public endpoint for the native version. I'm going to take the IP address for the load balancer that's fronting the native executable, and I can call it, and I get some text back. And I can do the same for the Java version, except I've got rid of my IP addresses. Let's do that again. This one's going to hit the load balancer for the Java version of the application, on port 8080, and we should get some nonsense verse back. Brilliant. So we know our applications are up and running. I've also set up a namespace; you know, I said that I had preconfigured Kubernetes.
In Kubernetes, I'd already preconfigured Prometheus and Grafana, so that I would have a way of grabbing information about the running applications and displaying it in a dashboard. So I've got my URL here for my Grafana instance, and if I jump to the web browser, I've already got it open. Let's take a look at this dashboard. There are three components to this dashboard, and I should explain what they do. The first one is measuring throughput: the throughput of the native executable running inside a container that we deployed to my Kubernetes cluster, and also the throughput of the Java version of the application running in a container. So I've got two containers, my native one and my Java one, and I'm hitting both of them concurrently with stress-test requests. I'm hitting the URL for the /jibber endpoint and asking for nonsense verse continuously. So both apps are under load, and we can see that they've basically got very similar performance. The yellow line on the top graph, the throughput graph, which is in requests per second, is the native image's throughput. So the native executable is, on average, slightly better than the version of the application running on OpenJDK. It looks to me like roughly 770 requests per second are being served by either application. Your mileage may vary with this; obviously, your performance will depend on what the application is doing. For some applications, the native executable will perform exactly the same; for some it might perform slightly better; for others it might perform slightly worse. But we're aiming to get the performance of the native executable that's generated with Native Image up to, and approaching, the performance that you would get on OpenJDK.
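Dashboards like this are typically driven by queries along these lines. This is illustrative PromQL: the first metric name comes from Micrometer's Prometheus registry (which backs the Spring Boot actuator endpoint), the second from cAdvisor, but the label values are assumptions for this demo:

```
# Requests per second served by each app, from the actuator/Micrometer metrics
sum(rate(http_server_requests_seconds_count{uri="/jibber"}[1m])) by (pod)

# Resident set size per container, from cAdvisor
container_memory_working_set_bytes{namespace="jibber-demo"}
```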
We've got some optimization tools that can really push the performance there. I won't have time to talk about those in depth in this talk, but I'll mention them a little at the end. So the key thing about the top graph: we're getting very similar performance, in fact slightly better for native image. The next graph down is the... If I could interject really quick: we have a question in the chat, and now might be a good time to work the answer in. Are there any gotchas to be aware of, given all of the improvements, i.e. the reduction in size versus the jar on the JDK? Yes, there definitely are, and that's what the Spring Native tooling is hiding from you. So I'll tell you what I'll do, Eli: I'll finish talking about this graph, and then I'll talk about that topic at the end, because I've only got a couple of slides to go. But I think that's a very important point, so it's a very good question; thank you for asking it. So, the next graph down is the container memory: how much memory, resident set size, is each container using? In this case, it would have been ideal if I'd kept the same colors for the same containers, but I haven't, I've just noticed. The GraalVM native executable is in green, and that's using slightly over 100 MB, about 115 MB of resident set size. The Java application is using 238. So we're using half the memory for very similar performance. The final graph is the startup times. We knew that the native executable would start much faster, because we'd seen that at the command line, but this information, again, has been pulled from the Spring actuator: 35 milliseconds for the GraalVM native executable to start up, and two to three seconds for the Java application. Those are realistic startup times for a Java Spring application, in the one-to-three-second range.
And the native executable is starting up in well under a tenth of a second. So I'm going to quickly jump back to the slides. This is just what I've shown you: different ways of looking at footprint. The container footprint got smaller, but we can also think about memory usage once that application is deployed to the cloud. In this example, and again your mileage may vary, we got the same performance for half the memory, and it wasn't that difficult to build a native executable from the application. I think that's the key takeaway: if you're interested in low footprint, Native Image is definitely an interesting technology that can provide it. If you have long-running applications, running for days, months, or years, the GraalVM Enterprise Edition JVM would probably make more sense, because the JIT compiler is there and its performance is going to be very, very good. So for very long-running applications where you don't care about the footprint so much, a traditional JIT mode might be the better fit. We've also got different garbage collectors inside Native Image. In the Enterprise Edition you can have Epsilon, Serial GC, and G1, which is our implementation of the G1 collector from Java. If you really care about throughput with Native Image, you can use G1 to get consistent response latencies, without the stop-the-world pauses you can see with our Serial GC. And there's PGO, our profile-guided optimization, which can drastically improve the throughput and performance of the application. We talked a bit about static linking, and finally, Native Image is supported by lots of different frameworks: Spring Native, which is what I was showing you here, Micronaut, which is an independent project, Helidon from Oracle, and Quarkus from Red Hat. All of these target GraalVM Native Image as a deployment target.
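As a rough sketch of what selecting G1 and applying PGO looks like on the command line (flag names as documented for GraalVM Enterprise Edition at the time; the JAR and executable names are placeholders, and your installed version may differ):

```
# Build with the G1 garbage collector (Enterprise Edition feature)
native-image --gc=G1 -jar target/jibber.jar jibber

# Profile-guided optimization is a two-step process:
# 1. Build an instrumented executable and run it under representative load;
#    it writes a profile file (default.iprof) when it exits.
native-image --pgo-instrument -jar target/jibber.jar jibber-instrumented
./jibber-instrumented

# 2. Rebuild using the collected profile to guide optimization.
native-image --pgo=default.iprof -jar target/jibber.jar jibber
```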
They want to build a native executable that starts fast and has a low footprint as a way of packaging their applications. So, gotchas. Let me go back to the question Eli relayed, because I think it's a very important one. Yes, there are gotchas. In order to get a lot of these benefits, Native Image makes certain assumptions. It operates under what's called the closed-world assumption. Java is a very dynamic language: you can load classes at runtime, you can build up the name of a class programmatically, you can load resources off the classpath, you can use reflection to find out what methods an object has or what classes are available in a package, all sorts of things. Native Image comes at this assuming a closed world. Imagine you can't do any of those dynamic things: no dynamic class loading, no reflection, and so on. You would think you wouldn't be able to build many Java applications that way, but it turns out you can, and the way we've made that possible is by telling Native Image at build time about these dynamic features of the language. Say, for instance, your Java program uses reflection to look up a certain class and find out what methods are available. If we know that's going to happen, we can write configuration files that tell Native Image about it, so that at build time it knows to add that reflectively accessed class to the closed world it's operating on. The same goes for class loading and for loading resources off the classpath. That sounds like it might be a laborious process, building all this configuration by hand, but we have a Java agent.
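To make the closed-world problem concrete, here's a small hypothetical example (the class and the config snippet are illustrative, not taken from the demo): a reflective lookup that works fine on the JVM, but which Native Image can only support if the target class is registered in a `reflect-config.json`, along the lines of `[{"name": "java.util.ArrayList", "allDeclaredConstructors": true, "allDeclaredMethods": true}]`:

```java
import java.lang.reflect.Method;

public class ReflectionDemo {

    // Looks up a class by name at runtime and calls methods on it
    // reflectively. Without configuration, Native Image's static analysis
    // cannot see this usage, so the class could be missing from the binary.
    public static int reflectiveSize() throws Exception {
        Class<?> cls = Class.forName("java.util.ArrayList"); // dynamic lookup
        Object list = cls.getDeclaredConstructor().newInstance();
        Method add = cls.getMethod("add", Object.class);
        add.invoke(list, "hello");
        Method size = cls.getMethod("size");
        return (int) size.invoke(list); // 1 after one add
    }

    public static void main(String[] args) throws Exception {
        System.out.println(reflectiveSize());
    }
}
```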
So you run your application as a normal Java application, but with the Native Image Java agent attached. The agent traces your application, looking for uses of reflection, dynamic class loading, serialization, anything like that, and it automatically saves the configuration you would otherwise have had to write by hand directly into these configuration files. Then, when you build, those configuration files supply the missing information to Native Image. Lots of libraries now package this information directly with the library itself. And that's essentially what the Spring Native tooling is doing: providing this missing configuration for you. When you build the application, there's a step where the Spring Native plugin looks at the application and its classpath, works out what's going on, and generates the extra classes and extra configuration that the Spring application will need in order to be turned into a native executable. So there's a lot of tooling in place, certainly in the frameworks I mentioned earlier, Spring Boot, Micronaut, Helidon, and Quarkus, to make this as easy as possible. If you're using a library that isn't Native Image friendly yet, you may have to resolve some of these issues yourself by using the Java agent to generate the config, and that way you can get applications working even when they depend on libraries that use a lot of reflection. Let me see if there are other questions in the chat. That was a good question about the gotchas, and if anyone has further questions, please feel free to ask. Eli, I'll go back to my slides, which are better to look at than this, and if you want to read out any questions, go ahead. If not, thank you very much for attending today. I hope what I was talking about made sense, and the slides will be available.
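The agent invocation described here looks roughly like this (the JAR name and output path are placeholders; the `native-image-agent` and its `config-output-dir` option are from the GraalVM documentation):

```
# Run the app on the JVM with the tracing agent attached; exercise its
# code paths, then stop it. The agent writes reflect-config.json,
# resource-config.json, serialization-config.json, etc. to the output
# directory, where a later native-image build picks them up automatically.
java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
     -jar target/app.jar
```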
All the code is available in a repository on GitHub as well, if you want to play with it, and if you find any errors or sharp edges, please feel free to open a PR; I'd be very grateful. Anyone else, any questions? All right. Well, Chris and Eli, thank you both so much, and thanks for a great presentation. Everyone, thanks for joining us. As Chris said, this will be posted to the website via your registration link, and also on the CNCF YouTube playlist for online programs. Oh, there's one more question, so we'll hang tight: are there any frameworks or Spring modules that are known to be unsupported? At the moment I'm not entirely sure, but if you go to the Spring Native site, I think there's documentation on that. Why don't we do that, let's have a look. So this is the documentation for the Spring Native project. It's pretty good documentation, and it's worth reading through, because it explains in detail how to use the project and how to build native images. I thought there was a list of things known to be supported, and yes, here we go: these things require no special build configuration, they will just work. Spring Data JPA, for instance, Neo4j, MongoDB, logging, JDBC. There are lots and lots of things known to be working, and I'm sure that anything that isn't working is documented somewhere as well. I typically also go to the Spring Native repo, where the samples project has a quite exhaustive list of samples known to work. So if you're interested in, say, Kafka, Kafka Streams, or RabbitMQ, you can go into the corresponding sample and there should be a working project showing you how to use that particular Spring module with Native Image, that is, how to generate a native executable from it. I hope that answers the question, even if in a rather long-winded way.
Sorry. That's great. All right, last call for anyone else? No, I think we're good, right? All right, I think so. Well, thank you both again so much, and thank you everyone for joining us. Everything will be posted shortly, and we'll see you next time for another CNCF live webinar.