Hello everyone, welcome, and thanks for coming. My name is Jan, and in the next 50-55 minutes I will talk about Quarkus. First, a little bit about me. I'm a software engineer working at Red Hat. I love Java and open source. I used to work on WildFly-based data virtualization. Right now my main focus is Quarkus itself and MicroProfile, so most of my work is integrating MicroProfile implementations into Quarkus. In particular, I focus on MicroProfile Metrics; those of you who were at the previous presentation already saw a little bit about MicroProfile Metrics, and you will see it again in this presentation. Another thing I work on is Hawkular Metrics, which is an older monitoring solution in OpenShift 3. Other than that, I'm a traveler, a father, and so on. Let's move on.

So let's talk about Quarkus. What is it? Quarkus is a platform for developing Java-based applications, mostly, but not limited to, microservices. What does it do? Why is it so awesome? How is it different from all the older frameworks that implement Java EE and so on? These are the main goals that I see as important for Quarkus, the main ways Quarkus tries to make the development of your Java microservices awesome. It brings you developer joy: it resolves some issues that people usually see with Java as a language in general. It puts all your favorite frameworks on one platform: you can use MicroProfile, you can use Jakarta EE, and you can use almost any other framework if you write an extension for it, or maybe even without an extension. It's blazing fast: it achieves fast startup through a thing called build time initialization, which I'll talk about in a bit. It integrates with GraalVM, which allows you to be even faster by compiling your application into a native binary. And it's cloud native, suitable for all cloud environments, including serverless.
It unifies imperative and reactive programming, so you can use both in any project and combine them however you like. And apart from Java, it supports more JVM languages; for now that's Kotlin and Scala.

Let's now dive into each of these goals a little further. The first goal, and probably the most important one for me, is developer joy. One of the main issues people have with Java, and especially enterprise Java, is that development is slow: you have to recompile everything, repackage, redeploy, and that just takes a lot of time. If you ever worked with WildFly, you must remember that to redeploy your application you have to run mvn package, which takes maybe ten seconds, and then redeploy the application, which is another five seconds or so. With Quarkus, this is solved. We have a thing called live reload, or development mode, which reloads the application in the blink of an eye without you doing any compilation yourself; it compiles the application in the background. It kind of feels like you're coding PHP, except it's Java. It feels like a scripting language, because any changes apply themselves automatically and super fast. You will see this in the demo.

So what frameworks can you use when you decide on the Quarkus platform? We already support a lot of the frameworks that Java developers normally know, like MicroProfile and Jakarta EE. We support a reasonable subset of Jakarta EE, not everything; there are no EJBs and such, but there is, of course, CDI, and there is JPA. MicroProfile is mostly about observability, fault tolerance, and all that. You can use RESTEasy, and you can use basically any framework: if there's an extension for it, great.
If there is no extension, perhaps you can use it even without one, but you can also write your own extension for your framework, because Quarkus is designed to be very extensible.

Another goal Quarkus achieves is unifying imperative and reactive programming. Here you have two examples. One is a classic imperative-style REST service; I think everybody understands what it does: it's a REST endpoint that returns a message, written in an imperative way. If you want to do reactive programming, you can turn that endpoint into one that produces server-sent events: from your method you just return a publisher of strings, connect that publisher, using MicroProfile Reactive Messaging or something else, to a source of messages, and each message that arrives will be sent to the client. You can use both of these approaches and combine them however you like. If you're interested in more details about reactive programming, tomorrow at 2:30 there will be a presentation by Martin Stefanko specifically about reactive programming with MicroProfile.

Now let's have a look at the so-called build time initialization. It's one of the coolest features of Quarkus, because it allows your application to start really, really fast, and we achieve that by moving part of the initialization to build time. How does that work? What do traditional Java frameworks normally do at startup? When you start booting your application, they have to parse your configuration files, validate injection points, scan the classpath for annotations, build metamodels, and so on, and that's a lot of work that usually takes a few seconds. With Quarkus, we have a way to move most of this work from run time to build time.
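Going back to the two endpoint styles mentioned a moment ago, the difference in shape can be sketched in plain JDK Java. This is a hedged illustration only: it uses `java.util.concurrent.Flow` in place of the Reactive Streams `Publisher` and the JAX-RS/SSE annotations that a real Quarkus endpoint would carry, and names like `messageStream` are mine, not from the talk.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

public class EndpointStylesSketch {

    // Imperative style: one request, one blocking return value.
    // In Quarkus this would be a plain JAX-RS method returning String.
    public static String hello() {
        return "hello";
    }

    // Reactive style: the "endpoint" hands back a publisher of strings;
    // each published message would become one server-sent event.
    public static Flow.Publisher<String> messageStream(SubmissionPublisher<String> source) {
        return source;
    }

    public static void main(String[] args) {
        System.out.println(hello());

        SubmissionPublisher<String> source = new SubmissionPublisher<>();
        Flow.Publisher<String> stream = messageStream(source);

        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(2);
        stream.subscribe(new Flow.Subscriber<String>() {
            public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
            public void onNext(String item) { received.add(item); done.countDown(); }
            public void onError(Throwable t) { }
            public void onComplete() { }
        });

        // Simulate two messages arriving from some source (Kafka, a timer, ...).
        source.submit("tick");
        source.submit("tock");
        try {
            done.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        source.close();
        System.out.println(received);
    }
}
```

The point is only the signatures: the imperative method returns a value, the reactive one returns a stream that keeps emitting after the method has returned.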
So this work is performed at build time, and you will see more about how it works in a bit. The build takes a little more time than usual, but after that the application starts very, very fast, as you will see in the demo in the second half of the presentation. Another advantage is that you can catch even more errors at compile time than usual. Java in general is a language where a lot of programmer errors are caught at compile time; with Quarkus you catch even more, because you don't only catch Java language errors, you also catch errors from the perspective of the frameworks you use. At build time Quarkus doesn't just compile your source code, it also partly runs the application, so if the application fails to start, you see it at build time and you don't have to wait to hit that error at run time. If you have, for example, a CDI injection point which is not satisfied, the application will fail to build, and you won't have to wait until run time to get the error. So most of these things, parsing the config files, scanning the annotations, everything I listed here, is done at build time. The result of this scanning and validation is serialized into your application as so-called recorded bytecode, and when your application starts, just this recorded bytecode, which is highly optimized, is executed. This has another advantage: the bootstrap classes that parse the config files and so on can be discarded after the build, so they aren't used at all during run time, which makes your application even lighter in terms of both CPU and memory usage. What's the effect on startup time? Well, here are some numbers.
How long does it take to start a regular microservice that contains a REST endpoint and a bit more? With a traditional cloud-native stack, like Spring Boot for example, it could take around four seconds. With Quarkus in JVM mode, you get it down to less than one second; I think it can be even less than 0.9. And if you want to be even faster and use the native mode with GraalVM, your application starts absolutely blazing fast, in a matter of milliseconds or tens of milliseconds. I think that's pretty amazing.

So let's have a look at what bytecode recording is, because build time initialization is based on it. At build time, all the relevant configuration files are read, the application is scanned for annotations, and the whole metamodel is constructed and validated. Then the code necessary to actually start your application is recorded; this is called bytecode recording. Here's an example. I told you I would speak about metrics, so suppose you have a class that uses MicroProfile Metrics: a class with two annotated methods, where each method gets a counter attached, meaning the counter is incremented whenever the method is called, so you can monitor how often your methods are called. At build time, the Quarkus platform scans your class for these metric annotations and turns a class that looks like this into recorded bytecode. It finds the two annotations and produces bytecode: a regular, compiled Java class, and if you decompile it, this is the generated code.
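Since the slide itself isn't reproduced here, the following is a plain-Java imitation of the idea. The `@Counted`-style annotation and the map standing in for the metric registry are toy stand-ins of my own, not the real MicroProfile Metrics API; only the scan-once-at-build-time, register-plainly-at-run-time structure is faithful to what the talk describes.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class RecordedBytecodeSketch {

    // Toy stand-in for MicroProfile's @Counted annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Counted {}

    // The application class from the slide: two methods, each counted.
    public static class GreetingService {
        @Counted void methodOne() {}
        @Counted void methodTwo() {}
    }

    // Toy metric registry: counter name -> current value.
    public static final Map<String, AtomicLong> REGISTRY = new HashMap<>();

    // What Quarkus does at BUILD time: scan for annotations once...
    public static void scanAtBuildTime(Class<?> clazz) {
        for (Method m : clazz.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Counted.class)) {
                // ...and record only the registration calls that matter.
                deployCounter(clazz.getSimpleName() + "." + m.getName());
            }
        }
    }

    // What the recorded "deploy" bytecode does at RUN time: plain
    // registration calls, with no scanning and no reflection left.
    public static void deployCounter(String name) {
        REGISTRY.put(name, new AtomicLong());
    }

    public static void main(String[] args) {
        scanAtBuildTime(GreetingService.class);
        System.out.println(REGISTRY.keySet());
    }
}
```

In real Quarkus the body of `deployCounter` would itself be generated bytecode, so at startup only the registration calls run.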
If you decompile the generated class, you might see something like a deploy method, which does all the things necessary for your application to start with your metrics registered: it creates a metric registry, creates the necessary metadata objects for the metrics, and registers the metrics in the registry. At startup, just this code is executed. It's fast because you don't have to re-scan the annotations; they were scanned at build time, and at run time you just run the recorded bytecode.

There's a difference between JVM mode and native mode here. In JVM mode, this recorded bytecode is executed as soon as you start your jar; the contents of this class are simply called and executed as normal. If you package your application into a native binary, it gets even better, because the result of executing this code is serialized into the native binary itself during the build. Executing this code at build time produces a tree of Java objects, and with GraalVM, when producing a native binary, you can serialize these ready-constructed, initialized objects into the binary itself. When you start the binary, it just reads the state of these initialized objects straight into memory; it doesn't have to re-initialize them. That's why it's really, really fast.

So I basically already mentioned this native compilation, sometimes called ahead-of-time compilation. We use GraalVM for that. It produces native, platform-dependent code; we support mostly Linux and Windows.
It also applies some very aggressive optimizations and dead code elimination, which brings some caveats for usage; I will talk about them soon. The native compilation takes a few minutes. It's a bit slower than a normal compilation, because it really applies very aggressive optimizations, but once you finish it and produce the binary, it starts very fast, as I said. That makes it very suitable for serverless environments in particular, where the application needs to start very, very fast.

Let's also have a look at memory footprint. This build time initialization and the use of GraalVM don't only speed up your startup; they also reduce the memory footprint. So, another set of numbers: how much memory is used when you run a regular application with a REST endpoint and some CRUD operations for an entity? With a traditional stack, again something like Spring Boot, Thorntail, or WildFly, it's somewhere over 200 megabytes. With Quarkus in JVM mode, you get to about 130; you can cut it roughly in half. And with native compilation it's much better still, in this case around 35 megabytes.

Okay, now let's talk a bit more about how the application boot works. For bytecode recording, there are two types of recorders: static init and runtime init. The difference is how they behave in native mode versus JVM mode. In native mode, the results of static init recorders are, as I already described, serialized straight into the native binary: the recorded code produces objects, and that object tree is serialized into the binary, so when you run your application you just read these objects into memory.
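As a rough mental model of this split, here is a plain-Java sketch of which kind of work can be baked into the binary and which cannot. This is not the actual Quarkus recorder API (extensions drive that with `@Record(ExecutionTime.STATIC_INIT)` and `@Record(ExecutionTime.RUNTIME_INIT)`); the method names and the example metamodel below are mine.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.util.List;
import java.util.Map;

public class InitPhasesSketch {

    // STATIC-INIT style work: pure in-memory data structures. An object
    // tree like this can be built once at build time and, in native mode,
    // serialized straight into the binary by GraalVM.
    public static Map<String, List<String>> buildMetamodel() {
        return Map.of(
                "Person", List.of("id", "name"),
                "Order", List.of("id", "total"));
    }

    // RUNTIME-INIT style work: anything touching the outside world.
    // A socket, a running thread, or an open file cannot be serialized
    // into a binary, so this must run every time the application starts.
    public static ServerSocket bindAtRuntime(int port) throws IOException {
        return new ServerSocket(port);
    }

    public static void main(String[] args) throws IOException {
        Map<String, List<String>> metamodel = buildMetamodel(); // "baked in"
        try (ServerSocket socket = bindAtRuntime(0)) {          // always live
            System.out.println(metamodel.size() + " entities, port " + socket.getLocalPort());
        }
    }
}
```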
But there's also the runtime init phase, for recorded bytecode that cannot be serialized into the native binary for some reason. Of course, you cannot serialize things like opening network sockets, I/O operations that are necessary while starting your application, or running threads. All of this is done in the runtime init phase, which in native mode happens while your application is starting. So when the application starts, the results of static init are read straight from the binary into memory, and then the runtime init operations are executed normally: starting application threads, binding network interfaces, and so on. In JVM mode, both static init and runtime init bytecode executes at boot time, because there's no way to serialize live objects into a JAR file; Java doesn't support that. If you'd like to learn more about this, be sure to visit the workshop by Matej Novotny and Martin Kouba on Sunday at half past eleven. It will be about writing your first supersonic extension for Quarkus, so they will definitely talk about bytecode recording, which is one of the main things Quarkus extensions do.

Okay, now a little bit about the limitations, because this all sounds awesome; of course, if you package your application into a native binary that starts really freaking fast, there have to be some downsides. Well, yes, there are. Some things just aren't supported in native mode, because they are not supported by GraalVM by design. One of the most important of these is dynamic class loading.
Because your bytecode is compiled into native code, it's impossible to dynamically load more classes at run time; their bytecode is simply not present in the binary, so you cannot execute dynamically loaded code. You also can't use the security manager, but that's probably not much of an issue, because the Java security manager is usually used for sandboxing untrusted code that you dynamically load into your runtime, and since you can't dynamically load anything, you probably don't need it. Another thing that's not supported is JMX. You also can't use finalizers, and plain Java serialization is not supported either.

Then there are things which are supported, but with caveats, like reflective operations; I have an example here. If you access a field named bar of a class named Foo, and you access it with a constant field name, it will always work, because during compilation GraalVM can see that you are accessing this exact field, so it can include the reflective metadata of the class in the binary. But if you do something where the field name you access is not a constant, but a variable that can change over time, then GraalVM doesn't know which fields you will be using reflectively, so this might not work. You can solve this by explicitly telling GraalVM that all the reflective metadata of the class has to be included in your binary, so that the Foo class fully supports reflective operations. This can be controlled by GraalVM arguments, and if you are writing an extension for Quarkus, the extension API has ways to control it too: your extension can declare that certain classes will be used reflectively.
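The two cases from the slide might look like this in code, with `Foo` and `bar` as in the example; the surrounding class and main method are my scaffolding.

```java
import java.lang.reflect.Field;

public class ReflectionSketch {

    public static class Foo {
        public String bar = "baz";
    }

    public static void main(String[] args) throws Exception {
        Foo foo = new Foo();

        // Constant field name: during native compilation, GraalVM's analysis
        // can see that exactly this field is accessed reflectively, so the
        // metadata for Foo.bar is included in the binary. This always works.
        Field constant = Foo.class.getDeclaredField("bar");
        System.out.println(constant.get(foo));

        // Field name held in a variable: the analysis cannot know which field
        // will be requested at run time, so in a native image this lookup may
        // fail unless Foo is explicitly registered for reflection (via GraalVM
        // arguments, or declared by a Quarkus extension).
        String fieldName = args.length > 0 ? args[0] : "bar";
        Field dynamic = Foo.class.getDeclaredField(fieldName);
        System.out.println(dynamic.get(foo));
    }
}
```

On a regular JVM both lookups behave identically; the difference only appears once the closed-world analysis of native compilation comes into play.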
Similar to reflective operations are dynamic proxies. For proxies to work, GraalVM needs to know in advance, at build time, the complete list of interfaces that your proxies will implement. If GraalVM is unable to reason about this, you will get an error at run time, but this case can also be controlled using GraalVM arguments. One thing that surprised me a few weeks ago when I was writing an extension is that all static initializers are executed eagerly. It's obvious in retrospect: if you initialize your classes and serialize the result into a binary, of course all static initializers run; that has to be done. But if this causes problems for some reason in your application, you can control it and say that specific classes should be initialized at run time, not at build time. One more thing which might be a little problematic is debugging. Since your application is no longer running as a regular JVM process, you can't use normal Java debuggers like jdb. And if you want to include debugging symbols in your binary, you need GraalVM Enterprise Edition.

And now, enough slides; let's get to the demo. Okay. I will now show you how to build a full-fledged application. What will it contain? It will have a REST endpoint, Hibernate entities, and MicroProfile metrics. And if there are entities, that of course means accessing a database, so we will also use one. Let's get to it. First, I will start the database so we have something to run against; I will run a PostgreSQL database in a Docker container. So now we have a running PostgreSQL database, and let's start a brand new project. How do you start a new Quarkus project? Well, there are several options.
One option is the website code.quarkus.io, maintained by our team, where you choose the Maven artifact coordinates for your application, choose whether it should be a Maven or Gradle project (we support both), and then pick the list of Quarkus extensions you are going to use. You can see all the necessary stuff there: Hibernate, JDBC drivers, MongoDB, reactive stuff, Apache Kafka, JMS, Vert.x; there's a lot. After you pick your coordinates and extensions, you click Generate, and it produces a zip file which, when unzipped, contains a pom.xml or build.gradle with all the scaffolding you need to start your application. But I will not do that now; I will use a different approach, our Maven plugin, which also allows you to create Quarkus projects. So I run the plugin's create goal. It asks me about things like the Maven coordinates, of course; let's leave everything at the default. It asks whether we want to create a REST resource; in this case I say yes, and everything else I leave as default. With this call, it generated the scaffolding necessary to get a project up and running, and now I can open it in my IDE. It also includes a simple README that shows how to run your application in dev mode, in regular JVM mode, and in native mode. You can see there's a pom.xml declaring the basic things you need to develop with Quarkus. And now the application is basically ready to be started, so I can show you the live reloading feature of the development mode. If you run the quarkus:dev Maven goal, it starts your application in development mode, which means that any change you make in your sources immediately triggers a reload of the application.
It generated this REST resource for me: a resource on the path /hello that just returns "hello". Let's give it a try with curl localhost:8080/hello. Yes, it says hello. Now let's change something; let's add some exclamation marks. I try it again, the changes are applied, and now I get hello with exclamation marks. What would happen if I introduced some sort of syntax error? The application keeps running, but when you send a request with curl, you get an error saying: hey, you have a syntax error there. So let's fix that.

Now for something more interesting. I will use the Maven plugin to add some extensions that I need to build my application. I will add Hibernate ORM with Panache (you will see what Panache is in a bit); I will be accessing a PostgreSQL database, so I also include the JDBC PostgreSQL extension; and RESTEasy JSON-B for serializing objects to JSON, which we will need as well. So I run that. Oh no, I'm in the wrong directory, sorry. What this did, basically, is just add some dependencies to my pom.xml. You can of course do it the regular way and add the dependencies to the POM manually; using the Maven plugin is just one way to do it, and it's the one I chose in this case. By the way, it picked up the change in the POM and reloaded immediately; you don't even have to change the source code of your application, it also reloads when it detects changes in the pom.xml. So it reloaded my application and started again, and there's a line in the startup log which tells you what features you have installed; now it includes all the things I just added using the Maven plugin.
This is not on GraalVM, by the way; this is OpenJDK. I will show the GraalVM example later. Okay, now let's access the database. To do that, you need some configuration, of course. Quarkus takes the approach that there's just one property file containing all the configuration keys and values your application needs: application.properties, located in src/main/resources. A blank application.properties was generated for me, and I added the configuration I need for connecting to the database: the PostgreSQL driver, the JDBC URL, username, and password. And I tell Hibernate that each time the application is reloaded, the schema in the database should be dropped and recreated, which is quite useful for development purposes; of course, this would be nonsense in production, as you might imagine.

Now, let's create an entity. I will create a class named Person, and it will be an entity, a regular javax.persistence JPA entity. But we will spice it up a little with Panache. Panache is a special extension to Hibernate that we support in Quarkus, and it allows some interesting magic that you will see shortly; I use it by extending the abstract class PanacheEntity. My entity just needs a name in this case. You might notice I'm doing something a little unusual: the field is public. You can of course use the more usual approach, where the field is private with a getter and a setter; this is just to show that, if you want, you can also do it this way. It makes the code shorter, and Quarkus understands it. So I have an entity for persons. Now let's add a REST method that adds new persons to the database; I just copied this in.
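Reconstructing roughly what the demo code showed (this is a sketch, not a verbatim copy: the resource path and the `@Transactional` annotation are my additions, the latter because Panache's `persist()` needs an active transaction, and the imports use the `javax` namespace of that Quarkus era):

```java
// Person.java — the entity. Extending PanacheEntity supplies the generated
// id field and the static active-record methods (persist, find, listAll, ...).
import javax.persistence.Entity;

import io.quarkus.hibernate.orm.panache.PanacheEntity;

@Entity
public class Person extends PanacheEntity {
    public String name;
}

// PersonResource.java — the REST method that stores a new person.
// Note that no EntityManager is injected; Panache handles it internally.
import javax.transaction.Transactional;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("/person")
public class PersonResource {

    @POST
    @Path("/create/{name}")
    @Transactional
    public Person create(@PathParam("name") String name) {
        Person person = new Person();
        person.name = name;
        person.persist();
        return person;
    }
}
```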
So it's a POST method on the path create/{name}, where the name is a path parameter. It creates a Person, sets its name, and persists it into the database. It might be quite surprising that I'm not injecting an EntityManager or anything like that; with Panache I don't need to, it's handled for me. By extending PanacheEntity, my entity class immediately gets a lot of interesting and useful static methods: for example for finding instances of that entity, findAll, findById, listing, streaming, and all that. You don't need to play with an entity manager; you just use the entity class itself. So let's give this a try. I will create a person named Joe; yes, it needs to be a POST. It created a person named Joe. Okay, let's create another REST method that gets a person by name. You can see I'm using the static find method that the Person class received by extending PanacheEntity: I want to find the person where the field called name is equal to the name parameter of my REST method. So let's try to retrieve Joe: instead of create, I call get/Joe, as a GET request. What happened here? No entity. Of course: the application reloaded in the meantime, because I changed something in the code, and since I configured Hibernate to drop and recreate the schema on each reload, my entity got lost. So I create Joe again, and now I can get Joe, and you see I really retrieved a person with ID 1. That ID is a field provided to me automatically; I didn't have to specify it. Of course, if you want, you can designate your own field as the primary key.
But if you don't, you get an ID field generated automatically. Okay, now another method that retrieves all persons: it's on the path all and calls the static find-all method, again from Panache, our extension on top of the JPA API, returns the result as a list, and sends it to the client. Of course I have to create Joe again, and maybe Alice, and now I can get all the persons; you can see it returned two persons.

Okay, we're running out of time, so I'll speed it up. I wanted to show you metrics, but I don't quite have time for that; I'll show you a more important thing instead, and that is native mode. I just stopped the Quarkus development mode by stopping the mvn quarkus:dev execution. I also have to remove the generated test, because I think I broke it and it would not pass. Now, to build a native package of my application, I run mvn package with the native system property (-Dnative) specified. What it's doing now is basically calling GraalVM; you can see it's executing GraalVM and its native-image utility. The application is really kind of starting at build time, not fully, but whatever is needed to initialize it, and once most of the application is started and initialized, that initialized state is serialized into the native binary. For this simple application this takes about a minute or so, so it should be ready quite soon, I hope. Let's wait a bit. It does some aggressive optimizations and dead code elimination and builds all the necessary metadata into the application, so that you can then run your application very fast. It should be finished very soon. Come on, come on.
This obviously takes a bit more time than a normal build, but once you build it, you will see how fast it gets. And it's done; it took almost two minutes. In the target directory we now have the result, and you can really see that it's an executable for Linux. If you run it, it says the application started in 34 milliseconds and is listening on its port. Now I can just create Alice, create Joe, and retrieve Alice and Joe, and it all works normally, except it's really a native binary and it starts really fast. If you're interested in how big the binary is: about 59 megabytes. It took about two minutes to compile, but once you start it, it's really, really fast. I wanted to show you metrics, but I suppose we're running out of time, so let's do questions instead. Are there any questions? Yes?

Question: do I have to keep the database running during the native build? No, I don't think I have to keep it running. The build will scan the entities and build the necessary metamodel of the entities and such, but it will not execute any queries against the database. The application kind of runs during the build, but not completely; it runs the parts that are suitable for this, the parts that can be serialized into the resulting binary, but no more. Yes? You want to see ldd on the binary, sure. The next question is whether you can use reactive drivers for databases. Yes, we support some of them; let's have a look at code.quarkus.io. You can see in the options you can choose for your application: a reactive MySQL client, a reactive PostgreSQL client, and SmallRye Reactive Messaging with connectors for AMQP, Kafka, and MQTT.
Some of these things are in preview; they are not yet considered stable, but you can already use them. Yes? The difference between the size of the jar and the native binary: the difference is mostly because the native binary contains all the necessary parts of the JVM runtime. Yes, it will always be a bit bigger; in this case over 50 megabytes. Mind you, the jar I built here is the plain jar without dependencies, which is like six kilobytes; if my application were a bigger jar, the binary would of course be correspondingly bigger. There's a fixed cost that's added statically on top. Any more questions? Yes. In dev mode, if I update the configuration, will it also reload? Yes, it reloads on code changes, configuration changes, and POM changes, and it can reload on more things depending on the extensions you use: an extension can declare that some particular file, say a specific configuration file for that extension, should be watched for updates as well. Yes, the memory footprint of dev mode: it's obviously a bit more than running in regular JVM mode, because of the scaffolding involved, and each reload basically discards the class loader that loaded the current version of the application, creates a new class loader, and loads it again. So yes, the memory footprint will be higher in dev mode, but once you compile to a jar or a native binary, you get rid of that. Yes, performance differences: in native mode it's of course different, because we're not in a JVM anymore; we're not interpreting bytecode, we're running heavily optimized, native, platform-dependent code.
So it usually should be faster in most cases, unless there's some kind of issue; if it's not faster, that's probably an issue that should be solved. Okay, I think we're out of time. So thank you, everyone.