So, the topic that my colleague Dinkar and I are going to cover today is zero to hero in Kubernetes-native Java, and how it is changing the landscape of the software industry. The synergy between Java and Kubernetes has led to this term you hear: Kubernetes-native Java. How many of you knew about this one? Okay, very good, just one. When I say zero to hero, I mean that by the time you walk out of this session, you will have a fair understanding of how Java has evolved, and my next few slides talk about how it has evolved to meet the current landscape. We also have a demo around it, okay? So, let's go ahead. All of you know that Java came up in the early 90s, right? And Java was brought out to meet a certain kind of requirement. In that world there were no containers, no Kubernetes, nothing. It was plain three-tier or n-tier architecture, where enterprises wanted to run their mission-critical applications, and Java was the language that evolved for that. Over the years, what we have seen is something very similar to the Cambrian explosion of biological diversity: just as new life forms appeared then, in the landscape of software development we saw a population of new languages, frameworks, and architecture patterns come up, the evolution from monoliths to microservices, any number of things. And today we stand at a point where containers and Kubernetes are the buzzwords, digital transformation is the buzzword, and all of us are talking about it.
Now, before I move to the next slide, I want to tell you that a Java setup used to mean very expensive hardware, very expensive software, and the annual maintenance of that software was also very significant. End to end, the total cost of infrastructure was too high, and to support it we built complex middleware, web servers and application servers, to run these applications. But with changing times and the arrival of cloud providers, the cost of entry for small and medium businesses came down from something on the order of half a million dollars in the early days of Java to close to zero. Now you can understand how things are changing, right? Close to zero. So Java's relevance is under question: why should we spend so much on procuring and running an application when cloud providers are showing us a way where the cost is close to zero? To prove my point, I have this slide, which is a study of serverless developers and the languages they are choosing to develop in. When I talk about serverless, I am talking about low latency, highly responsive applications, faster time to market, having that block of code execute whenever the need comes up. And there you see Java at just six percent; it is clearly not at the top among these high-performing developers who are tuned to the market. Given this, we have to understand that Java was designed to make the most of network throughput.
It was not designed to think about what resources it consumes, how much CPU, how much disk; that was not the thought process. The thought process was maximum throughput. And it was designed to be long-running. If you were around in the early days of Java, people expected critical applications to run for months or years together without restarting. Restarting an application was a total no-no; nobody even considered it. If you were rebuilding or changing something in the application, it was perfectly fine to wait ten minutes; the operations team was fine waiting ten minutes for the application to restart, and the developer team was okay with it too. But in current times this is not something we are willing to do; we are not okay with that kind of startup time. Another proof point in my talk is what you see on this slide about newer languages and their performance, languages like Golang or Node.js. You can see very clearly that Java consumes a lot of resources, especially on a containerized platform, and as you move from left to right, the resource utilization is much less. Now, having said all this, you will think: oh, is she saying Java is dead? I have been hearing "Java is dead" for so many years now, but is it really dead? If it really were, you would not be sitting here. There has been a reinvention of Java. The moment you do a search you will see so many blogs and studies, and I have picked from many of them, where people have gone ahead and said Java is done, you cannot have Java in your enterprise anymore.
But it is not really dead, and in that journey Red Hat has played a very big role. Red Hat leads new-age Java through open source. We have contributed from Red Hat to multiple open source projects: Temurin, where high-quality builds of the JDK are produced for use in enterprises; Cryostat; GraalVM. We have contributed garbage collectors with very low pause times so that collection can run concurrently with the application. So in the Java runtime space, Red Hat has contributed a lot to all these projects, ensuring that consumers and users of Java continue to get the best-of-breed technologies built around Java. When I look at specifications, Jakarta EE, MicroProfile, and Hibernate, here too Red Hat engineers have contributed in a big way, ensuring that Java finds its place in the modern world of containers, Kubernetes, and microservices. Similarly, in the cloud-native framework space, and this is the topic of our discussion today, Quarkus, Red Hat continues to be a major contributor. And just to add to this, we have a product in our portfolio called the Red Hat build of Quarkus, where we support the build of Quarkus that we provide to our customers. Now, to categorize it correctly, I have grouped the support Red Hat gives to the Java runtime. Multi-platform support: Red Hat contributes in a big way to 64-bit ARM, so that Java can be used across different kinds of deployments, on mobile, on IoT, on edge devices. Everywhere, Java finds its place. Regarding security, again, a big thing is FIPS, right?
It's a big thing in government and in banking, and our components like the Red Hat build of OpenJDK and the RHEL UBI images are compliant. Most of our products are FIPS compliant so that they can find their place in highly regulated industries like banking, the government sector, or defense. Similarly, performance: we have contributed to projects which give that kind of edge, like garbage collection with very minimal pause times in the collection cycle so that it can run concurrently with application processes. And quality assurance: we are part of the Eclipse Adoptium community, whose AQAvit quality assurance program ensures that the features required for Java to be adopted in a microservices or cloud-native world are verified. So these are the capabilities through which Red Hat is contributing towards the adoption of Java in the newer world, the world that is looking towards microservices and digital transformation, that is talking about containers, low latency, and highly agile environments, okay? Now, this brings me to Quarkus, where we are talking about a truly cloud-native, Kubernetes-native Java. It is cloud native, as you can see, and highly optimized, with a very low footprint. We are going to show you a demo of the key features Quarkus brings, and why it is widely adopted in serverless architectures, what makes it the default choice for serverless. And in building Quarkus, everybody has contributed.
All these projects you see, Vert.x, Hibernate, RESTEasy, WildFly, have contributed to optimizing their implementations so that Quarkus is able to meet the current requirement of a reinvented Java, where Java can fit into the newer setup. Now, from a developer perspective, the first thing to understand is that Quarkus has been built from day one with the understanding that it would be a framework for the cloud, for cloud computing. From the very start it was meant to be cloud-native Java. And to add to that, Java still ranks among the top three programming languages, with a developer base of close to 10 million, so for Java developers the learning curve is very small. The optimizations you see, the different frameworks, the new things that have come up in Quarkus, ensure that it continues to give rock-solid performance. And the toolchains we have, the IDEs, the Dev Spaces, whatever we are talking about, are built to utilize Quarkus features to the maximum so that developers have the best experience, whether it is CI/CD or, as you heard in the morning session, the inner loop: how the developer is able to utilize the various components available for a very agile, fast, and intuitive way of development. And there is a huge community available for Quarkus to help developers, with code snippets, examples, everything. So for development with Quarkus, these are the features available up front to everybody to get started and adopt Quarkus for a cloud-native environment.
And before I move to the actual demo, I also want to talk about the business value, right? The DEVIES in 2020, a forum which gives awards to new technologies, recognized Quarkus as a top new framework. And why did it get that? Because Quarkus showed a few things. For the business, it is the cost saving: low memory, fast startup, cloud efficiency, and a very low learning curve, because the 10 million Java developers out there can pick up Quarkus very quickly. The cost saving is very high. And I showed you the density earlier, on the slide comparing traditional Java applications with the new-age frameworks and languages; you saw how much higher the density is with Quarkus. Faster time to market: developer productivity is very high, and the ecosystem I showed you on the earlier slide, together with the low learning curve, keeps you very competitive. The business will be able to meet its IT requirements very quickly: whenever a change is required, the learning curve is small, the tools are there, and they keep you in a very agile mode so that you can meet the business requirement in the shortest possible time. And reliability, again, is very important; on the next slide I will talk about how Red Hat ensures it, and I already spoke about UBI, the Red Hat security and FIPS-compliant components we have built into our builds, which ensure that the reliability factor is also very high when you use Quarkus in your environment. So from the business perspective, the value Quarkus brings is around cost saving, reliability, and faster time to market. Now, a few things from the developer perspective. It is a very cohesive platform, based on standards but not limited to them.
In developer parlance, you normally do either reactive or imperative, but here you can do both reactive and imperative together in one application. You have live reload, which is another thing Java developers look forward to, and along with others these features make it very attractive for a developer. The memory utilization and the performance for REST operations: you can see how low they are with Quarkus and GraalVM. Quarkus with GraalVM starts in as little as 0.14 seconds, compared to 4.3 seconds for a traditional cloud-native stack. For REST plus CRUD operations, you see how small it is: 0.055 seconds. And we are going to demonstrate the performance in our demo, how the startup time improves many-fold when you are using Quarkus. So you will see the live reload, the higher performance, the faster startup time, and also the combination of imperative and reactive together in one application. This is another feature which is very crucial: reactive messaging with Apache Kafka, MQTT, AMQP. All these possibilities give a very big edge to a Java or Quarkus developer. Again, Quarkus uses the best-of-breed frameworks and standards; you have the whole list. There is a Camel extension for Quarkus, there is MQTT as I mentioned, there is MongoDB support, and then RESTEasy, Hibernate, MicroProfile, all of these are supported. Best-of-breed frameworks and standards are supported in Quarkus. Now, when and where are you going to use Quarkus? First, when you are developing net-new applications or net-new microservices: if you are developing any microservices-based architecture, then given its features, Quarkus should be your choice.
Serverless, on-prem or on-cloud: you should look at adopting Quarkus there, and also to avoid vendor lock-in. But don't look at Quarkus for a monolith-to-microservices migration; look at Quarkus for developing newer things, newer applications. And if you are developing IoT or edge applications, look at Quarkus for those as well, okay? What you see on your screen are other technical use cases where Quarkus can find its usage in a big way and can benefit the organization, as well as give the developer a lot of ease and joy in adopting it. Now, having said this, where do we see Quarkus? Many of you, when I started, had not heard about Quarkus. But Quarkus is adopted in a big way: at Lufthansa, in their repair center, their application is running on Quarkus. Asiakastieto, a financial risk evaluator, has developed all their newer applications on Quarkus, moving away from their traditional stack. And so has Vodafone: Vodafone Greece used to use Spring Boot, and now they use only Quarkus for all their newer development. So Quarkus is finding its way into big organizations like the ones we have listed here, and many more which are actually using it but which we have not put here because they are not referenceable at this point. All right, so now I'll hand over to Dinkar to give you the demo. So, before I start the demo, how many of you folks here knew about Quarkus before you came? What I have here shows how we can get started with Quarkus. The first step is to go to quarkus.io, where you can download a Quarkus build; this is the CLI, and you can install it on your machine, just to get started and make things easier for yourself. Not only that, if you just go here, you can also start coding.
If you click on that, you land here, where we have a Spring-style configurator that you can use for creating a new application. So let's say you want to start a new application: you choose what kind of application you want it to be, RESTEasy Reactive, for example. There are hundreds of extensions available here, so you can choose them and it will create a boilerplate application for you with all of those extensions pre-wired. So let's just dive right into it. I hope everybody can see this. I have the Quarkus CLI installed on my laptop and I'm going to create a new application. The way you do it is with the command `quarkus create app` followed by whatever name you want. Okay, before we go further, just for fun, let's also time how long it takes to create an application and get it up and running, just to put a little more pressure on the demo gods. So I now have a directory called quarkus-demo; let me open VS Code and look at it. You can see that under the source it has a few things: it has already created a Docker folder with a set of Dockerfiles, it has a Java source, just a hello-world kind of thing that says RESTEasy Reactive, there is a test associated with it, and there is a target directory. Now if you look into the pom.xml, okay, this is something I usually forget: I am on the bleeding edge of Java, which is 21, but sometimes it doesn't work so well, so I'm going to go back to 17. How many of you are using Java 21 here? Oh, one hand went up, nice. Okay, anybody on Java 7? Okay, that's good. So if you look at the pom.xml, it has pulled in all the dependencies we would expect to see in a Kubernetes-native application.
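The getting-started steps just described boil down to two commands. This is a sketch assuming the Quarkus CLI is installed locally; `quarkus-demo` is the application name used in this demo.

```shell
# Scaffold a new app: generates pom.xml, Dockerfiles,
# a sample REST resource, and a matching test
quarkus create app quarkus-demo
cd quarkus-demo

# Start dev mode with live reload on http://localhost:8080
quarkus dev
```

The same scaffold can also be generated from the browser at code.quarkus.io and downloaded as a zip.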
So it has downloaded JUnit and the RESTEasy Reactive dependency that I mentioned, and things like that. Now let's go ahead: we have a mode here, `quarkus dev`. What this does is create that live loop, the local inner loop we've been talking about since morning. Here you see it was able to bring up the application. Of course, I'm cheating a little because the packages in the pom.xml are already downloaded, but even starting completely fresh, there are two things to note. One is that the build plus application startup took 1.6 seconds. Also, now that the application is running, I just open up the port where it is running, localhost:8080, and it shows that it is up. I go here and it shows "Hello from RESTEasy Reactive", which is what we saw in the source. So let's do one small thing. Before that, how are we doing on time? It took about two and a half minutes so far. Now I'll just change this to, let's say, Dev Nation, so "Hello from Dev Nation". I made the change, I go to the browser and reload, and it immediately comes up. If you go back to the terminal, you see that the live reload took 0.7 seconds. I made the change and saved it, and dev mode picked up the changes, rebuilt only those classes which needed to be rebuilt, relaunched the application, all in 0.7 seconds, and you were able to see it. Note that it was only when I actually reloaded the page that the rebuild was triggered, and that's how we were able to see this. Okay, so now we can see this, and it took about three minutes, right?
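The edit made in dev mode is a one-line change in the generated resource class. As a sketch, assuming a recent Quarkus version (jakarta namespace) and the default class generated by `quarkus create app`, it looks roughly like this:

```java
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        // Originally "Hello from RESTEasy Reactive"; saving this edit is
        // enough, dev mode recompiles on the next HTTP request.
        return "Hello from Dev Nation";
    }
}
```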
So this was just something very simple, nothing fancy, but it gives you an idea of how quickly you can get started with an application and see it running. The next thing: obviously we've been talking about Kubernetes, so there's no fun if it just runs on my laptop; we want to deploy it to Kubernetes. What do we do to make that happen? We need to add an extension. So I say `quarkus extension add openshift`. Keep an eye on the target directory here while I add this. When I added it, the live reload was doing its work in the background, and you can see that it created a new kubernetes directory with a few JSONs and YAMLs in it. What happened was that as soon as I added the kubernetes or openshift extension to this project, it created the Kubernetes deployment YAML for me, so I don't need to worry about that. As soon as I created the project, the Dockerfiles were already there, so I didn't need to worry about those either, and now when I added the openshift extension, it created the OpenShift YAML as well. The thing is, there are lots of new things going on in Kubernetes all the time, and it's an entire world by itself. The question is, do you need to be an expert in everything? Do you need to be an expert in Docker, Kubernetes, Java, and all the other frameworks? Or do you just focus on your business logic and worry about the fastest turnaround in terms of time to market? That's where this is really helpful. So this just added the OpenShift YAML.
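The step just described is a single CLI command; the output paths shown in the comment are the defaults the extension writes to, as a sketch:

```shell
# Add the OpenShift extension to the project; on the next build it
# generates deployment descriptors under target/kubernetes/
# (openshift.yml and openshift.json)
quarkus extension add openshift
```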
So now the question is, how do we deploy it, and where do we deploy it to? What I have here, let me go back to Firefox, is a Developer Sandbox. I'm sure you've been hearing about it since morning: it gives you your own OpenShift instance where you can try things out. Once you instantiate it, I think it's available for a month; at least that's what Yashwant has been telling me. So this is the sandbox I have running. Don't worry about this one; there's some application I already had running there, but essentially it gives me a space where I can deploy my applications to. Now, what I'm going to do is log in to this instance from my laptop so that I can deploy the application I just built onto the sandbox. I go back here and do an `oc login`, which logs in to that particular OpenShift sandbox. Now I'm going to deploy this application, and just to speed things up, I have the command line ready here, which does a Quarkus build and says, deploy to OpenShift. It will do the same thing in terms of building and testing, and then, oops, I forgot one thing: we made a change to the code that said Dev Nation, but I forgot to change the test, so the test failed. The scaffold added a test as well. Obviously, with test-driven development you should maybe start with the test first, but let me make that change. Hopefully this time it should work. Okay, while this is coming up, it's going to take some time for the image to get pushed, so let's go back to the local instance. If you go back to the local instance, there's something called the Dev UI.
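The login and deploy commands used here look roughly like the following; the token and server URL placeholders stand in for values copied from the sandbox web console:

```shell
# Log in to the OpenShift sandbox (token and API URL come from the console)
oc login --token=<token> --server=<api-url>

# Build the app and deploy it to the cluster you are logged in to
quarkus build -Dquarkus.openshift.deploy=true
```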
This Dev UI is a sort of portal for developers to check out everything about the application they're developing: what extensions it has, the configuration, and so on. Quarkus has literally thousands of configuration parameters, and you can go in and edit any of them right from here. It shows the endpoints that exist, and it even has continuous testing; for example, I can just hit start here. Okay, this is still going on while that is going on. It shows that it's all done; let's see how we are doing here. Okay, the build is successful, which means the deployment has happened. Let's go back and check this. You can see quarkus-demo here, the one we just created. Okay, one thing I forgot to do was an `oc expose`, and what did we call this, quarkus-demo. This is going to create a route for me: basically, once you deploy the application to OpenShift, you need a way to access it, and this is just the way to do that. You can see that an open-URL icon popped up here when I did that, and there it is. Here we can see it running on the sandbox that I just created, and if you go here, you have the "Hello from Dev Nation" that we created. So how long did that take? Around 10 minutes. From the time we started fresh with a new project, all the way to pushing it to Kubernetes and having it running, it took just about 10 minutes. Now let's try to do something a little more complex with this application. Obviously it's very simple, so probably the easiest thing would be to add a database and play around with that. For me to be able to do that, the first thing is to add some Quarkus extensions.
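The missing route can be created with `oc`; the service name below assumes it matches the application name, quarkus-demo, from this demo:

```shell
# Expose the service so the app is reachable from outside the cluster
oc expose service/quarkus-demo

# Print the generated route and its URL
oc get route quarkus-demo
```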
To make it easy, there is an extensions view already available that helps you pick the right extension. Here I want Postgres, so I'm going to pick the reactive PostgreSQL client, because the default project we created was a reactive one, and then we'll add Panache reactive as well. So let's add these two. It's updating the pom.xml to add the two new extensions. Just to be on the safe side, I'm going to do a `quarkus build` to make sure all the changes in the pom.xml are picked up. And while that is happening, I think you might already have been talking about Podman today. What happened here is, you can see I have Podman Desktop running, and it says it has downloaded a Testcontainers container and a PostgreSQL container; PostgreSQL has been running for 37 seconds. What happened was that when I added the Postgres extension, Quarkus detected that I didn't have a PostgreSQL container running, and because I have Podman Desktop running, it was able to pull down the test container and a Postgres instance at the same time. And now I have a Postgres instance that I can play around with, with just minimal effort. Okay, now that that is in place, let's add a new class here, person.java. Just to speed things up, I have the code ready. This is actually very simple, let me just talk through it: I've added an entity, and the Person class extends PanacheEntity, which provides most of the boilerplate code, like the getters and setters, and also additional helpers, for example the `listAll` we are going to use. That kind of boilerplate is generated with the help of the Panache entity.
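The entity just described looks roughly like this sketch, assuming the reactive Hibernate Panache extension and the jakarta namespace; the field names follow what the demo displays (name and Twitter ID):

```java
import io.quarkus.hibernate.reactive.panache.PanacheEntity;
import jakarta.persistence.Entity;

// Public fields are idiomatic here: Panache generates accessors at
// build time, and PanacheEntity supplies the auto-generated id plus
// helpers such as listAll(), findById(), and persist().
@Entity
public class Person extends PanacheEntity {
    public String name;
    public String twitter;
}
```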
Now I'm going to go to the greeting resource and create a new endpoint; let me just copy it here. What I'm doing is creating a new path, /hello/people, where I'm returning a JSON type, and it returns the contents of what I have here: in this Panache entity I'm keeping the names and the Twitter IDs of the people I'm adding, and that is what gets returned, with a `Person.listAll()`. Okay, I think I'm missing something here, just a second. Yeah, I need to choose this import; let me add it, that should help. Now that I have this, let's see if it's still running; let me do `quarkus build` again. What I can do now is just test the application to see if everything is in order. That should bring it up. So it is talking to the Postgres database that we added, hold on, what is this? Yeah, I think I forgot one more thing: because I'm returning a JSON object, I need to add another extension, the RESTEasy Reactive JSON-B one. Okay, hopefully that should help. Let's go back and do another `quarkus build`, which should bring up the application. I didn't strictly need to do a build, but I've noticed that sometimes when I update the pom.xml I need to pull down the resources, so just to be on the safe side. Come on. Okay, now that this is up, let's go back here. What's going on here? Okay, yep, it's up, and now if I go back to the UI, go to hello, and then to people... obviously there's nothing in the database yet, so it's not returning anything. So let's add something: let's add a new file called import.sql in the resources, and I'm going to go back to my editor and add some contents into the DB here, which get inserted.
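The new endpoint looks roughly like the following sketch; with the reactive Panache entity, `listAll()` returns a Mutiny `Uni`, which RESTEasy Reactive resolves before serializing to JSON (class and path names follow the demo):

```java
import java.util.List;

import io.smallrye.mutiny.Uni;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/hello")
public class GreetingResource {

    @GET
    @Path("/people")
    @Produces(MediaType.APPLICATION_JSON)
    public Uni<List<Person>> people() {
        // Asynchronously fetches every Person row; the JSON-B extension
        // added in the demo handles serialization of the result.
        return Person.listAll();
    }
}
```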
So I'm just adding names and Twitter IDs. Now that this is done, I think I'm missing one more thing: in the application properties, I need to mention that it should read from this file, so let me just add that. I think this is what I had missed earlier; I had also not created the route on OpenShift, so I can add that one, and also make sure that the import.sql is referenced here as well. Okay, so now that those things are set up, I can go back and do a `quarkus build` again, just because I made a change in the configuration. Okay, so if I now open this again, or maybe just reload it, it is able to pick up from the DB and show you the contents. So again: just adding a DB, populating it with some contents, and getting that quick feedback loop; in just about 20 minutes we got here. Deploying this to OpenShift takes a little longer, because creating a database on OpenShift and connecting this instance to it is a bit more work. And since we have only about three minutes left, I have an instance of this already running, so I'm going to just show you the changes I made rather than redo something that's already running. So what are the changes needed to run something like this? This is another instance of the same application, but with all of the proper configuration done to be able to run on OpenShift. If you look here, I've added a number of parameters. The `%prod` prefix says that a property is only meant for the production profile, the OpenShift instance, and not for local runs. So I have a number of configuration parameters like the database kind, which is PostgreSQL, the URL where I can access it, and the user ID and password.
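The two files added in this step look roughly like the following; the row values are illustrative placeholders, and the property names are the Quarkus Hibernate ORM defaults:

```sql
-- src/main/resources/import.sql: seed rows loaded at startup
-- (with explicit ids, the entity's id sequence may need adjusting
-- before persisting new rows)
INSERT INTO person(id, name, twitter) VALUES (1, 'Alice', '@alice');
INSERT INTO person(id, name, twitter) VALUES (2, 'Bob', '@bob');
```

```properties
# src/main/resources/application.properties
quarkus.hibernate-orm.database.generation=drop-and-create
quarkus.hibernate-orm.sql-load-script=import.sql
```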
Those, obviously, we don't want to hard-code, so they are passed in as secrets, and you need to create a config map for that. So I'm saying: use this particular PostgreSQL secret, Kubernetes config is enabled, and that is where the secrets are stored. Those are the configuration parameters that need to be added, and once you do that, the command to run it remains the same as before: you just do a Quarkus build, and you can see right at the bottom, quarkus.openshift.deploy=true, the same thing we ran earlier. Of course, as I mentioned, there were a few more things I had to do. I had to create this PostgreSQL instance. How did I do that? I went here to "Add database"; this is all in the Developer Sandbox, so once you have created your own sandbox, you should be able to do all of this. Here, PostgreSQL is what you need to choose; then you do an "Instantiate Template" and fill in some parameters, for example your own username and password, and the database name. In our case we were using "quarkus", so I had to change this. That's all that needs to be done on the OpenShift sandbox side. Once you do that, you get an instance that comes up on OpenShift just like this, and you can see that it's up and running with all of the changes, in terms of the username and password and so on, that have been set. A config map has also been created. If I scroll down all the way... okay, I've got too many instances running here... here it is: QDemo. This is the one where I have a few of the config parameters saying, you know, pick the values from here.
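Hedging on exact property names (they vary by Quarkus version), the secret wiring described above is along these lines: the credentials live in a Kubernetes Secret rather than in the properties file, and the kubernetes-config extension reads that Secret at startup. The Secret name and environment variable names below are assumptions:

```properties
# Illustrative: read DB credentials from a Secret named "postgresql"
# via the quarkus-kubernetes-config extension.
%prod.quarkus.kubernetes-config.secrets.enabled=true
%prod.quarkus.kubernetes-config.secrets=postgresql
%prod.quarkus.datasource.username=${POSTGRESQL_USER}
%prod.quarkus.datasource.password=${POSTGRESQL_PASSWORD}
```

With that in place, the deploy command itself stays the same as before, e.g. `quarkus build -Dquarkus.openshift.deploy=true`.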
And this is where you set up, for example, how you want to share the user ID and password between the PostgreSQL instance and your application: you create a config map, feed in all of those values, and say that this is what will be picked up. Once you have made these changes, you should be able to deploy the application, and it comes up on OpenShift. So if I go back to the topology, this is the one, QDemo; this is the version which is now connected to the DB. Let me go back to the same instance. Here you can see that in this particular case I was storing the location and not the Twitter ID, as I had shown earlier in my local app; this is on the sandbox, and the name and the location are being shown. The last thing I wanted to leave you with is the importance of setting up metrics. I don't have that running here, so let me just start it: quarkus dev. It's very easy to attach metrics to the application, and it's really important that you do, because there was a session this morning which talked about observability, and the better the observability, the more you can do with it: machine learning, right-sizing, and so on. One thing I really wanted to leave you with, and this is actually a shameless plug, because this is what I work on: we have Red Hat Insights. I don't know how many of you use that. How many of you use OpenShift? Oh, just one hand, okay. If you have OpenShift, then you also have free access to Red Hat Insights. It's a preview feature right now: what we do is capture the metrics of the application you are running, monitor it over a long period of time, and give you right-sizing recommendations.
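The config map mentioned above (shown in the demo as "QDemo") might be shaped roughly like this; the name and keys here are illustrative, and the credentials themselves stay in a Secret, not in this object:

```yaml
# Illustrative ConfigMap holding the non-secret settings the app reads
# through the kubernetes-config extension.
apiVersion: v1
kind: ConfigMap
metadata:
  name: qdemo-config
data:
  quarkus.datasource.jdbc.url: jdbc:postgresql://postgresql:5432/quarkus
```

The application and the database then agree on connection details through this object, while the Secret supplies the username and password.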
So for example, currently the application in this particular case has been deployed with, say, 300 millicores, but we are giving a recommendation here that says you just need 20 millicores and 5.8 MiB, instead of 300 millicores and 512 MiB. Why is this important? Because it helps you make a lot of cost savings. And at the same time, we provide different kinds of recommendations, whether cost-optimized or performance-optimized; depending on which recommendation you choose, you can apply that configuration and it will give you that kind of result. So that's what I wanted to leave you with. Are there any questions? [Audience question] What I was facing was this: in a microservice which does not contain any endpoint, but which contains database connections and everything, how do we test a microservice like that using integration testing? Essentially, integration testing is black-box testing, so we need to have an endpoint, but everything is closed. Yeah, so in this particular case, of course, we have set it up so that there is an endpoint we can pull from. You would need to do a little more work in your case: you need some pass/fail criteria, right? In this particular case it's effectively a unit test. So just like your regular unit testing, you need to figure out some criteria: when you run the application, maybe it prints something to the log, or maybe it creates a file, or there's some other way to figure out whether it is doing what it is supposed to be doing. You need to create those test cases and then feed them into this framework. All this is doing here is running the tests, that's all.
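Coming back to the right-sizing recommendation above: one way to apply such a recommendation declaratively is through the OpenShift extension's resource properties. The property names and values below are an assumption on my part, not something shown in the talk:

```properties
# Illustrative: apply a right-sizing recommendation as container resource
# requests/limits via the quarkus-openshift extension. Substitute the
# values from your own recommendation.
quarkus.openshift.resources.requests.cpu=20m
quarkus.openshift.resources.requests.memory=64Mi
quarkus.openshift.resources.limits.memory=512Mi
```

Rebuilding with these set bakes the requests/limits into the generated Deployment, so the cluster scheduler can pack workloads more efficiently.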
This test happens to just do a curl of that endpoint and check whether the response matches what's expected; that's pretty much all it is doing. In your case, the same framework will call the tests that you have; you just need to write the appropriate test cases, that's all I can say about that. Okay, I don't know if you have something concrete right now; maybe we can take that offline, it sounds like something very specific. [Audience question] Another doubt I have: this is a cloud-native application, right? So any kind of Java reflection that we have, runtime class instantiation, those things, are they supported by Quarkus? Yeah, so in our case I did not show that particular example here, but we do have examples for the case you mention. The thing with Quarkus is that it does build-time optimization: the difference between normal Java and Quarkus is that Quarkus resolves all of the dependencies at build time and packages only the relevant pieces, so that when you run it, everything comes up quickly, especially when combined with GraalVM, for example, where you can create a native binary, just like a C program. But in your case, what you're saying is that you want to inject a new class which may not be available at build time. There are some things there which, I mean, I can't say are 100% supported, but there are examples where we handle that as well. Any other questions? Okay, I think we are done.
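On the integration-testing question above, a minimal sketch of an endpoint-level Quarkus test is shown below. Class, method, and path names are illustrative; this assumes the `quarkus-junit5` and `rest-assured` test dependencies are on the classpath:

```java
// Illustrative sketch of a Quarkus endpoint test.
import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;
import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.containsString;

@QuarkusTest
public class PeopleEndpointTest {

    @Test
    public void peopleEndpointReturnsSeedData() {
        // Boots the app against a test datasource and "curls" the endpoint,
        // much like the test framework described in the answer above.
        given()
          .when().get("/hello/people")
          .then()
             .statusCode(200)
             .body(containsString("name"));
    }
}
```

For a service with no HTTP endpoint, the same `@QuarkusTest` harness still applies; the assertions would instead check logs, produced files, or injected beans, as suggested in the answer.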