I want to show you a simple example we call "side by side." The concept is straightforward: you take a Node.js application, a Java application, and maybe a Go, Python, or .NET one; it doesn't really matter what they are. In this case, I have Node.js, Spring Boot, and Quarkus deployed side by side as Knative Serving services.

If I come over here, I can run `kubectl get pods`, and if I run `kubectl get ksvc`, you'll see there are three Knative services deployed, each with its URL. If you had access to the internet at this moment, you could invoke those URLs and watch these pods come to life. All of this is driven by custom resources: if I run `kubectl get crds | grep knative` and look at the extra custom resource definitions Knative has added to my cluster, that's how this magic happens.

If I look at my OpenShift console, under Installed Operators you can see I have a bunch of them, like Kafka and Elasticsearch; I also have Istio, Jaeger, and Kiali, collectively known as the service mesh. But the Serverless operator, the Knative one, is the one we're talking about right now. It gives me the pods in my knative-serving namespace, and it means I can now deploy Knative Serving components to the system, and they get this dynamic autoscaling capability out of the box.

So let's go ahead and do this. Let's come back over here to `kubectl get ksvc`. There we go. To prove my point, let's highlight one of these URLs right here and curl it. Watch it come to life. There it comes. You'll notice my curl is waiting; it's waiting for that Quarkus application to come up. Knative dynamically scales up that pod, and once the app server comes to life, it responds to my curl request.
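The command-line steps above look roughly like this; the service URL is illustrative (your cluster generates its own), and the output depends on what you have deployed:

```shell
# List the Knative services and their auto-generated URLs
# (ksvc is the short name for services.serving.knative.dev).
kubectl get ksvc

# See which custom resource definitions Knative added to the cluster.
kubectl get crds | grep knative

# Invoke a service. If it is scaled to zero, this first request
# triggers a cold start, so curl waits until the pod is ready.
curl http://quarkus-demo.side-by-side.example.com   # hypothetical URL
```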
And you can see the example responds with "aloha." Let me try my Node.js one and curl it; there's Node.js coming to life. Notice my Quarkus one is terminating now, and that is because I am no longer touching it; we have this set to downscale aggressively. The beauty of Knative is that as long as you continue to interact with an application, it keeps the pod alive, and as soon as you stop interacting with it, Knative downscales it automatically.

I have a little script down here, basically a curl in a while loop, that will keep a service alive. Let's use it to bring the Node.js one back to life. Then let's look back over here at our Spring Boot one and curl that to bring it up too. You'll notice I'm doing all of this from the command line, but I could just as easily have done it from the graphical developer console, where you can see these services coming to life, Spring Boot right there.

If I wanted to add a new component, I could click Add, choose Container Image, and use one I've used before, `hello-openshift`, as an example. I'll add it to this app, or leave it unassigned, give it a name (I'll call it "two," because I've done this before), and mark it as a Knative service. That automatically gives me the dynamic autoscaling out of the box, and it also gives me an exposed ingress, in this case what we call a route in OpenShift. If I hit OK there, you'll see it start coming to life as well. So there you can see my running Node.js, my running Spring Boot, and my running Quarkus applications.
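The keep-alive script mentioned above is essentially just a curl inside a while loop. A minimal sketch, assuming a one-second polling interval (the function name and URL are mine, not from the demo; a count of 0 means loop forever):

```shell
# keep_alive: poll a Knative service URL so the autoscaler never
# sees it go idle, and therefore keeps at least one pod running.
keep_alive() {
  local url="$1"          # the Knative service URL to keep warm
  local count="${2:-0}"   # number of requests to send; 0 = forever
  local i=0
  while [ "$count" -eq 0 ] || [ "$i" -lt "$count" ]; do
    curl -s "$url" > /dev/null || true   # ignore cold-start hiccups
    i=$((i + 1))
    sleep 1                              # one request per second
  done
}

# Example: keep the Node.js service alive indefinitely.
# keep_alive http://nodejs-demo.side-by-side.example.com
```

As soon as you stop the loop, the service goes idle and Knative scales it back to zero.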
Let me come down here and make sure my curl loop is running for Spring Boot to keep it alive, because otherwise, again, Knative will auto-scale it down when nothing is touching it. You can tell Knative is involved here too: see the 2/2 in the pod's ready column? That's because the Knative sidecar has been added to that pod. Look for that 2/2; if you have a 1/1, you probably deployed it as a normal image, a normal deployment, not a Knative service. There's a lot to be said about how Knative services work, but you can see how I dynamically deployed one right from within the console.

Here's this one here: I can use the URL to bring it to life. Clicking that exposed URL in the browser, just like the curl command, brings it up, and it says "Hello OpenShift." If I come back over here, you can see it's a nice blue, meaning it's up and running now. The beauty of Knative is that in the serverless world, things go up and down.

The last thing I want to show you: let's double-check that these things are up and running. We have our Node.js we're interacting with, we have our Quarkus, we have our Spring Boot, and you can see they're all running right here, including Hello OpenShift, which, as I said, I just invoked from the browser. Let's also look at the OpenShift console, which has Prometheus and Grafana built into it for monitoring. We're going to look at our Grafana console for the last five minutes; you can see we're on the Prometheus data source, under side-by-side. What I want to show you is the CPU quota and memory usage, so let's look at memory down here. Our Spring Boot application is currently using 120 MB of RAM, the Quarkus application is using 38 MB, and the Node.js application is using 45 MB.
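Under the covers, each of these services is just a Knative Service resource. A minimal manifest might look like this; the name, image, and the aggressive scale-down window are illustrative assumptions, not taken from the demo:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: quarkus-demo                  # illustrative name
spec:
  template:
    metadata:
      annotations:
        # Shorten the autoscaler's stable window so the service
        # scales to zero soon after traffic stops (default is 60s).
        autoscaling.knative.dev/window: "30s"
    spec:
      containers:
        - image: quay.io/example/quarkus-demo:latest  # illustrative image
```

Applying this with `kubectl apply -f` gives you the same result as the console flow: a revision, a route/ingress, and scale-to-zero autoscaling.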
Oh, my 45 moved around, but Node.js is using about 45 MB. This should be interesting to you right away: our Quarkus Java-based application is smaller than the Node.js application, and it runs faster as well, because it's compiled to a native binary using GraalVM technology. That's one of the powers of Quarkus: the same programming model as you would have in Spring Boot, but Spring Boot is sitting at 120 MB of RAM. When you're building serverless applications, you definitely want to be thinking about their performance characteristics. Is it super small? Is it super fast? That's why we call Quarkus "supersonic, subatomic Java." But that's my quick demonstration of side by side. I'll provide the link in the notes below to the actual application, so you can try it on your own OpenShift cluster.
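As a footnote, the per-application memory numbers shown in Grafana come from the built-in Prometheus. A query along these lines surfaces them (the `side-by-side` namespace is from this demo; the metric is the standard cAdvisor working-set metric, which may be labeled differently on your cluster):

```promql
# Working-set memory per pod in the demo namespace.
sum by (pod) (
  container_memory_working_set_bytes{namespace="side-by-side", container!=""}
)
```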