So hi everyone, thanks for joining. Let me start. My name is Ali. I work for Red Hat as a senior software engineer, working on the Knative project and OpenShift Serverless. Today I'm going to talk about serverless Java in the cloud-native world. I work on Knative eventing, and I've been an Apache committer since 2010 — I'm not very active, but we like Apache, so I wanted to have it there. I live in Istanbul and work remotely.

I'm going to talk a little bit about Knative and then a little bit about Quarkus. These are big subjects — you cannot explain everything — but I will touch on the important points: why Knative and Quarkus matter, and why you can use Quarkus in the serverless world with Knative. I will also do a small demo.

Okay, let's start with Knative. The story is a bit long: the good old monoliths were broken down into smaller components, then came the microservices era, and with it big problems about how to actually do microservices. Then came containers, and there were issues with containers because it was hard to orchestrate them. Then came Kubernetes. And now we are already in the next stage: Kubernetes is cool, but we want better resource utilization and we want to make things easier, even on Kubernetes — which is kind of easy compared to raw containers and bare metal, but deploying and managing your workload on Kubernetes is still hard. Knative helps with all of this.

So here is the definition: Knative is a Kubernetes-based platform to deploy and manage modern serverless workloads. That means it works on any Kubernetes. There is no vendor lock-in, which is a very important thing when you think about the other serverless options. It is serverless on Kubernetes — there are big discussions about the definition of serverless, so I will not go in there. It can run on public cloud or on premise, because it doesn't matter: wherever you have your Kubernetes cluster, Knative will run on it.

And the good thing is it's not fully abstracted. You can still access the Kubernetes layer if needed — you can still have your pods and your regular deployments, and you can actually use them with Knative. That means you can build your system with Kubernetes primitives plus Knative goodies. And it's not just functions. What does that mean? You don't just upload a zip or something; you have the full lifecycle and everything. You can bind your Knative services or sources, for example, to your Kubernetes pods or Kubernetes deployments.

There are two big modules in Knative: serving and eventing. Serving is about running your applications as pods. Your runtime doesn't matter — you can have Java, Python, PHP, whatever you like; as long as it runs in a container, you can run it with serving. There are some requirements — not really limitations — like it should respond on port 8080, et cetera, but as long as you have that, your application runs with serving. And eventing is there to manage your events, in a serverless, event-driven way.
So when you have events coming in, you can send them to serving services so that those can scale up and down, and things like that — that's the most basic usage. And both modules can run standalone.

Okay. So, Knative serving. It supports scale to zero: when there are no requests coming to your service, it will destroy all the pods and scale down to zero. And when there's a huge load, it will autoscale to, I don't know, 100 pods, and help handle the spikes in the requests. It's not just about autoscaling, though. It provides lots of nice things for networking and for managing your applications — your workload, basically. For example, immutable revisions: when you create a service, a revision is created for it, and you can always go back to a previous revision very, very easily. These are deployment functionalities that Knative serving provides. It also provides traffic splitting — this is one of the things I mentioned as advanced networking — and not just randomly splitting, but also splitting the traffic based on the request. One scenario would be: if you want to redirect requests from some specific users to a specific version of your application, for A/B testing or for other reasons, you can do that kind of thing with Knative very easily.

There are some simplifications, though: your application can only listen on one port, there cannot be any persistent volumes (I think they were working on that; I'm not sure what the latest status is), and you can only have a single container. These restrictions are there to make it easier to manage your workload.

I will also talk about Knative eventing. The title of the session is event-driven serverless Java, so I will talk more about Knative eventing. Knative serving you can think of like this: you run your application with it, and it will scale down to zero, or up to 100 if needed, and things like that. Knative eventing is, again, very similar in that it's about running serverless workloads, but this time we are more interested in the event-driven behaviors.

There are basically two parts. There is the core system — Knative eventing's own control plane, which handles generic things like subscriptions — and there are the plug-and-play components, like the Apache Kafka source, the AWS SQS source, the Kafka sink, and things like that. These are technology-dependent components that speak some specific protocol, so you can just take them and put them in your event flow, and you don't have to write your own application to talk to, say, AWS's queueing system.

There are some basic building blocks for eventing — these are generic names: source, channel, subscription, broker, trigger. Source — okay, source for what? I will go into detail on these, but let me start with source.
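To pin down the serving side before we dive into sources: here is a minimal sketch of a Knative Service with a traffic split between two revisions. The service name, image, and revision names are made up for illustration — this is what the rollback and A/B-style routing above look like in YAML:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                        # hypothetical service name
spec:
  template:
    metadata:
      name: hello-v2                 # the immutable revision this template creates
    spec:
      containers:
        - image: example.com/hello:v2   # any container that responds on port 8080
  traffic:
    - revisionName: hello-v1         # previous revision keeps most of the traffic
      percent: 90
    - revisionName: hello-v2         # new revision gets 10%, e.g. for A/B testing
      percent: 10
```

Rolling back is just shifting the percentages back to the old revision.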
So a source is basically very simple: you have a message origin, and you receive origin-specific events from that origin. The source's job is to convert those into a specific event format called CloudEvents, so that Knative can understand what the event is — Knative always talks CloudEvents. Take GitHub, for example: the message origin is GitHub, and normally — when you're not thinking about a Knative source — you would receive events from GitHub on your webhook. GitHub sends JSON, I think; what the source does is take that JSON and wrap it in a CloudEvent, which could also be JSON, or a kind of binary format. And then the source's job is over, because it sends the event to the sink that is defined on the source. So the source is the thing that fetches or receives events from origins, and it gets the events into the Knative eventing system, to your workload.

Some examples: the Apache Kafka source. Imagine the message origin is a Kafka topic; the Kafka source will consume messages from that topic — and the messages in the Kafka topic don't have to be JSON or anything in particular, they could be plain text — and convert them to CloudEvents, which could be JSON or binary, and then they get into your event flow. Other examples are the AWS SQS source, or the Kubernetes API server itself: if you want to use Knative as something to monitor your own cluster, there's an API server source, which emits events like pod created, namespace deleted, et cetera. There is also something called the container source, which is actually useful if you want to convert your legacy applications into Knative applications: you can use any container with the container source; it will receive events from the container and send them into the Knative eventing system, basically.

Okay, so I talked about sources; there are also channels, subscriptions, brokers, triggers, et cetera — there's a lot of stuff. So again, source to sink. We have received the message from the origin, and the source converted it to a CloudEvent; the source is configured with a sink, so it knows where to send the event. An example sink would be a Knative service — a user application running as a Knative service — so that when a lot of messages come from the source, Knative serving knows there are a lot of requests coming for the service and autoscales it up, so you can consume messages at a faster rate. Or the sink can be a plug-and-play Knative sink: you get your message from, for example, Kafka, convert it to a CloudEvent, and send it to Redis — that's the most simple scenario; you would definitely have some processing in between. Or it can be a Knative channel, which I will talk about now.

The channel is another primitive in Knative eventing. What a channel does is support fan-out — that's the most important aspect — and it provides decoupling.
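To make the source idea concrete before moving on to channels, here is a hedged sketch of a Kafka source wired to a sink — the bootstrap server address, topic, and service name are placeholders:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: my-kafka-source              # hypothetical name
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092   # placeholder Kafka address
  topics:
    - my-topic                       # topic to consume; payload can be anything
  sink:                              # where the wrapped CloudEvents are delivered
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-consumer           # hypothetical Knative service
```

The source consumes from the topic, wraps each message in a CloudEvent, and POSTs it to the sink.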
So, when you send events to a channel — the channel doesn't really receive events from an origin like a source does, but you can have a source on the left sending events to the channel — the channel delivers the events to its subscribers. Here I have two events and two sinks (a sink could be anything: a random user application, a Knative serving application, et cetera), and the channel will deliver these events to both sinks.

Some example channels are the in-memory channel, which is not really reliable in a production environment — you send an event and it's just kept in RAM — the Apache Kafka channel, and the GCP PubSub channel, for example. A good example is the Kafka channel. If you use the Kafka channel, unlike the in-memory channel, the events you receive are persisted, so you don't lose events, and you also synchronize your consumers with your producers. What happens in the Kafka channel is: let's say the Kafka topic here already has these three events, and another event comes in. That event goes into the Kafka topic and stays there until it's that event's turn to be processed by the Kafka channel. This is very useful when you have an event producer that generates a lot of events but you cannot keep up with processing them. The Kafka channel knows when the processing on the consumer side is done, because it sends the event to the processor — let me say consumer — and waits until the request returns a 200, meaning it's finished. So it provides this kind of synchronization mechanism: it makes sure the consumer doesn't crash and doesn't have to autoscale to thousands of nodes or something. It will lag, but that's okay. An event comes in, the channel processes the first event first, then the second event, the third, the fourth. And if you have consumers in parallel, it keeps track of where each consumer was last at. What I mean by that is: maybe one consumer — not shown here — is way slower than another consumer; the channel will not make everyone wait until the slow consumer has actually consumed the event.

This is from the Enterprise Integration Patterns book; there's a pattern there called publish-subscribe channel. What happens here is: there's a message coming from the origin, and it is received by a source, but there are also other events coming from other sources — it could even be an in-cluster application, or another source. All of these can be gathered in a channel and then sent to the channel's subscribers from a single point. What this provides is real decoupling between your sinks and all these event sources and event generators. Think of this like a broker or a message bus — but we actually have a better solution for brokers.

This is again another enterprise integration pattern, called the content-based router, and this is how you can do that with Knative eventing. We have a construct called a broker, and brokers talk to triggers. It is very similar to the previous picture, if you have a look.
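Before the improved broker-and-trigger version, here is the channel-and-subscription wiring just described, sketched in YAML — assuming the Kafka channel and made-up names:

```yaml
apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: my-channel                   # backed by a Kafka topic, so events persist
spec:
  numPartitions: 1
  replicationFactor: 1
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: my-subscription              # one subscription per sink gives the fan-out
spec:
  channel:
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    name: my-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-consumer              # hypothetical sink
```

Each additional subscriber just needs another Subscription pointing at the same channel.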
The broker-and-trigger setup is very similar to the publish-subscribe channel, except that it's an improved version of the previous system: with this one you also have filters. So one sink can say "I want to receive only the green events sent by this source", and another can say "I want to receive only the blue events sent by this source". Sinks don't actually talk to the broker; you create a trigger, and the trigger tells the broker to send, say, the blue events to that sink.

Okay, so I found this online, honestly. It's a good example of how you can build a big data and machine learning pipeline. This is the general structure: you have the collection phase, where you collect events from all these different sources — mobile devices, browsers, Twitter, et cetera — and then you send them to, in the AWS case for example, S3 or some other place. Then you prepare the data: you convert it to another format if your computation requires it. And finally you compute, and then you present what you computed. It's a reference architecture, something like that.

If you want to do this with Knative, this is one example — and you do all of this in a serverless way with Knative. In the collection phase, you have a bunch of clients, and these clients send data to, let's say, REST endpoints or MQTT, or your clients can send events directly to Kafka. That's the ingestion phase, and your data lake here is Kafka. When you want to prepare your data, you just get events from Kafka, and Knative serving will make sure the service is autoscaled — even to thousands of pods — if there are too many events coming from the Kafka source. So you send everything to Kafka; the Kafka source fetches the events and sends them to a Knative service.

You can have a Knative sequence here — there's a special construct called sequence. The first service processes or computes something from your raw events; then the second service prepares the events a little bit more — it can mutate the event, or compute something new based on the output of the first service — and then it can send the result to another Kafka topic using a Kafka sink. Eventually you have your data warehouse, where you can present your data with a dashboard, send an email, et cetera.

So what do you write here — or rather, what do you not write? Let me focus on that. You use Kafka, that's for sure; you use MQTT; you write your own REST services. But here is the good part: you don't have to write any custom application to talk to Kafka — neither for fetching events from Kafka, nor for sending your events back to Kafka. And one advantage of this is that you are not locked into Kafka. If you want to switch to, for example — I just put this here randomly — Google Cloud Storage, what you need to do is just change your sources and your sinks, and if you use a channel, maybe you use the GCP PubSub channel, something like that. So here I had Kafka, I decided I want to use Google Cloud Storage, and I just use Google Cloud Storage.
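Going back to the sequence mentioned a moment ago: it could look roughly like this in YAML — a sketch, assuming Kafka-backed channels between the steps and made-up service names:

```yaml
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: prepare-data
spec:
  channelTemplate:                   # channels wired in between the steps
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
  steps:                             # each step is a Knative service
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: compute-step           # hypothetical first processor
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: mutate-step            # hypothetical second processor
  reply:                             # where the final output goes
    ref:
      apiVersion: eventing.knative.dev/v1alpha1
      kind: KafkaSink
      name: my-kafka-sink            # hypothetical Kafka sink
```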
My custom applications — the ones that mutate the events — stay the same. Here I could have a Google Cloud Storage sink; I don't know if one exists, I just want you to understand the basic idea. If something like that exists, I point it at Google Cloud Storage, and the rest stays the same. That is the big idea of the plug-and-play components.

Otherwise, you can talk to Kafka yourself: you can write an application that receives from Kafka topics, you can run all these services with or without Knative serving — that means with or without serverless — and you can write your own application to send events to the Kafka sink. But if you then want to switch from Kafka to something else, you need to write it all again. Also, all these Kafka sources, Kafka sinks, Kafka channels, et cetera are developed in the upstream community, with a lot of eyes on them; they're battle-tested and used in production. So maybe it's a better idea to just use these plug-and-play components instead of writing applications that talk to Kafka.

All these components that talk to Kafka or to other systems — I know about Apache Camel, and you can do a lot of integration there too. But here the advantage is that you do all of this in a serverless way: you get Knative serving's nice advantages, and you use Knative eventing to make everything event-driven and scale to zero, if necessary.

Okay. So that was kind of Knative, and there is also Quarkus. I think I already saw a bunch of sessions about Quarkus in the calendar for the conference. I'm not a Quarkus expert; I will just show how you can use Quarkus in a good way, so that you don't have to learn new stuff to write applications that can run on Knative. So, Quarkus: it's a best-of-breed set of Java libraries and standards, it's Kubernetes-native, and it uses the regular Java stack that many people know. The very important points here are basically two things. First, fast startup time, which is really important if you want to be serverless — cold starts, we don't like them; if you want to scale from zero to 1,000 pods, startup time really matters. Second, the low memory and disk footprint, because, again, if you want to handle things with Knative in near real time, you will need a lot of pods, you will use a lot of containers, and it's very important to have low memory requirements. And there's GraalVM: Quarkus can leverage GraalVM for even lower footprints and even faster startup times, which are ridiculous — really small, really fast.

And Quarkus, just to talk about that briefly, uses all the regular enterprise stuff we have seen over the years: JAX-RS, MicroProfile, dependency injection, et cetera. Here I have a small web application that handles a POST request and produces application/json. It should look familiar if you have already developed a bit of Java, especially Java web applications.

So here I will actually go to my demo. The demo is pretty simple — it is one of the patterns that I mentioned before.
So I have a message producer here, which creates really a lot of events — messages — and sends them to a Kafka topic. My Kafka source will consume events from that Kafka topic and send them to the sink. What we will see is that we will have, like, 10 or 20 pods, and once the message producer is done sending events, these pods will be deleted — the application will autoscale to zero. This is the address for the demo, the GitHub repository for it.

Okay, so this is the message generator. It's not a Java application; I just wrote a small Node application that sends events to Kafka. In this configuration for the demo, it will send 20,000 messages to a topic called kafka-source-demo. And this is my Kafka cluster, at this address. I will not run the message generator locally on my machine; I already created an image, so I will create a Kubernetes batch job for that image, and it will run there. Nothing really fancy here: just send 20,000 messages to this topic.

The more interesting part is the sink. Let me go back: this is the message generator; this is Kafka itself, already deployed on my Kubernetes cluster; the Kafka source I will use works out of the box — I don't write code for that; and this is the sink, which is again my custom application, the Quarkus application. It's a very simple application: it just receives a POST request and produces a JSON output. It receives the event, iterates over the headers, writes the headers, then writes the body. That's all.

Okay, let me start with creating the namespace. I've already built the image for the sink with GraalVM support, so it will start quite fast. Here in the config folder you can see the resources I will create. The namespace: I just created a namespace called my-namespace, and I will create everything in that namespace. This is my sink — the Quarkus application that will receive the events — and I deploy it as a Knative service; this is my image. There are some fine-tuning parameters here for making the autoscaling visible, but these are for demo purposes: in reality, if I didn't define these and just left everything at the defaults, the Quarkus application would consume messages so fast that you wouldn't see any pods scaling up. So I did a little bit of cheating here. In a real-world application you probably wouldn't need a target utilization of 10% — that's just too low — but in the real world you would have a bunch of clients and a lot more events, and then you would see the service scale up.

This is my sink, and this is the Kafka topic — I just use the Strimzi operator and create this Kafka topic. And this is my Kafka source: it receives messages from the kafka-source-demo topic, which I created earlier, and sends them to its sink, which is the sink I created earlier. Oh, well — okay, I hadn't actually created them yet. Let me do that. Sink is created. Topic is created. Source is created. Yeah — and this sink reference is the Knative service that I created before. Okay.
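For reference, a minimal sketch of what such a sink endpoint could look like in Quarkus — not the exact demo code, just the shape of it: a plain JAX-RS resource that takes the POST, dumps the headers (CloudEvent attributes arrive as ce-* HTTP headers), and prints the body:

```java
package org.example.sink;

import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.Response;

@Path("/")
public class EventSinkResource {

    // Knative delivers events as HTTP POSTs to the service's root path.
    @POST
    public Response receive(@Context HttpHeaders headers, String body) {
        StringBuilder out = new StringBuilder();
        // Iterate over the headers and write them out, as in the demo.
        headers.getRequestHeaders().forEach((name, values) ->
                out.append(name).append(": ").append(String.join(",", values)).append('\n'));
        // Then write the body.
        out.append(body).append('\n');
        System.out.print(out);
        // Returning 2xx tells the source/channel the event was processed.
        return Response.ok().build();
    }
}
```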
So I have two things: when I check the pods in my new namespace, I have the Kafka source pod, and I can see the sink. The sink is running — it's only one pod — and in one or two minutes it will scale down to zero, because no requests are coming yet, even though the Kafka source is there. Yeah, it's already terminating. It's terminating because the Kafka source doesn't see anything in the topic, so it's not consuming anything from the topic and not sending anything to the sink. But when I create the message generator, finally, we will see lots of activity in the sink: lots of sink pods created, and something in the sink pods' logs.

So I'll start watching the logs for the sink containers. It's still terminating, but it will be finished soon. Here I'll start watching my message generator pod, and here I'm watching the pods to see the number it will autoscale up to. And now, when I create my message generator — yeah, it is sending messages, and I can see already, I don't know, eight or ten pods. And this part is the sink log: it iterates over the headers and then writes the body. Oh, okay — it just writes "[object Object]". Now the event sending is done; the message generator sent 20,000 events already, and Knative eventing — serving, sorry — scaled my service to eight pods. Yeah, eight pods. And after one or two minutes we will see these all deleted, because no requests are coming for the sink. We can check that later. Yeah, okay, it's already terminating — the time to scale to zero is pretty short.

Okay, so that is basically my demo. The takeaways here are: Knative provides better workload management on top of Kubernetes; it allows things to scale dynamically, including to zero; and there are also some nice advanced networking things supported, all that traffic splitting and such, which I haven't shown. And Quarkus makes a lot of sense to use in a Knative context, because we have super fast startup time and minimal resource usage — and it's great because you just leverage your existing Java ecosystem knowledge.

So that's all for my talk, and I can have a look at the questions. "How can we create filters for content-based routing? In Camel there is a DSL where you can apply any kind of filter. Is there a custom resource for that?" So — where was the broker? Yeah, this filter here, which is inside the trigger. If you check the YAML representation of a trigger object, you'll see a filter field, and there you can define a bunch of stuff. Obviously, you can filter on the event type, because every source creates CloudEvents with a specific type — the GitHub source creates events with a type like, I don't know, GitHub event, something like that — so you can use that in your filter. That's the most basic one. And there is also something called CloudEvents SQL filtering, which is still being improved — well, actually, it already works, it's already integrated, but there were discussions upstream about adding it as a, what's it called, experimental feature. So there will be more advanced filtering options in the future.
And that is exactly a DSL, basically: you can write your filters in a SQL-like language.
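For reference, a hedged sketch of such a trigger — the broker name, event type, and sink are placeholders, and the exact CESQL syntax may vary by Knative version:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: github-events-only           # hypothetical trigger name
spec:
  broker: default
  filter:
    attributes:                      # exact-match filtering on CloudEvent attributes
      type: dev.knative.source.github.push   # placeholder event type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-handler               # hypothetical sink
```

Newer Knative versions add an experimental `filters` field, which is where expressions in the SQL-like CESQL language can go.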