I am Karan Singh, Senior Solution Architect at Red Hat. In this video, I will walk you through a modular, reusable pattern that we have developed as part of the data engineering jumpstart library. This pattern is called Kafka-to-serverless: a Knative serverless function gets executed as soon as a new message is written to a Kafka topic.

To understand this better, let us assume a scenario. Suppose you want to architect an event-driven order processing system. You have developed a Kafka producer application that generates CloudEvents messages and ingests the data into a Kafka topic. The Knative Kafka event source detects each new message and, with the help of Knative Eventing, triggers a Knative Serving serverless function, which does the order processing, whenever a new order lands in Kafka. This is how you can design an event-driven serverless application.

Let us now see this in practice. To begin with, clone the jumpstart library GitHub repository and browse to the Kafka-to-serverless pattern. Next, to implement this event-driven architecture on OpenShift, you must install and configure OpenShift Container Platform or OKD, Red Hat AMQ Streams for Kafka, Red Hat OpenShift Serverless for Knative, and finally the Knative instances for Serving, Eventing, and Kafka event sourcing. In this environment, I have all of these tools pre-configured. Let's check Knative Serving, Knative Eventing, and Knative Kafka. As you can see, all three instances are configured.

Let us now check what a Knative service looks like. A Knative service includes a specification in which you provide the container image of the application that you would like to have invoked as soon as a new event arrives. Next, we deploy the service on our cluster and verify that the deployment is correct. The greeter service is now available on this endpoint.
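As a rough sketch of what such a Knative service specification can look like, here is a minimal manifest in the shape described above. The service name matches the greeter service from the demo, but the container image reference is an illustrative assumption, not the exact manifest from the repository:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter            # the serverless function invoked on each event
spec:
  template:
    spec:
      containers:
        # hypothetical image reference; substitute the image from the
        # jumpstart library repository
        - image: quay.io/example/greeter:latest
```

Applying a manifest like this (for example with `oc apply -f`) creates the Knative service, and Knative Serving scales its pods up from zero when requests arrive and back down when traffic stops.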
Next, we will create a Knative Kafka event source. In its YAML definition, we provide the Kafka bootstrap server endpoints and the topic name, and we also define a sink, which is our Knative Serving serverless application that needs to be triggered as soon as a new event is generated. Let us now apply the YAML file. At this point, our event-driven setup is complete: as soon as a new message arrives on the lpr Kafka topic, the greeter Knative service will get executed.

Let's test it by generating a few messages on the lpr topic. I will use kafkacat as my Kafka producer. I will rsh into the kafkacat container, and in a different shell window I will tail the logs of the greeter Knative service so that we can see how new events trigger the serverless function. Next, we generate a few messages on the Kafka lpr topic in the form of CloudEvents messages. In the kafkacat shell, we run a command that generates 50 Kafka messages that print hello red hat. And here you go: in the other terminal window you can see the logs of the greeter service, which is getting executed as a serverless function each time a new message is generated and stored in Kafka.

We can also see this in action in the OpenShift console. Head over to the OpenShift console and check out Serverless. Here you will see the Serving and Eventing resources that we have created. We can also go to the Developer view to watch the pods come up. You can see the greeter service with a Kafka event source attached to it. Once we select the greeter service, it shows that it is currently running a single pod, which is the serverless function from Knative. Thanks for watching.
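For reference, a KafkaSource definition of the shape described above might look like the following sketch. The bootstrap server address and consumer group name are illustrative assumptions; the topic and sink names follow the demo:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: knative-group          # assumed consumer group name
  bootstrapServers:
    # assumed AMQ Streams bootstrap address; use your cluster's endpoint
    - my-cluster-kafka-bootstrap.kafka.svc:9092
  topics:
    - lpr                               # topic the demo produces to
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: greeter                     # the Knative service to trigger
```

Once applied, the KafkaSource consumes messages from the topic and delivers each one to the greeter Knative service as a CloudEvent over HTTP, which is what wakes the serverless function.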