Hi, I'm Karan Singh, Senior Solution Architect at Red Hat. In this video, I will walk you through a modular, reusable pattern that we have developed as part of the Data Engineering Jumpstart Library. This pattern is about producing Kafka event messages from machine learning inference output.

To understand this better, let us assume a scenario. We have a Kafka producer app called the Generator. It randomly picks car images from our data set hosted in an OpenShift Data Foundation S3 bucket. The Generator makes a POST API call to the ML model and submits the car image as the payload for inferencing. The ML model reads the image, detects the car's license plate number, and returns it to the Generator application. The Generator then enriches this output by adding custom metadata such as timestamp, geolocation, station ID, et cetera. Finally, the Kafka producer implemented inside the Generator application writes the JSON message to the Kafka topic and triggers an event. This message can later be consumed by a Kafka consumer for further processing.

Let's see how it's implemented. This is our Generator application. It begins by initializing some variables: the endpoint of the S3 object store, the access key and secret key, the endpoint of the Kafka service and the Kafka topic name, as well as the machine learning model's inferencing endpoint. The Generator randomly chooses an image from the S3 bucket and submits that image to the ML model service. The ML model service responds with the number plate of the car. The Generator then enriches the ML model's output with additional metadata fields like these. Finally, it opens a connection to Kafka and sends this message to the Kafka topic, thereby generating an event on Kafka.

Let us now see what this event looks like in real time. This is an interface on which we can inspect the messages arriving on the Kafka topic.
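For reference, the generator flow described above could be sketched roughly like this in Python. All endpoints, credentials, bucket names, and field names below are placeholders invented for illustration, not the actual demo's values; the sketch assumes the boto3, requests, and kafka-python packages.

```python
import json
import random
import time

# Placeholder configuration -- substitute your own values.
S3_ENDPOINT = "http://s3.example.com"             # OpenShift Data Foundation S3
S3_ACCESS_KEY = "changeme"
S3_SECRET_KEY = "changeme"
KAFKA_BOOTSTRAP = "kafka:9092"
KAFKA_TOPIC = "license-plates"
MODEL_ENDPOINT = "http://lpr-model:8080/predict"  # ML inferencing service


def random_image(bucket: str = "car-images") -> bytes:
    """Pick a random car image from the S3 bucket (sketch; needs boto3)."""
    import boto3
    s3 = boto3.client(
        "s3",
        endpoint_url=S3_ENDPOINT,
        aws_access_key_id=S3_ACCESS_KEY,
        aws_secret_access_key=S3_SECRET_KEY,
    )
    keys = [obj["Key"] for obj in s3.list_objects_v2(Bucket=bucket)["Contents"]]
    return s3.get_object(Bucket=bucket, Key=random.choice(keys))["Body"].read()


def detect_plate(image_bytes: bytes) -> dict:
    """POST the image to the ML model; assumes the model replies with JSON."""
    import requests
    resp = requests.post(MODEL_ENDPOINT, data=image_bytes)
    resp.raise_for_status()
    return resp.json()  # e.g. a dict with the detected license plate


def enrich(detection: dict) -> dict:
    """Add custom metadata (timestamp, geolocation, station ID) to the output."""
    detection.update({
        "timestamp": int(time.time()),
        "geolocation": "12.97,77.59",  # placeholder coordinates
        "station_id": f"station-{random.randint(1, 10):02d}",
    })
    return detection


def produce(message: dict) -> None:
    """Serialize the enriched record and send it to the Kafka topic."""
    from kafka import KafkaProducer  # pip install kafka-python
    producer = KafkaProducer(
        bootstrap_servers=KAFKA_BOOTSTRAP,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(KAFKA_TOPIC, message)
    producer.flush()


if __name__ == "__main__":
    # One end-to-end iteration: fetch image, infer, enrich, publish.
    produce(enrich(detect_plate(random_image())))
```

The network-facing pieces (S3, the model endpoint, Kafka) are isolated in their own functions, so the enrichment step stays pure and easy to test on its own.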
This is an example message. It mentions that the detection was successful, the car's license plate number is this, and the generator function has added a few more data points to the original output from the ML service. This is how you can consume this pattern in your own use case: your application can contact another application or an ML inferencing service, enrich the output as needed, and publish it as a Kafka event message. This makes your application event-driven and loosely coupled, and hence scalable.
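To make the shape of such an event concrete, here is a hypothetical enriched message; every field name and value below is invented for illustration and will differ from the actual payload shown in the demo.

```python
import json

# Hypothetical enriched event: raw ML output plus generator metadata.
example_event = {
    "detection": "successful",      # from the ML model
    "license_plate": "KA01AB1234",  # placeholder plate number
    "timestamp": 1700000000,        # added by the generator
    "geolocation": "12.97,77.59",   # added by the generator
    "station_id": "station-07",     # added by the generator
}

# The bytes a Kafka producer would actually write to the topic.
payload = json.dumps(example_event).encode("utf-8")
```

A downstream Kafka consumer only needs to `json.loads` the value to recover the full record, which is what keeps the producer and consumer loosely coupled.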