Hi, today I want to show you how you can use Red Hat OpenShift Streams for Apache Kafka together with Kafka Connect and Debezium to implement the Outbox pattern. Red Hat OpenShift Streams for Apache Kafka is a fully hosted and managed Kafka service for stream-based applications. Debezium is part of the Red Hat Integration portfolio and is a platform for change data capture. In a nutshell, with Debezium you can monitor the transaction log of a database and react to inserts, updates, and deletes.

Debezium itself is deployed as a connector on top of Kafka Connect, and Kafka Connect is a tool from the Kafka ecosystem which allows you to hook up your Kafka broker directly to external systems. Those systems can be data sources or data sinks. So what you see here is that events can be captured by Kafka Connect from a third-party system and transformed into Kafka messages, and on the other side Kafka Connect can consume those messages and push them to a sink system. Debezium is used as a source connector: it monitors the transaction log of a database and, upon insert, update, and delete events, captures the change and transforms it into a Kafka message.

The use case we're going to see with Debezium today is a bit particular. It's called the Outbox pattern, and it provides a solution to the problem of dual writes to Kafka and, for instance, a database. To illustrate this: you have here a service, an application, called the order service. As part of its functionality, when a new order comes in, the order needs to be persisted in a database and a Kafka message has to be sent out for consumption by other services, like the customer service and the shipment service. The issue is that you cannot combine the database transaction and the writing of the Kafka message into one atomic operation, which means that when things go wrong you can end up with an inconsistent system. Let's say the order service sends out the Kafka message, but then the persistence operation fails and rolls back: your system is now inconsistent. And the same holds the other way around.

The Outbox pattern solves that problem by having the order service not send the Kafka message itself, but rather persist the payload of the message, together with some metadata, in a separate table in the database, called the outbox table, in the same transaction as the main persistence operation, for instance persisting the order. Debezium monitors the outbox table, detects that new rows are being created, transforms the change event, through a single message transform, into a proper Kafka message with the intended payload, and sends it to the Kafka topic of interest, so that it can be consumed by the customer or the shipment service. For the order service, persisting the order and sending the Kafka message is now effectively atomic, because both writes happen in one database transaction, and Debezium takes over from there.
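To make that write path concrete, here is a minimal sketch of what the application side could look like with JPA in a Quarkus-style service. This is my own illustration, not the demo's actual code: the Order and OutboxEvent classes and the createOrder method are hypothetical, but the column layout follows the defaults that Debezium's outbox event router expects (id, aggregatetype, aggregateid, type, payload).

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.transaction.Transactional;

// Hypothetical domain entity.
@Entity
@Table(name = "orders")
class Order {
    @Id @GeneratedValue
    Long id;
    String customer;
}

// Hypothetical outbox entity; the column names match the defaults
// that Debezium's outbox event router looks for.
@Entity
@Table(name = "order_outbox")
class OutboxEvent {
    @Id @GeneratedValue
    Long id;
    String aggregatetype; // used to route the event to a topic
    String aggregateid;   // becomes the key of the Kafka message
    String type;          // the event type, e.g. "OrderCreated"
    String payload;       // the serialized message payload, e.g. JSON
}

@ApplicationScoped
class OrderService {

    @Inject
    EntityManager em;

    // A single database transaction covers both writes: either the order
    // and the outbox row are both committed, or both are rolled back.
    @Transactional
    public void createOrder(Order order, String payloadJson) {
        em.persist(order);                      // the main persistence operation

        OutboxEvent event = new OutboxEvent();  // the "Kafka message" as a row
        event.aggregatetype = "order";
        event.aggregateid = String.valueOf(order.id);
        event.type = "OrderCreated";
        event.payload = payloadJson;
        em.persist(event);                      // same transaction, same database
    }
}
```

As a side note, many outbox implementations even delete the outbox row right after inserting it, in the same transaction: Debezium reads the change from the transaction log anyway, so the table itself never needs to grow.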
So to illustrate all this, I made a small setup. I have my Kafka broker, my OpenShift Streams for Apache Kafka instance, already provisioned; it's happily running in the cloud. You can see here the connection information: my bootstrap server, and a service account that I'm going to use to actually connect to that Kafka instance. If you look at the topics, you see here at the bottom topic-incident-command and topic-incident-event. Those are two topics that belong to my little demo application, and then you see three topics that were automatically created by Kafka Connect. So that's my hosted Kafka instance.

On my local OpenShift cluster, I have installed a number of things. I have my demo application, which is called the incident service. It's a Quarkus application, and it's actually part of a larger application, the Emergency Response Demo; the incident service is just one part of that. Basically, what it does is this: when a new incident comes in through a REST call, it needs to persist the state of that incident and send out a Kafka message for consumption by other services in the Emergency Response Demo. So it's an ideal candidate for the Outbox pattern. Here you see my hosted Kafka cluster, represented through a KafkaConnection custom resource, and on the right you see another Quarkus application, the Kafka consumer, which is a very simple application that consumes messages from a particular topic and logs the contents of each message. Both the incident service and the Kafka consumer are hooked up to my hosted Kafka broker through service binding. And then here we see our Kafka Connect installation with Debezium on top of it, where Debezium is set up to monitor the outbox table of the PostgreSQL database. So when a new incident is created, the incident service will persist a Kafka message in the outbox table, this will be picked up by Debezium on Kafka Connect and sent to a topic on my hosted Kafka broker, and from there the Kafka consumer application will consume it.

Now, before I actually demonstrate this, let me quickly dive into some details. On OpenShift, both Kafka Connect and the Debezium connector can be deployed as custom resources, which are managed by the Strimzi operator. If we look here, we see a piece of the custom resource for Kafka Connect, where the bootstrap server that this Kafka Connect instance is configured with corresponds to the bootstrap server of my hosted Kafka instance. You also see that we use SASL/PLAIN authentication with the username and password of my service account, where the password itself is kept in a secret that is referenced here. So this Kafka Connect instance is hooked up to my hosted Kafka broker. Debezium itself is deployed on top of that Kafka Connect instance as a KafkaConnector custom resource. You see here that it uses the PostgreSQL connector from Debezium, followed by a lot of metadata about how to connect to the database and which tables to monitor. Here there is only one, the incident outbox table, in which the incident service persists its Kafka messages. And then there is some further metadata about how to transform what is found in the incident outbox table into a proper Kafka message. That's the Debezium part.
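To give an idea of how these two custom resources fit together, here is a rough sketch, assuming Strimzi's v1beta2 API. The resource names, the secret layout, and the concrete connector options are placeholders, not the demo's actual values.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: incident-connect
  annotations:
    # let the operator manage connectors as KafkaConnector resources
    strimzi.io/use-connector-resources: "true"
spec:
  replicas: 1
  bootstrapServers: <bootstrap-server-of-the-hosted-instance>:443
  tls:
    trustedCertificates: []        # trust the public CA of the hosted broker
  authentication:
    type: plain                    # SASL/PLAIN with the service account
    username: <service-account-client-id>
    passwordSecret:
      secretName: service-account-credentials  # placeholder secret name
      password: client-secret                  # key within that secret
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: incident-outbox-connector
  labels:
    strimzi.io/cluster: incident-connect
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    database.hostname: postgresql
    database.port: 5432
    database.user: <database-user>
    database.password: <database-password>
    database.dbname: incident-db
    database.server.name: incident           # logical name, prefixes topics
    table.include.list: public.incident_outbox
    # the outbox event router turns each outbox row into a clean Kafka message
    transforms: outbox
    transforms.outbox.type: io.debezium.transforms.outbox.EventRouter
    transforms.outbox.route.topic.replacement: topic-incident-event  # illustrative
```

Note that the password itself never appears in the custom resource; only the secret and the key within that secret are referenced.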
Now, to illustrate how this all works, I have here a simple UI that is part of the Emergency Response Demo, which allows me to create incidents. If I create an incident and hit the submit button, there is a REST call to the incident service, and the data of the incident is persisted, as well as the Kafka message, which is then picked up by Debezium, sent to my hosted Kafka instance, and consumed by the Kafka consumer application. To show this, I have here a terminal that is open to the console of my Kafka consumer pod. It's empty right now, but if I create an incident, we expect a Kafka message to show up here.

So let's see if that actually works. I will create an incident, just one to start, and click submit. If I now go to my Kafka consumer terminal and look at the log, you will see that I indeed consumed a message from the topic topic-incident-event: you see the name of the topic, the partition (the third), and the offset (the fifth). And then you see the headers of that message, the key of that message, and the actual value, which corresponds to the state of an incident. If you look carefully, you will see that this is not just any Kafka message; it's a bit special, in the sense that it adheres to the CloudEvents specification. In the headers, you see a number of CloudEvents-specific headers. This is because the whole Emergency Response Demo standardized on CloudEvents as its message exchange format.

So it works. And to illustrate that it really works, let's create ten incidents, with an interval of 100 milliseconds, so a new incident every 100 milliseconds. If I hit submit, you will indeed see that ten messages have been consumed by the Kafka consumer application.

So this was a short demo to illustrate how you can combine OpenShift Streams for Apache Kafka with Kafka Connect and Debezium to implement the Outbox pattern. Thanks for watching.