it is to have you here today for today's webinar. It's all about Red Hat OpenShift Connectors. So let's get ready to be hands-on in this exercise. You will be able to try our new services, not only Connectors but also Red Hat OpenShift Streams for Apache Kafka, which is our managed Kafka service. I'm going to share with you two links that you're gonna need as we move through the presentation. Here they are. And I'm just gonna get started with the agenda. So welcome everyone, let's jump in. Today we're gonna run through a quick product introduction. As soon as we finish that, I'll explain the details of the workshop, and then you're gonna have your guided lab time with my friend and colleague, Bernard Tison. And then we are ready for Q&A. You can always post your questions in the chat. So let's get started with Red Hat OpenShift Connectors. For those of you who are maybe new to our presentations, one of the things that Red Hat has been doing lately is expanding its open hybrid cloud technology portfolio with a new set of managed cloud services. These include platform services, application services, and data services. And for today, what we're gonna look into is the layer of application and data services, specifically Red Hat OpenShift Connectors. All these services provide full-stack management, a unified experience, and support across hybrid clouds. Everything is natively integrated and running on top of OpenShift, with the goal, which is the most important part, of helping you build applications for your organization. The ecosystem that Red Hat has been building in the last few years, and mostly in the last year, reflects our goal of supporting the creation and delivery of stream-based applications, which requires a robust streaming platform that can support a variety of use cases. And what we have chosen is to put Red Hat OpenShift Streams for Apache Kafka at the center of our product ecosystem. And even though Apache Kafka gives us a lot of the important features for streaming data, it doesn't necessarily serve all the functions and it cannot do it all by itself. That's when we need to tap into the product ecosystem that exists around Apache Kafka, which provides additional solutions and tools that help us complete a streaming analytics solution or complement a real-time application. So among those services, we have Service Registry for schema discovery. We have Connectors for connecting sources and sinks, which is what we're gonna talk about today. We have API Designer to support the creation of APIs and schemas. And of course, all of this can be done in combination with OpenShift to provide an environment for development and deployment of cloud-native applications, okay? A couple of things to keep in mind about this ecosystem: we strive to offer a streamlined developer experience, we integrate the platform with the services, we provide shared identity management and access controls, and we provide 24x7 premium support with a 99.95% SLA. So what you're here for today is Connectors. So what is Red Hat OpenShift Connectors? These are fully hosted and managed preview connectors for Red Hat OpenShift Streams for Apache Kafka that improve time to market and reduce the complexity of streaming data between systems across hybrid cloud environments. Right now we offer over 60 preview source and sink connectors that support multiple standards. 
We deliver the solution as a fully managed offering, expanding the Red Hat cloud services portfolio, and it's tightly integrated with OpenShift Streams for Apache Kafka. Here are some of the benefits and features that we have for Connectors. I don't wanna deep dive into all of them because we wanna spend our time using the product rather than reading a lot of text. But a couple of things to keep in mind: we offer over 60 preview connectors for source and sink systems. They are based on the Debezium project and the Camel K project. We also offer error handling, making sure that we can handle errors by stopping on error, logging the error, or sending it to a dead letter queue. And finally, all of these connectors are already pre-built. We give you a very user-friendly, code-free UI that allows you to create your connector and deploy it, as well as update the configuration of each one. So you can use it without writing any code, okay? So what happens next? Our Red Hat OpenShift Connectors product is available today on console.redhat.com, and what we are offering you is the ability to try the service. You can self-serve: sign in to the console, click on Connectors, and you are gonna be able to create a connector. You can actually stream data to and from Kafka, okay? This trial is available for 48 hours, so you get access to this environment for 48 hours. The only limitations you have in the trial are that you can only deploy four connectors at a time, and you need to provision a Kafka instance to be able to make the connections and stream the data. And I'm sure you'll learn more about this as Bernard walks you through the process. It's very simple and easy. So what are the details of our workshop for today? The first thing you're gonna do is provision a managed Kafka instance. The second thing is that you're gonna build and provision the connector of your choice. And finally, you will connect your data source to your Kafka topic. Those are the goals that we are gonna achieve in this session. What do you need to make this happen? First, you need to have a Red Hat account, so have your credentials handy to make sure that you can deploy all these environments. You can go to cloud.redhat.com or console.redhat.com and sign up for an account there; that will give you access to all our trials and services. You need a managed Kafka instance, as I just mentioned; you need to provision one dedicated to you, and then you need to follow the step-by-step instructions which were shared in the documents with the links I shared at the beginning of the call, which I'm gonna go ahead and copy again so you can use them. That's the first one, and this is the second one. The link is this one here if you actually prefer to type it instead of copying it from the comments, okay? This will give you access to a guide, which is the same one we sent you through email. And if you go to section four, you will find a link to the workshop's step-by-step instructions that Bernard is gonna walk you through. So without further ado, let's get started with Bernard and our presentation for today so you can get your hands on the product. Hey Bernard, welcome. Hey, thank you, Jennifer. So let me share my screen. Share screen, oh, yeah, share screen. Yes, you should see a kind of empty browser window, right? Okay. 
That's correct, yes, I can see it. Yeah, okay. So what are we gonna do? Jennifer already explained it. All the details are in this workshop guide, which is publicly available through GitHub. Generally we encourage people to try to follow along as I go through the instructions. We also realize that sometimes this is difficult if you only have one screen, for instance, and things like that. So don't worry about this, you can redo the workshop whenever you want, okay? It's not a one-time thing, so you can do it whenever you want. So what I want to quickly reiterate is the structure, the general architecture of Connectors: you have a data source which will generate events. Those will be picked up by a connector, which will stream those events to a topic on OpenShift Streams for Apache Kafka, and then you can have a sink connector that will actually listen to that topic, get the events, and potentially send them to a data sink. Now, we need to do a couple of things. We need to provision the Kafka instance, we need to do some configuration on it, and then we can start creating our connector. So let's get that out of the way. We start by going to console.redhat.com. I'm already logged in, but generally you have to log in with your Red Hat account ID. You go to Application and Data Services, Streams for Apache Kafka, Kafka Instances. I don't have any Kafka instances at the moment. So the first thing I need to do is create a Kafka instance. So I'm gonna do that. This brings you to this wizard; well, it's only one page. And you see here the terms: you get a Kafka instance for 48 hours. After 48 hours, the instance disappears, but then you can create another one if you want. It needs a name. We're talking about connectors today, so I'm gonna call it connectors. There is no region I can choose; it's US East on AWS, single availability zone for the trials. Streaming units are for when you have a real Kafka instance, not the trial version. So I can immediately do Create Instance, and this will start provisioning my instance. That should take a couple of minutes, hopefully not too long. But in the meanwhile, we can do something else that we need in order to access our Kafka instance: a service account. So in the same menu here on the left, you see Service Accounts. We need to create a service account. I don't have any yet, so I can do Create. I'm gonna call it connectors as well, create. And this gives me a client ID and a secret. It's important that you copy these to a secure location, because the secret especially you won't be able to recover. So I'm gonna copy the client ID and paste that somewhere, and I'm gonna do the same with my client secret. So now I have copied those and I can close that window. And now I have a service account that I will be able to use with my connectors and with my Kafka instance. Let's go back to my Kafka instance. Let's see how we are doing here. We're still in progress; I said that might take a couple of minutes. So in the meantime, we can quickly go over what we're actually gonna do today. We're gonna do two things. So we're doing this now at the moment, provisioning the Kafka instance. We already have our service account. The next step will be to create the access control list for the service account. So that's the configure part here. 
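For readers who want to sanity-check the service account from code rather than the UI, here is a minimal sketch using kafka-python, assuming the instance's bootstrap server (shown on its connection details) accepts SASL/PLAIN over TLS with the client ID and secret as credentials. The hostname and topic name are placeholders.

```python
# Minimal connectivity check with the service account credentials (kafka-python).
# Assumes SASL/PLAIN over TLS; bootstrap host and credentials are placeholders.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="<your-instance>.kafka.example.cloud:443",  # from the instance's connection info
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="<service-account-client-id>",
    sasl_plain_password="<service-account-client-secret>",
)

# Send a test record once a topic exists and the ACLs described next are in place.
producer.send("test-connectors", b"connectivity check")
producer.flush()
```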
And then we're gonna do two things. We're gonna do a getting-started, if you want: create a very simple source connector that will just generate messages by itself, and then we're gonna have also a sink connector that is gonna send those to an HTTP endpoint. So we're gonna have a very simple data pipeline with a connector that generates messages, which then end up in an HTTP sink. This is just for the getting-started experience. And then we're gonna do something more sophisticated, if you want. One of the kinds of connectors that we have allows for change data capture using the very popular Debezium project. So we can have a change data capture connector that will actually capture data change events from a MongoDB Atlas instance and stream them into our Kafka instance. That will be the second part of what we're gonna do this afternoon. Okay. So let's see where we are. We should almost be there. Let me quickly refresh my screen. Sometimes it doesn't, no, it's still creating. So we will have to wait until it comes into the ready state before we can continue and define our access control list for the service account on this Kafka instance. In four minutes or so we should nearly be there, so bear with me. Okay. We're ready. So that means that we can start using our connectors. It took a little bit more than four minutes, I think, which is still pretty reasonable considering that it really deploys a full-fledged Kafka broker on the cloud. So what I can do now is click on my Kafka instance. We have a number of tabs here. The one that I'm interested in for the moment is the Access tab, the fourth tab, where I can define access control rules for my Kafka instance. Normally the default rules should appear now. They are pretty restricted at the moment: all accounts can describe my Kafka instance, they can describe consumer groups, describe topics, and that's about it. We need more, because obviously our connectors need to be able to produce to topics, consume from topics, and the Debezium connectors even need the ability to create new topics. So we need to add those ACLs, those privileges. If I click on Manage Access, I can select my, well, I could do that for all accounts, but let's do it for the service account that I just created, the one we called connectors. And then I can assign permissions by clicking on the Add Permission dropdown. I can do that individually, but we have some task-based permissions, as we call them, which will do several things at once. So let's start with consume from a topic. I want to consume from all topics, so I can set the condition to "is" and for the name I can use a star, which is a wildcard. That means that my service account will be able to read from all topics. And the same with consumer groups: it will be able to use all consumer groups, read from all consumer groups. And then I can add another one that I will need, which is produce to a topic. Okay, so again, we will do that for all topics. And you see here that by setting this ACL my service account can write to and create topics. That's exactly what I need. So I can do save now, and then my access control list is updated with my privileges, which you can see here, okay? So that's what we needed to do on the Kafka side. 
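Purely for reference, here is a rough sketch of what those wildcard read/write permissions would look like as Kafka ACLs created through the Admin API, assuming kafka-python's ACL admin support and assuming the managed instance accepts ACL changes over the Admin API at all; the console UI above is the path the workshop actually uses, and every hostname and credential below is a placeholder.

```python
# Hypothetical sketch: the same "read all topics / write all topics" permissions expressed
# as Kafka ACLs via kafka-python's admin client. Whether the managed service allows this
# outside the UI is an assumption; placeholders throughout.
from kafka.admin import (KafkaAdminClient, ACL, ACLOperation, ACLPermissionType,
                         ResourcePattern, ResourceType)

admin = KafkaAdminClient(
    bootstrap_servers="<your-instance>.kafka.example.cloud:443",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="<owner-client-id>",
    sasl_plain_password="<owner-client-secret>",
)

principal = "User:<service-account-client-id>"
acls = [
    ACL(principal=principal, host="*", operation=ACLOperation.READ,
        permission_type=ACLPermissionType.ALLOW,
        resource_pattern=ResourcePattern(ResourceType.TOPIC, "*")),
    ACL(principal=principal, host="*", operation=ACLOperation.WRITE,
        permission_type=ACLPermissionType.ALLOW,
        resource_pattern=ResourcePattern(ResourceType.TOPIC, "*")),
    ACL(principal=principal, host="*", operation=ACLOperation.READ,
        permission_type=ACLPermissionType.ALLOW,
        resource_pattern=ResourcePattern(ResourceType.GROUP, "*")),
]
admin.create_acls(acls)
```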
If I bring my guide back: what we did, well, I didn't create a Red Hat account, but I did provision the Kafka instance and I did configure it so that it can be used with my OpenShift connector, which consists of what you see here, setting up my, oh, I still need to create the topic. I shouldn't forget that. So let's do that as well. I will need a topic for my first connector. I'm still here on my Kafka instance; if I click on the Topics tab, you will see that I do not have a topic. So I can create a topic, and let's call that test-connectors. That's the name of my topic. The rest I can leave as is. One partition is definitely fine for getting started, and the retention time is also fine: a week, which is longer than your cluster will live anyway, and unlimited retention size. So we can keep all this. The replicas are not something I can change at the moment. So if I do Finish, I will have my topic ready to be used when I create my first connector, okay? So that's what I need on the Streams for Apache Kafka side. Now we can turn to the connectors themselves. You have here this Connectors menu and then Connectors Instances. If I click that, first of all, it warns me that I'm going to a beta environment. The whole Connectors thing is still in service preview, so it's considered beta. So yes, I wanna use the feature in beta. That will change the URL here, but for the rest everything will work as expected. And now I can start creating instances. So let's start with a source connector. We have a lot of connectors; Jennifer already told you we have over 60. A lot of them have to do with cloud services, things that are provided by Amazon, things that are provided by Azure. If we look at Azure here, you will see some Azure services that you can use as sinks or as sources. So AWS, there are some Google ones as well. Right, and then there are also the Debezium connectors, and one of those we're gonna use in a second. So you see all those as well. The one that we are gonna use is actually a very simple one, and that's the data generator, which does not actually connect to a hosted service like AWS Kinesis or SQS or whatever. But it's a simple way to get started with connectors without having to set up a cloud service first, which would definitely take way too much time in the time that we have here. The data generator connector just generates, at a fixed interval, very simple messages. So we can use that one. If you search for data, you will find the data generator source. Click on that one. That will select the connector type. Then you need to select the Kafka instance that you wanna use. The one that you created before should already be here; it was called connectors, so we're gonna use this one. And then we need a namespace to actually deploy our connector. As part of the service preview, you can create what we call a preview namespace, which is actually a namespace on an OpenShift cluster that is completely managed by Red Hat and which will stay available for 48 hours as well. And you can create up to four connectors in that preview namespace. I don't have any at the moment, so I need to create a preview namespace. You see here, they expire after 48 hours. They get a generated name, which is not so important. So you click create, and that namespace will be provisioned. It's actually really creating a namespace on an OpenShift cluster, so that will take a couple of seconds. 
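If you prefer, the topic can also be created programmatically; here is a small sketch with kafka-python's admin client, assuming the same placeholder bootstrap host and credentials as before, and assuming a replication factor of 3 to match what the trial instance enforces.

```python
# Create the workshop topic programmatically; the console UI does the same thing.
# Bootstrap host and credentials are placeholders; replication_factor=3 is an assumption.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(
    bootstrap_servers="<your-instance>.kafka.example.cloud:443",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="<service-account-client-id>",
    sasl_plain_password="<service-account-client-secret>",
)

admin.create_topics([
    NewTopic(name="test-connectors", num_partitions=1, replication_factor=3)
])
```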
And then it should go to the connected state, and then we can start using it. That's normally fairly quick. Yes, there you go. So I can select it now. Again, it warns you that it will disappear in 47 hours, 59 minutes. If I do next, I can now start configuring my connector. To configure a connector, there are a number of things that you will always have to do. Your connector needs a name, so let's call this one data generator. Okay, it needs a service account to actually connect to your Kafka instance. I created the service account before, as you have seen, so I can copy in the client ID of my service account and the secret of my service account that I copied before. There you go. And I can continue now. And then we come to the connector-specific configuration, which for the data generator is fairly simple. The data shape is not something that we can change at the moment, so we leave that at the default. You need to tell the connector to which topic it should produce; the name of my topic was test-connectors. The content type I'm gonna leave at text/plain. And for the content of my message, let's take Hello World, let's keep it simple here. So I'm gonna send a Hello World message in plain text format, and I'm gonna do that every 10 seconds. So every 10 seconds, this connector is gonna send a Hello World message to my test-connectors topic. The reason why I changed that to 10 seconds is because the HTTP sink that we're gonna use is kind of throttled. It's a public HTTP endpoint which is throttled, so if I set the frequency too high, you will very quickly end up being throttled and your sink connector will go into a failed state, which I want to avoid for this example. So 10 seconds is fine. Okay, so I do next here. And then I can configure the error handling procedure that I want for my connector. We can actually choose between three options. You can log if there are errors, but when you use a preview namespace, at the moment you don't have access to the logs, so it's not very useful in the service preview state because, as I said, you can't see the logs. You can stop the connector in case you encounter errors, or you can set up a dead letter queue, so that messages that generate errors end up in the dead letter queue, but that would need me to create another topic on my Streams for Apache Kafka instance, which I'm not gonna do at the moment. So I'm just gonna choose the stop option, which is also the default. So if I have misconfigured my connector for one reason or another, it will just stop, okay? And then I can do next. And then you see here an overview of what I configured, and I can now do Create Connector. There we go. And now my connector is being deployed in my preview namespace. This will also take a couple of seconds; generally it's pretty fast. Once it's in the ready state, it will start, in this case, oh, there we are, ready. So now I expect my connector to start generating messages at the interval that I specified, and that is every 10 seconds. Now it would be nice if we could easily verify that, and we can. If we go back here to my Kafka instance, okay? I go to connectors, I go to Topics, I have my test-connectors topic. If I click on that one, I can see here the Messages tab, and this will show me the messages that are being produced, or that actually were produced, to that topic. So if I click that, I expect to see, oh, yes, well, whoa. My screen is not very stable here. Let's see if refreshing helps. 
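Purely as an illustration of what the data generator connector is doing on your behalf (not how the managed connector is actually implemented), a rough stand-in with kafka-python would look like this, with the same placeholder host and credentials as before.

```python
# Illustrative stand-in for the data generator source connector: send a plain-text
# "Hello World" record to the topic every 10 seconds. Placeholders as before.
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="<your-instance>.kafka.example.cloud:443",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="<service-account-client-id>",
    sasl_plain_password="<service-account-client-secret>",
)

while True:
    producer.send("test-connectors", "Hello World".encode("utf-8"))
    producer.flush()
    time.sleep(10)  # same interval as configured in the connector
```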
Just, yeah, whoa, whoa, whoa, whoa. Okay, I need to go back here. Yeah, okay. So you see here that we already have eight, maybe already more, messages. It only shows the last 10 messages, but I can configure that. But you see here a number of messages which have the value hello world, and roughly they are being produced every 10 seconds. So we see 36, 46, 56, 06 from the next minute. So every 10 seconds a message is being produced to my topic. So my source connector is working as expected, okay? So now we can move to the other part of this first getting-started and create a sink connector to actually consume those messages and do something with them. So let's go back to Connectors, Connectors Instances, and I can create a new connector. Remember I can go up to four in the service preview, so we're still within our quota. I'm gonna search for the HTTP sink connector, which is actually gonna consume from my topic and call an HTTP endpoint for every message it encounters in my topic. So I select this one. I obviously will still use my same Kafka instance. I will use my same namespace. Now I need to do my initial configuration. So I will give it another name. I will reuse the same service account, though you could potentially use another one. You can have, I believe, up to 50 service accounts per organization, so if you wanna be a little bit more sophisticated, you can have service accounts for very specific purposes. Here I'm gonna reuse the one that I configured. And then I obviously need to configure my connector, and the most important thing here, well, there are two important things. I'm gonna start with this: I'm gonna consume from the test-connectors topic, and now I need the URL for my HTTP endpoint. And for that, I'm gonna switch tabs here. I'm gonna use a free service that you can find if you go to the website called webhook.site. You will receive a unique URL that will actually act as an HTTP endpoint, so we can use that one to send our messages to. So I copy that unique URL, I go back to my connector and I paste it here. And I click next. Again, error handling, I will keep it at stop. Go next, I can review everything here, but I'm pretty sure everything looks okay. So I can create my connector. This one obviously needs to be deployed as well; it's gonna take a couple of seconds. Okay, okay. There we go. My HTTP sink is in the ready state. Ready also means that it's at least working as expected until now; otherwise it would go into an error state and I would probably have to fix my configuration, but ready means everything's okay. So what I actually do expect is that I receive messages on my HTTP endpoint, and indeed. You don't see all my previous messages, because the connector is set up so that it starts consuming from the last available message in the topic at the time it was created. But now that the connector is running, you will see every 10 seconds that I have a message with the hello world content. So my very simple data pipeline is now complete. I have a source connector that produces messages to my Kafka instance, and I have a sink connector that consumes those messages and sends them to an HTTP endpoint which just echoes the request. Not very useful in real life, I would say, but let's say good enough to demonstrate a very simple end-to-end pipeline using OpenShift Connectors and Streams for Apache Kafka. So this is just gonna continue until I stop those connectors. So that was the first thing I wanted to show. 
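To make the sink side of the pipeline concrete, here is a sketch of the same data flow in plain Python, assuming kafka-python and the requests library; this is only an illustration of what the HTTP sink connector does, not its implementation, and the webhook URL, host, and credentials are placeholders.

```python
# Illustrative stand-in for the HTTP sink connector: consume from the topic and POST each
# record's value to the webhook.site URL. Placeholders throughout.
import requests
from kafka import KafkaConsumer

WEBHOOK_URL = "https://webhook.site/<your-unique-id>"  # placeholder

consumer = KafkaConsumer(
    "test-connectors",
    bootstrap_servers="<your-instance>.kafka.example.cloud:443",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="<service-account-client-id>",
    sasl_plain_password="<service-account-client-secret>",
    auto_offset_reset="latest",   # like the connector: start from the latest messages
    group_id="http-sink-demo",
)

for record in consumer:
    requests.post(WEBHOOK_URL, data=record.value,
                  headers={"Content-Type": "text/plain"})
```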
Okay, so we still have plenty of time. So now let's do something more interesting maybe, or at least something different. The next thing I wanna show you is how you can do change data capture with our connectors. Change data capture is a technology through which you can capture changes in a database, stream those as events, and have other systems consume those changes. Typical use cases for this are data replication between data sources, or, in a microservices architecture, building local views of a database that is owned by a particular microservice, et cetera, et cetera. So, let me go back to Connectors: some of our connectors are built upon Debezium and allow you to actually do this change data capture thing. What I'm gonna do here is set up a change data capture connector to a MongoDB Atlas database. Before I do that, and that's something I have done already, so if you haven't, it will be a bit hard to follow along with me and do all the steps, but you can do that afterwards: you can sign up with MongoDB. If you go to cloud.mongodb.com, you can log in if you already have an account, or you can sign in with, for instance, a Google account, or create an account on MongoDB Atlas, and then you can create a database. They have a free tier, so no strings attached; you don't even have to give a credit card number. You get a free shared database with limitations, obviously, but good enough for, let's say, a getting-started experience and for what we wanna show here. I have already done that, so I have my database deployed. It's running on the cloud, a cloud managed by Mongo. If you want to do this yourself, there are two important things. The first is network access: you have to make sure that your MongoDB instance is accessible from everywhere, because you actually don't know the IP address on which your connector is running, so we need to be able to connect from everywhere. That means setting an access policy for 0.0.0.0/0, which is everything. Once you have done this quick start, you might want to delete that rule if you don't want to expose your MongoDB to the whole world. And the second thing is that you need a database user. Those steps are explained in the detailed guide. So I have a user, let's say, with a password, because this is what my Debezium connector will need to actually connect to my MongoDB. So I have set that up so that I can use it. Let's go back to my connectors and let's get started here. Create Connectors Instance. Now I wanna do something with Mongo, so if I search for Mongo, we will see a number of connectors. We actually have two versions of the Debezium Mongo one; the older one should disappear fairly quickly at the next rollout, and we're obviously gonna use the latest version here. So that's a Debezium source connector with which we will be able to do change data capture on my Mongo database. I'm gonna select that one. Same thing here, select my Kafka cluster, select my preview namespace. Okay, let's call this one Mongo Atlas. Same client ID and secret that I used before. Good. And now some connector-specific information. There are a number of things that we need here: we need the hosts of my Mongo, we need the namespace. Now, I know that namespace is a word that you see everywhere in this context. 
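Before wiring up the connector, it can be handy to confirm that the Atlas network rule and database user actually work; a quick check with pymongo would look like this, where the connection string is a placeholder copied from the Atlas "Connect" dialog for your cluster.

```python
# Quick check that the Atlas network rule and database user work before configuring the
# connector. The connection string is a placeholder from the Atlas "Connect" dialog.
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://<db-user>:<db-password>@cluster0.<id>.mongodb.net/?retryWrites=true"
)
print(client.admin.command("ping"))  # {'ok': 1.0} means the user and network access are fine
```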
The namespace is actually an alias for my Mongo instance, and then we need the username and the password with which I can connect to my MongoDB. The hosts are something I can find in my Mongo: if I go back to my Mongo here and click on Cluster 0, you will see three links. That's because Mongo has a notion of shards, so I have three shards, and the host value is actually a concatenation of those three shards. If I click on the first one, you will see this is one host address, and my two other shards have the same one, well, not exactly the same one, they will have another shard number here. So I need the three of those concatenated, separated by commas. I've prepared that already, so let me paste it here. So my hosts value is a concatenation of my three shards, okay? Shard number two, shard number one, shard number zero. We need a namespace; let's call that mongo-atlas. And we're gonna use movies, because we're gonna create a movies collection on my MongoDB and do change data capture on that collection. And then I need the user, which I created previously on my Mongo, so I'll use this, and my password, which I'm gonna paste here, this one. And then, very important, don't forget it: this hosted Mongo uses SSL to connect to MongoDB, so you have to have the SSL connection enabled or things will not work as expected, okay? So: hosts, a namespace, username, password, and enable SSL. And then I can basically filter. Mongo does not have the notion of tables; it rather has a notion of databases and collections. So you can filter on collections, basically include or exclude collections that you want in your change data capture. So I can do that here. I'm gonna have a database called moviesDB, which contains data about movies, and in that database I will have a collection called movies, and that's actually the one I'm interested in. I think I could filter on just the collection and that would work as well, but let's say that I filter on the database moviesDB and I filter on the movies collection in that database. I have include already selected here, so I can now apply that filter. That means I'm only gonna watch that particular collection. Okay, I can do next now. Here are more advanced properties, which generally are just here for informational purposes; you rarely have to change those things, so you can just click next. And then you come to the overview screen where you see your connector instance, the Kafka information, and then the Mongo-specific information. So now I can do Create Connector, and this will create my Mongo connector. Now, this will take a little bit more time to get into the ready state, and the reason is fairly simple. Most of the connectors that you can use with OpenShift Connectors are based on Camel K, which is itself based on the Camel integration framework, but the Debezium connectors are actually connectors that run on Kafka Connect. So when we deploy a Debezium connector, under the hood there is first a Kafka Connect instance that needs to be deployed, and on top of that the proper connector. So that takes a little bit more time. It should still be fairly quick. In the meantime, one way to potentially see the progress we're making here is that Kafka Connect needs a number of topics to maintain its internal state. 
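For orientation, here is roughly what the same settings would look like as a Debezium MongoDB connector configuration on a self-managed Kafka Connect; the managed service hides all of this behind the wizard, so this is only a hedged sketch. Property names follow Debezium 1.x conventions, and the hosts, credentials, and Connect URL are placeholders.

```python
# Rough equivalent of what the wizard configures, expressed as a Debezium MongoDB connector
# config for a self-managed Kafka Connect. Placeholders throughout; not the managed service's API.
import json
import requests

connector = {
    "name": "mongo-atlas",
    "config": {
        "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
        "mongodb.hosts": "<shard-00>:27017,<shard-01>:27017,<shard-02>:27017",
        "mongodb.name": "mongo-atlas",          # the "namespace", used as the topic prefix
        "mongodb.user": "<db-user>",
        "mongodb.password": "<db-password>",
        "mongodb.ssl.enabled": "true",          # Atlas requires TLS
        "database.include.list": "moviesDB",
        "collection.include.list": "moviesDB.movies",
    },
}

# On a self-managed Kafka Connect you would register it through the Connect REST API:
requests.post("http://<kafka-connect-host>:8083/connectors",
              headers={"Content-Type": "application/json"},
              data=json.dumps(connector))
```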
So that's one of the reasons we needed create-topic privileges for my service account. So if I go to my Streams for Apache Kafka and my Kafka instance, and then I go to my topics, okay, they're already being created. You see here three additional topics. Those were created by the Kafka Connect on which my Debezium connector will be deployed. If everything goes okay, you should be able to see a couple of messages here. So yeah, in the status one, in case you're interested, if you see one that says running, that probably means that my connector is running as expected. So let's see if that's correct. Indeed, my Mongo Atlas connector is now running and is monitoring my moviesDB database, movies collection, on Mongo, which does not exist yet. So we need to create that one. Okay, so I'm going back to my Mongo database. If I go here, you will see something called Collections, and it says you don't have any collections. I could load a sample data set provided by Mongo itself, but that's a pretty extensive data set, so that would take too long. So we can create our own data. I'm gonna create a database called moviesDB, which matches the name that I configured on my connector, and a collection called movies, right? Which also matches the name I used for the filter on my connector. So if I create that, I will have an empty collection, and now we can insert documents into it. Mongo doesn't talk about rows and columns and stuff like that; it has a notion of documents, which are typically JSON structures. There are different ways to add documents to a collection; for the sake of simplicity, we can do it through the MongoDB UI, or at least the Atlas UI, which gives me a kind of wizard to do that. So if I click on this first icon here, the one with the curly brackets, I can just paste a JSON structure, and I've prepared some structures here which describe a movie. So if I paste that one, you will see that I have a document with a title — this is about the Mary Poppins Returns movie — so we have a title, a cast, some genres, and the year it was released. So I can insert that one. There you go. So now I have one document in my Mongo collection, and now I expect that my Debezium connector picked that one up, created a data change event, and streamed it to Streams for Apache Kafka. So let's see if this is actually what happened. Let's go back to my Streams for Apache Kafka, Kafka instances, my connectors instance. I have a new topic here that was created by my connector, and its name is composed of the namespace of my connector, mongo-atlas, the name of my database, and the name of my collection. So that's the name of the topic that my connector generated, and I expect one message in there. And hooray, I've got a message. And that message, if I look here at the contents, has a number of things. Basically this is what we call a change event. Well, maybe I can copy that; I will put it into a text document so that it becomes a little bit more visible. So if you see my screen here, you have a number of things. You have an after state — that's actually the state of my document. You will see here the title, Mary Poppins, the year, the cast, et cetera, et cetera. There's a number of other pieces of metadata around my source, and then, also very important here, the op, which stands for operation, and the op is a C. So that means create. 
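The same insert can of course be done from code instead of the Atlas UI; a small pymongo sketch is below, where the connection string is a placeholder and the field values are illustrative, matching the shape described in the demo (title, cast, genres, year).

```python
# Insert the first movie document with pymongo instead of the Atlas UI.
# Connection string is a placeholder; field values are illustrative.
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://<db-user>:<db-password>@cluster0.<id>.mongodb.net/"
)
movies = client["moviesDB"]["movies"]

movies.insert_one({
    "title": "Mary Poppins Returns",
    "cast": ["Emily Blunt", "Lin-Manuel Miranda"],  # illustrative values
    "genres": ["Fantasy", "Musical"],
    "year": 2018,
})
```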
So that change event describes a create operation: I created a new document in my Mongo database, that was captured by my connector, and the whole change event is now streamed into my Kafka topic. Okay? So let me create another one; I have prepared some of those up front. Okay, let's go back to my Mongo. Insert the document. That's another movie, it's a comedy, let's insert that. Okay, and I have a third one, which is The Mule, a nice Clint Eastwood movie. Whoa, whoa, whoa, what am I doing? Okay, so I insert that one as well. So I have now three documents; I expect three messages in my topic. If I refresh that thing, indeed I have three, and all of them — let's look at the last one — you see the operation is still C, create. Okay, now I can change a document in my Mongo. So let's do that. One of the things I always wanted is to play in a movie, and because I can't really do that, I can add myself to the cast of one. Okay, so I can add an array element to cast, and I will add myself to it. Okay, so I add myself to the cast, and I can update my document. There you go. So that would normally, I hope, have generated another message in my topic. Yes, indeed, that's this one. It's still about The Mule, but the thing now is that you see the operation is U, for update, because I updated an existing document, and you see here that I'm part of the cast now, okay, because that's the change that I made. So you can add documents, change documents, delete documents if you want; this will all be captured by your connector and streamed into a topic on Streams for Apache Kafka. And obviously from there, you would typically consume those change events and do something useful with them, maybe update another system or whatever. I'm about to record a video where I actually have a full pipeline where those change events are consumed by the AWS Kinesis service, trigger a Lambda, and then update an Elasticsearch index. So stay tuned for when I release that video over the next couple of days. But that would bring us too far to do here. So what you see here is just the first part: you capture change events, and then obviously you wanna do something useful with them. Okay, and that kind of concludes the workshop. We are still nicely on time. So just to recap what we did: we created a source connector, we created a sink connector, and basically demonstrated how you can consume data from data sources and then build pipelines to data sinks. All this without having to install anything on your local system. It's all running in the cloud, so we can actually build potentially very interesting functionality which is completely running in the cloud. So no infrastructure burden as far as you are concerned. So that's it as far as I'm concerned. Thank you for attending, thank you for following along if you managed to follow along; if not, as I said, the instructions should be detailed enough to just do it at your own pace, today or later when you see fit. You can create Kafka instances and connectors whenever you want, so that preview service remains available. That's it for me. Yeah, I wanted to show something, Bernard, just to make sure that our audience knows. It's just a quick thing here, so we can close with this. Remember that you can always go. Christian, if you can, thank you. Remember you can always go to console.redhat.com to try the service. This is free and available. You have 48-hour trials and you can practice as many times as you want. 
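As a hint of what a downstream consumer of those change events might look like, here is a hedged sketch that reads the CDC topic and branches on the Debezium op field (c for create, u for update, d for delete). It assumes the topic follows the namespace.database.collection naming convention described above and that the record value is the plain Debezium envelope without a schema wrapper; host and credentials are placeholders as before.

```python
# Downstream sketch: consume the change events and branch on the Debezium "op" field.
# Topic name assumes the <namespace>.<database>.<collection> convention; placeholders as before.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "mongo-atlas.moviesDB.movies",
    bootstrap_servers="<your-instance>.kafka.example.cloud:443",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="<service-account-client-id>",
    sasl_plain_password="<service-account-client-secret>",
    auto_offset_reset="earliest",
    group_id="cdc-demo",
)

for record in consumer:
    event = json.loads(record.value)
    op = event.get("op")          # "c" = create, "u" = update, "d" = delete
    after = event.get("after")    # document state after the change (where applicable)
    if op == "c":
        print("created:", after)
    elif op == "u":
        print("updated:", after)
    elif op == "d":
        print("deleted a document")
```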
Remember it's the same for our OpenShift Streams for Apache Kafka. We run these webinars, or we try to run these webinars, on a monthly basis. On the YouTube channel for developers there's a good amount of recordings of us doing these over and over again, for Kafka, for Service Registry; we have a nice one on Kafka Streams processing. So go check it out; there's a lot of information in there. You can always find us on LinkedIn and Twitter, and connect with us through DevNation. There's a DevNation channel for developers. Make sure you find us there, and thank you so much for coming, and what a wonderful job, Bernard. This looks great. Thank you. Thanks everyone. Okay. Thank you.