Welcome, we're very excited to have you here today. Today's webinar will guide you through a hands-on exercise so you can try our new cloud services, OpenShift Streams for Apache Kafka and OpenShift Service Registry. My colleague Bernard Tison will demonstrate how to set up a Kafka instance, how to set up a service registry, and how to use standard specifications for schemas and APIs. Finally, you will get to transfer data between applications, using the Kafka service as a streaming service and the service registry as a schema registry.

Before we get started, I wanted to share with you three links that will facilitate the work today. You're gonna see those on the chat, okay? So feel free to navigate to each one of those links and familiarize yourself with the content. Meanwhile, I'm gonna do a quick introduction to both services so you get some basic information, and then we're gonna get started, okay? So we're gonna talk a little bit about the Kafka ecosystem, super important for us.

Before we dive in, I'm gonna cover something that's part of our vision at Red Hat, and this is very important: Red Hat is expanding its open hybrid cloud technology portfolio, and it's doing this with a new set of managed cloud services that include platform services, application services and data services. The goal is to make sure that we're providing you good full-stack management, a unified experience, and support across hybrid environments. In this diagram, if you read it from the bottom up, you can see that we have a lot of partnerships with the cloud providers, and we have created specific offerings for managed OpenShift on the cloud provider of your choice. On top of this, we're designing this new set of cloud services that natively integrate with OpenShift and deliver a streamlined developer experience, so you can build, deploy, manage and scale cloud-native applications in a much easier way. We are going to spend most of our time talking about the two services that are here in the middle.

So what is Red Hat OpenShift Streams for Apache Kafka? Basically, that's a fully hosted and managed Kafka service for stream-based applications. The service has been designed for IT development teams that really wanna incorporate streaming data into applications, so you can deliver experiences in real time or even deliver a stream-based application. There are three benefits of using OpenShift Streams for Apache Kafka for customers, and those three are shown here on the right side of your screen. The first is that you can begin developing immediately, so you can deliver applications faster. The second is that we're providing you a service you can connect to no matter where your applications run, so it really helps you simplify the application life cycle across hybrid cloud environments. And finally, we're investing a lot in something that we're calling the Kafka ecosystem, because as you know, Kafka as a technology by itself is not enough: you need a bunch of other services that can really help you put together a streaming platform, or to create stream-based applications. So we're gonna talk about a lot of these today, and you're gonna get your own Kafka instance and do a bunch of cool things. The second service I wanna talk to you about today is Red Hat OpenShift Service Registry.
And that one is a schema registry for Red Hat cloud services that makes it very easy for development teams to publish, discover and reuse artifacts. And when I talk about artifacts, I talk about APIs and schemas. This service is offered as a multi-tenant service, it's basically a SaaS, and it serves as a central repository for your APIs and schemas. It's compatible with industry standards, meaning it supports all the standard specifications and formats that you can use. And finally, this service is fully integrated with Red Hat OpenShift Streams for Apache Kafka, but you can also use it with other services like OpenShift API Management, okay? The service is fully hosted and managed by Red Hat, and what that means is that we offer 24-7 premium support and a 99.95% SLA.

Very quickly, I already mentioned this, but let's mention it again: Service Registry provides full integration with Red Hat OpenShift Streams for Apache Kafka. It allows you to map your Kafka topics to the appropriate schemas, okay? So it helps you do things like governance and centralization of all the schemas. It also helps with decoupling, because as you move the schema outside of your application, you decouple that information from the application, making it more efficient and helping you reduce costs. And finally, it provides visibility when you have a larger catalog of schemas; maybe your developers have created a bunch of them, and they're all stored in that service registry, where they're easy to discover. Not only that, it can help you with serialization and deserialization, as well as making sure that all the schemas and APIs that you have there stay compatible.

Finally, one last thing: names. You saw that the names of our products are really long. For that reason, you're gonna hear us shorten some of the names as we talk, especially Bernard. Instead of saying Red Hat OpenShift Streams for Apache Kafka, you will hear us say managed Kafka, or just Kafka. In the case of Service Registry, the same: we will call it the registry, sometimes service registry. We're just trying to make it easier to consume when we're talking about these services.

So let's talk about what brings you here today, okay? What we're gonna do today is try Kafka and Service Registry. The goals for this session are supporting you on the creation, provisioning and usage of the managed Kafka service as well as the service registry. As a third step, you're gonna deploy a Quarkus-based Java application to an OpenShift cluster, and we're gonna show you how you can produce and consume Kafka messages in Avro format.

So what are you gonna need for this today? First, your Red Hat account. We sent you an email yesterday, and during registration we asked you to please create a Red Hat account. We need you to have those credentials handy; those credentials are gonna let you use our whole environment, and once you're authenticated, it'll be very easy for us to give you access to the systems. So this is not a marketing thing, this is really just user-based authentication. I just need you to make sure that you create a Red Hat account. Second, you're gonna have to request a Kafka instance; if you didn't do it before, we're gonna walk you through it. Third, you need to request a service registry instance as well.
All of this happens on the same page, very easy; it's all guided menus, you'll get there fast. And then the last one is to request an OpenShift cluster. So you're gonna be exposed to a lot of technology today, many systems. So pay attention; we have all the links for you, we have all the steps, and even if you get lost, you're gonna be able to read it yourself. We're gonna keep a very fast pace today.

So again, the links that you need today: I copied them in the chat, and they're organized by priority. The first one is the redhat-scholars GitHub.io link. If you see that link in the chat, please click on that one; it has all the step-by-step instructions that you need to complete this workshop. Bernard will walk you through many of the steps, but I need you to have that handy, just in case you lose your pace, or you wanna go faster, or you just wanna have the link to do it later. The second one is our DevNation Slack channel. We love Slack at Red Hat, and basically what we have here is a DevNation channel where we carry on amazing technical discussions. You'll get a lot of support there: developers, DevOps, architects, all of you that are looking for information, that wanna troubleshoot, that wanna ask questions, please engage with us. And finally, you have the workshop guide. You won't strictly be needing that one today, but that guide does have the links, so if you don't wanna copy them from the chat, you can find them there. It's a nice guide to have, and you can read it anytime. And that's pretty much everything that I have. So without further ado, I wanna hand it over to Bernard Tison, and I hope you enjoy the workshop.

Okay, thank you, Jennifer, for this nice introduction. So hi. What we're gonna do today, in a nutshell, is what you see on the slide here, but let me give some context. Some of you might have been in one of our earlier workshops around OpenShift Streams for Apache Kafka, well, last year actually. In that workshop you created an instance of Apache Kafka and then deployed applications to send messages over this Kafka instance. Today we're gonna take it a step further and introduce Service Registry to that mix.

And why do we need a registry when talking about Kafka? Well, it's very simple. Apache Kafka itself does not make any assumptions about the message format, so you can use things like JSON, or just text, or whatever MIME type. Popular formats include binary formats like Protobuf and Avro, and the advantage of these binary formats is that they are a lot more compact than, for instance, JSON, so you need less storage space and less bandwidth. The downside, or let's say the characteristic they have, is that they require a schema to serialize and deserialize, and typically that schema is stored and managed in a schema registry, which is what our managed service registry is.

So what we're gonna do today in this workshop is provision a managed Kafka instance, provision a managed service registry instance, and then actually run two Quarkus applications, one that's a producer and one that's a consumer. The producer will send messages which represent a quote; the exact payload is not that important.
The messages are in Avro format, and the consumer will consume them. The producer will get the schema from the registry, or, the first time it produces, if there is no schema yet, it registers the schema itself. So then the schema is in the service registry instance, and when the consumer gets a message, the message contains the global ID of the schema in the registry, so the consumer can fetch the schema from the service registry and use it to deserialize the message. So this is basically what we're gonna do.

And with that, I'm gonna share my screen; let me turn on the screen share, there we go. What you should be seeing is the front page of the workshop guide. The guide is published on GitHub Pages, so you can go back to it anytime you want. And it's organized around the three steps we have today: provision a Kafka instance, create the service registry instance, and then use Quarkus applications to actually use Kafka and Service Registry. We made the workshop in such a way that you don't have to install anything on your laptop; the only thing you will need is a browser window. The first page in the guide is a kind of introduction, and you see the same architecture that you saw on the slide a couple of seconds ago.

So we can move directly to the first section, and that's provisioning a Kafka instance in OpenShift Streams for Apache Kafka. For this you need a couple of steps. I suppose that you have a Red Hat account. If you have that Red Hat account, you can go immediately to console.redhat.com. Let me copy that link and paste it in the browser. In my experience, what works best is Chrome incognito, but Firefox should work as well; for this session, I'm going to use Chrome incognito. So I go to the landing page of console.redhat.com, where it first asks me to log in. I'm going to use one of my accounts; if you follow along, you should definitely use your own account. Username and password, and that logs me in to console.redhat.com, which is our general landing page for everything cloud services, both OpenShift and the application services.

To provision a Kafka instance, I go here to application services, and then you see this blue button here. I can click that one, and that takes me to Streams for Apache Kafka and then Kafka instances. I have no Kafka instance provisioned, so with my Red Hat ID I can create a Kafka instance, a trial instance that will stay up for 48 hours and that you can use as you wish. So that's what I'm going to do: create Kafka instance. It needs a name; I'll call it 'definition'. We only support one cloud provider at the moment, that's Amazon, but you can select your region. Now, I am in Europe, so I'm going to take Ireland, that's closest to me; if you're in the US, you probably want to take Virginia. And then I can do create instance. What you see here on the right are the details of the instance: duration 48 hours, and some other limits. But as you can see, if you're familiar with Kafka, those are pretty liberal limits, so you can really use that Kafka instance to experiment with stuff during today's session. So if I do create instance, my screen should refresh here. Yes, and my creation is pending. This is going to take a couple of minutes, one, two minutes max.
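For reference later on: the quote messages in this workshop are described by a small Avro schema. It looks roughly like the sketch below (the exact namespace and field names come from the workshop repo, so treat these as approximations):

```json
{
  "namespace": "org.acme.kafka.quarkus",
  "type": "record",
  "name": "Quote",
  "fields": [
    { "name": "id",    "type": "string" },
    { "name": "price", "type": "int" }
  ]
}
```

The producer serializes each quote against this schema and stamps the schema's global ID into every message; the consumer uses that ID to fetch the same schema from the registry and deserialize.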
While the Kafka instance is being created, what you can do in the meantime, if you follow along: you will also need a service account. Both the Kafka instances and the service registry instances that we're going to create later are secured with service accounts, so you need a service account to access those instances. On the left side menu, under application services, you click on service accounts. You see that I already have one, but I'm going to create a new one for the sake of this workshop. You can have up to five service accounts per Red Hat account ID in this setup. So I create a service account, and again I will give it a name. And if I do create, it will create a service account.

Now, what's important is that the service account has a client ID and a secret. If I close this window, I cannot get to the secret anymore, so it's very important that you copy those, because we will need them later. So you copy the client ID and you copy the client secret, somewhere in a text editor. Then you confirm that you have copied them, you can close that window, and your service account is created.

Okay, let's go back to my Kafka instance here and see where we are. It's still being created. While this is going, we can move on directly. This is not exactly the same order as in the workshop, but otherwise we will have too many dead moments here. So while my Kafka instance is being created, I can already jump to my service registry. If I go here in the left menu and click on Service Registry and then Service Registry instances, you get an overview of the service registry instances. As you can see, I have none, so I can create one: create Service Registry instance. I will give it a name as well, so 'definition'. Here as well we have some limits: the duration will be two months, so it will stay up for two months, you can have up to 10,000 artifacts, and some other limitations for this trial.

If I do create, an instance will be created, and that should be fairly fast, because there is a difference. From a technical point of view, the service registry is actually multi-tenant: when you create a service registry, you get a share of a big hosted installation of Service Registry, so the time to create is just the time to create this tenant, and that's very fast. With Streams for Apache Kafka, on the other hand, you get your own Kafka instance that needs to be provisioned from scratch, and that's why it takes a little bit longer. But I'm a bit surprised how long this is taking; it's still busy. Let's wait a couple of moments more, and maybe refresh my screen, that sometimes helps to refresh the state. No, it seems it's still being created. So, okay.

So let's jump to yet another step that we will have to do anyway, and that is the developer sandbox. For that, you open a new tab in your browser, and the link is what you see here. Let's see, I'm already at chapter three in the guide, but we can come back at any time. Here it explains how you can get to the developer sandbox. We're actually going to use CodeReady Workspaces, which is a web-based IDE, to run some Quarkus applications, and we can use CodeReady Workspaces on top of the developer sandbox. All this allows us not to have to install anything on our laptop to do the workshop. So this is the link that I need: developers.redhat.com, Developer Sandbox, Get started.
So if I go back here and click on that link, you go to Getting Started, and it says Launch. If this is the first time you do this with your Red Hat account ID, there might be some additional steps where basically your account needs to be approved for the dev sandbox, and if you don't have a Red Hat email address, that will probably include a phone verification. In case you wonder why all this is needed: the developer sandbox is actually a share of a hosted OpenShift where you can deploy applications, and obviously what we want to avoid is that people start to create fake accounts and deploy things to do crypto mining and stuff. So as a way to protect us against abuse, we do this phone verification: you will have to enter your phone number, and then you get a text message with a code which you will have to enter. But I don't have to do all this, so I can go directly to my developer sandbox.

Well, actually there is a second step, 'start using your sandbox', and this opens a new tab that goes to my dev sandbox, directly to the topology view. But the only thing we're going to use today on the dev sandbox is CodeReady Workspaces, and to get to CodeReady Workspaces, you see this icon here, this square icon with nine little squares. If you click on that one, you will see, under Red Hat Applications, CodeReady Workspaces. If you click on that link, that will open a new tab to CodeReady Workspaces. You click on the login link for the sandbox. You might need to give some extra permissions, but that's all okay, that's expected. And then, because CodeReady Workspaces has its own SSO instance which only knows your username, it asks for some other details: your email address, which in my case is one with a plus sign, and I can put my name here, and then I should be able to do submit, and this takes me into CodeReady Workspaces. So I will leave it here for the moment; we'll come back to CodeReady Workspaces later in this workshop, where you're gonna create a workspace and do the work.

But now, back to my Streams for Apache Kafka; let's force a refresh here. Well, that's not what I wanted. So let's go back via my service registry and then back to Streams for Apache Kafka, because... okay, this is ready. My Kafka instance is ready to be used. Now there are a couple of additional steps that we need to do. Let me go back to the guide for a second. So, provision a Kafka instance: we have our Kafka instance, and we created a service account. Now we need to do a couple of things.

First of all, we need the bootstrap URL, because we're gonna need that to configure our Quarkus applications. If I go to this kebab icon here for my Kafka instance and I click on Connection, the first thing here is the bootstrap server. That's my bootstrap URL; that's what allows applications to connect to my Kafka instance. I will need that, so I copy it and paste it where I pasted my service account client ID and client secret. I will also need, a little further down, the token endpoint URL. My Kafka instance is secured with the SASL/OAUTHBEARER mechanism, which uses SSO behind the scenes, so I need the token endpoint to get a token. So I copy that URL as well. And then, I already have a service account; you can create a service account from here as well, but I've already done that.
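To give an idea of what those values are for: a plain Kafka client would be configured with properties along these lines. This is a sketch using the Strimzi OAuth login callback handler, which is one common way to do SASL/OAUTHBEARER against the managed Kafka; the Quarkus applications later wire up the same values through environment variables:

```properties
bootstrap.servers=<your bootstrap server>
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.client.id="<service account client ID>" \
  oauth.client.secret="<service account client secret>" \
  oauth.token.endpoint.uri="<token endpoint URL>";
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
```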
So now, the next step: I need to set permissions for my service account on my Kafka instance. If I go back to my console and click on the name of my Kafka instance, I see a screen with a number of tabs. There is a dashboard; I haven't used the instance yet, so there is not a lot of data to be seen. I haven't created a topic, so there are no topics, and I have no consumer groups. But what I want to do here is set the access level. If you go to the Access tab, you will see the default access rules: all accounts, and that includes all user accounts and all service accounts, have some limited access to the Kafka instance. You can describe the Kafka instance, describe consumer groups, describe topics. That's not enough to actually produce to topics and consume from topics; we need more. So we can now give our service account the necessary permissions.

If I click on Manage access, I can select an account. You can pick 'all accounts', but I'm going to use the service account that I just created; it was called 'definition'. If I click next, I see the existing permissions, the ones for all accounts. But now I can assign permissions to my service account, and here I can go fairly wide or fairly granular. For this workshop I need two permissions: I need to be able to consume, and I need to be able to produce. Now, they actually changed this dialog since yesterday, so this is the first time I see this screen; bear with me.

So, I need to be able to consume from a topic. For the topic name I can use 'is' and then a star, so let's say that's for all topics. And then the consumer group, same thing: I will give rights to all consumer groups, with a star here. That's one permission. The other permission I need is to produce: topic 'is' star again, allow, write. So this gives me read access to all topics, write access to all topics, and read access to all consumer groups, and I think that's more or less what I need. Actually, just to be on the safe side, I'll change the topic permission to allow all operations. If I save that now: okay, my service account can read all consumer groups, and it has rights to all topics. Those earlier entries are now effectively superseded by this one. And I'm going to double-check just to be sure: yes, this is what my service account needs to be able to consume from and produce to my Kafka instance.
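Under the hood, that access dialog manages Kafka ACLs. Just to make the model concrete: the permissions I ended up with correspond roughly to ACL bindings like these. This is a sketch with the plain Kafka Admin API (in the workshop you only use the UI, and the connection properties would be the same SASL/OAUTHBEARER ones as above):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

import java.util.List;
import java.util.Properties;

public class GrantAcls {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // same bootstrap server and SASL/OAUTHBEARER settings as in the properties sketch above
        try (AdminClient admin = AdminClient.create(props)) {
            String principal = "User:<service account client ID>";
            // allow the service account to read and write all topics,
            // and to read all consumer groups
            AclBinding readTopics = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "*", PatternType.LITERAL),
                new AccessControlEntry(principal, "*", AclOperation.READ, AclPermissionType.ALLOW));
            AclBinding writeTopics = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "*", PatternType.LITERAL),
                new AccessControlEntry(principal, "*", AclOperation.WRITE, AclPermissionType.ALLOW));
            AclBinding readGroups = new AclBinding(
                new ResourcePattern(ResourceType.GROUP, "*", PatternType.LITERAL),
                new AccessControlEntry(principal, "*", AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(List.of(readTopics, writeTopics, readGroups)).all().get();
        }
    }
}
```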
Okay. So now that we're here, we can also create the topic that we will need. There's a wizard for this as well. I don't have any topics yet, so I can create one. A topic needs a name; for this workshop, my Quarkus applications expect 'quotes' as the topic name, so I'm going to use that as the name of the topic I create here. Then you can choose the number of partitions. Partitions are one of the things that make Kafka scale, because they allow you to scale out your consumers. For this workshop one partition is enough, but you can create more if you want; let's keep it at one for this example. Retention time is how long messages in a topic will be retained; the default here with the managed instance is a week in time and unlimited size. As I know that my Kafka instance will be destroyed within two days, a week is more than enough, so I can keep the defaults here as well. And then the replicas are not something that you can change today: the managed Kafka instance has three brokers, so we use three replicas for every topic, with two minimum in-sync replicas. This is just informative; you cannot change those values. If I click finish, my topic is created: one partition, seven days retention time (well, here it's in milliseconds, and that's a lot of milliseconds) and unlimited retention size. Okay, so that's what I need to set up my Kafka instance, including the topic.

Now we can go to the service registry instance. I already created my instance; what I still need now is the URL to connect. If I go to Service Registry, then Service Registry instances, and click on this kebab icon on the right and look at Connection, you will see a number of things. The managed service registry actually supports three APIs. There is the core API, which is the native API for Service Registry. We have a compatibility API, and compatibility here means with the Confluent Schema Registry: if you have applications that today use the Confluent Schema Registry, you can port them without hassle to our service registry by using the compatibility API. And the third API that we support is an upcoming standard from the Cloud Native Computing Foundation, which tries to standardize an API for schema registries; we support that one as well. The API that you're going to use today is the core API. I'm going to need that URL when I configure my Quarkus applications, so I copy it to the clipboard and paste it where I pasted all the other things, like my bootstrap server and my client ID and client secret.

So that's one thing. Second, because my applications are going to publish and retrieve schemas, I also need to give my service account access permissions in the service registry. We're going to upload artifacts in a second, but the first thing I need to do is set access. That should normally open a window with access rules. Should, but that takes a while; I'm not sure what's going on here. Let me refresh to see if that comes back. Service Registry instances... okay, yeah, refreshing helped. There are no roles assigned, so I grant access. You can select a service account here again; I'm going to use the same one, my 'definition' service account. And the role that I need is called Manager: a Manager can write and read artifacts in the service registry, and that's what my applications are going to require. So I can save this, and you see that my service account is now a Manager.

So that's more or less the setup. Now, before we move to our applications, I think we're still good on time, so what I can quickly show you is this part here in the workshop guide: you can upload artifacts from within the UI and even manage them. So let me quickly show how to upload an artifact in the service registry. Okay.
You can do that from this UI, but you can do it through the API as well, and your applications can do it automatically. If you do it through the UI, you have a wizard. You can set a group; let's take the same values as in the guide. A group is a kind of namespace, if you want. I can give it an ID as well; I use the ID from the guide here, that's fine. I can let it auto-detect the type, or I can specify it; we support a number of kinds of schemas, like XML, WSDL, Kafka Connect schemas, GraphQL, AsyncAPI, OpenAPI, Avro. Let's use Avro. And then you can drag in a file, if you have an Avro schema on your file system or something, but you can also just copy-paste it. If I go to the guide, this is actually my schema, and there is this copy icon here; if I click on that, the schema is copied, and I can paste it here. And if I do upload, this creates my first artifact, which is now listed here. You see it has a group name and an ID; the name comes from the schema itself. And you see that it has a global ID and a content ID. This ID is what applications will use to retrieve the schema when they need it. I can look at the content, and you will see that it's exactly what I uploaded. Okay, so that's a very simple Avro schema. For the sake of time, I will leave it here; if we have time at the end, I can come back and show how you can use validity and compatibility rules to manage the versioning and evolution of your schemas. But for now, I think it's best that we move to the third part, where we're going to start using our Kafka instance and our service registry through some Quarkus applications.

Okay. So we're now here: we accessed our dev sandbox and we started up CodeReady Workspaces. Now we need to build our workspace. Let me make this a little bit smaller first. So, there are several ways to create workspaces. CodeReady Workspaces is a cloud-native, browser-based IDE which allows you to pre-configure a complete IDE. One use case: you have a team of developers who all need to work on the same project, so you set up the IDE with the project sources and all the tooling they will need, and then hand that over, through a devfile, to every one of your developers, so that they can very quickly spin up a workspace that's completely pre-configured. We have some sample workspaces here that are pre-configured, and I made one myself for this workshop, which is hosted on GitHub, so you can import it through a Git URL. If you go to the guide, that's the URL that is here, so I can copy it, paste it here, and then do Create & Open. This will create a workspace based on the devfile that's part of this Git repo. That might take a couple of minutes as well, because it needs to pull some images for the workspace, et cetera. If I go to the logs, I should be able to follow this; at least something should be starting here. I can also check by going back to my topology view, where things should start to appear as my workspace is being created. It takes a while to get started... okay, it starts to do stuff, so we can see that here. This is going to take a couple of minutes as well.
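While the workspace spins up, a quick aside: the upload I just did through the UI can also be done against the registry's core REST API. Roughly like this, as a sketch based on Apicurio Registry's v2 API (the group and artifact ID are just placeholders, and you'd first obtain an OAuth access token for your service account from the token endpoint):

```bash
# REGISTRY_URL is the core API URL copied earlier; TOKEN is an OAuth access token
curl -X POST "$REGISTRY_URL/groups/my-group/artifacts" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -H "X-Registry-ArtifactId: my-schema" \
  -H "X-Registry-ArtifactType: AVRO" \
  --data-binary @quote.avsc
```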
Maybe I can go back to my service registry in the meantime and show you the thing with the compatibility rules. So, while my workspace comes up, I go back to my service registry. I now have here the artifact that I uploaded myself. I can enable content rules, so validity rules and compatibility rules, either at the global service registry level or per artifact. Let's say I enable the validity rule; then I can say that, for instance, every new version of that artifact that I upload needs to be valid, and here I pick, for instance, syntax only. I can also enable compatibility rules, where I have a little bit more choice. For the sake of this example, let's say that I want backward compatibility. That means if I create a new version of my schema and I want to upload it, I want to make sure that that new version is backward compatible with my existing version, so it does not break applications, okay?

So now that those rules are enabled, I can try to upload a new version of my artifact, and we can see those rules in action. I go back here to my service registry instance, and I have a new version at hand. Here we have it: this is a new version of the same schema. The difference is that I have a new field here, so the full name now also has the notion of a middle name, and in Avro speak, the way this is defined means that that field is mandatory. That also means it's no longer compatible with the previous version, because the previous version didn't know that field, so it cannot be mandatory for data written with older versions of the schema. What I expect now, if I copy that and try to upload it, is that my compatibility rules will refuse this version, because it breaks the backward compatibility rule. So let's upload my artifact. And indeed, it says the content is invalid because there is an incompatible difference; I get a rule violation exception here, and it's about this mandatory field.

Okay, so I have to fix this. To fix it, I go back to my guide, to a new version where, and that's what you see here in this line, I made that new field optional. The middle name field is still part of my schema, but now it can be a string or it can be null, and the default is null. That's how in Avro you define an optional field. That means this version of my schema is backward compatible with the previous one, because my new field is entirely optional. So if I try to upload this one, it should normally be accepted. I'll upload that. Yay, that works. So now you see that I have two versions of my artifact: one, two, and 'latest', which points to two. And you see that my new version has a new global ID and also a new content ID, and I can go back and forth between my two versions here. Okay, so this is just an illustration of what you can do with version management, entirely through the UI.
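In Avro terms, the difference between the rejected and the accepted version of that field is small. The rejected version declared the new field as a plain mandatory string, { "name": "middle_name", "type": "string" }, while the accepted one makes it optional with a default (the field name here is assumed for illustration):

```json
{ "name": "middle_name", "type": ["null", "string"], "default": null }
```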
Now let's go back to my CodeReady workspace, which in the meantime has started up. Before it finishes starting, there is this slightly annoying thing, which I don't know how to disable: because I'm importing a couple of projects from GitHub into my workspace, the IDE asks me if I trust the authors of that GitHub repo. Considering that I am the author: yes, I do trust myself. So I click 'yes, I trust', and then you will see that two projects are being imported. Those are the source code of the Quarkus applications that I'm gonna use to produce and consume messages. The rest of those messages I can ignore for the moment: I'm not gonna install extra plugins for this, so I'm gonna ignore that message, and all the other ones as well. Let me make this a little bit wider.

So what you see here is that I've got two projects: a producer and a consumer. Those are Quarkus applications, Maven applications, so you have the normal Maven structure with a pom file. I'm gonna do two things here: launch my consumer first, and then launch my producer. We're gonna run them in the IDE; I'm not gonna really deploy them to OpenShift, I'm gonna run them in what we call Quarkus dev mode, directly in my IDE.

But before I do that: we're using Avro, so in my source code there is an Avro schema, which is also a very simple schema. It has a namespace, and then it has a record type, so its type is 'record', the name of the record is 'quote', and the quote has two fields: an id field, which is a string, and a price field, which is an int. We use the same schema, obviously, for the producer: the producer is gonna create Kafka messages in Avro format which represent quotes, an ID and a price, and the consumer is gonna consume those messages and show them in the browser window as they arrive.

We use Quarkus, so the source code is actually very, very simple: there is one class. We use reactive messaging in Quarkus; I don't have the time to go too deep into details, we have like seven minutes left here, but basically it has a notion of channels, and I'm gonna consume from a channel and then produce a server-sent events (SSE) stream that I can consume in my browser. Most of the rest happens in configuration, and the way this works is that I have a number of configuration properties: for my registry, the realm, the client ID and client secret (so that's my service account), and the registry URL that we copied before; and for my Kafka instance, the bootstrap server, plus authentication as well. For all those things I can use environment variables, so I'm gonna set those in a terminal in the IDE and then launch my Quarkus application.

So let's do this. In my IDE I can launch a terminal: if I go to the top menu and click 'open terminal in specific container', I have a number of choices, and the one we're interested in is the Quarkus Maven container that I configured; that's exactly what we need, a terminal in there, so that I can launch my Quarkus application. And then I point to one of my two projects. I said I would do the producer first; nope, that's not correct, I'm gonna do the consumer first, sorry for that. Quarkus Maven, consumer first. That points to my consumer project here, so I need to go to the consumer subdirectory. Whoa, okay, that's neat. cd consumer, okay. And you will see that I have the same structure here, with my pom and everything, exactly the same as you see in the project overview. The first thing I need to do before I launch my application is set all my environment variables.
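For reference, that block in the guide amounts to something like this. It's a sketch: the exact variable names are defined by the guide and the applications, so copy them from there rather than from here:

```bash
# values collected earlier from console.redhat.com
export KAFKA_BOOTSTRAP_SERVER=<bootstrap server>
export RHOAS_CLIENT_ID=<service account client ID>
export RHOAS_CLIENT_SECRET=<service account client secret>
export RHOAS_TOKEN_ENDPOINT_URL=<token endpoint URL>
export SERVICE_REGISTRY_URL=<service registry core API URL>

# then, from the application's subdirectory:
cd consumer
mvn compile quarkus:dev
```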
If you go to the guide, in the 'use Quarkus applications' section, you see this whole block of exports. The thing you will have to do is replace all the placeholder values with the values that you copied before: your Kafka bootstrap URL, your registry URL, all the things that I pointed out. Now, I have done that while I was speaking, so I have all my environment variables set here. With that, I should be able to launch my application, and this application should connect to Kafka and then wait for messages that are posted to the quotes topic.

To do that, I can do mvn compile quarkus:dev. Quarkus dev mode is actually a very useful way to run a Quarkus application directly in your IDE or in a terminal, with automatic reload and debugging and stuff like that, so very useful. If I do that, it should build my application; it might have to download the internet first, remember, this is Maven... no, this is fairly quick. Good, okay, so it built the application, and now it's actually starting the application in dev mode. We should see some stuff happening here in the terminal: my application is starting up, there we go. We have all my Kafka settings here, and everything seems quite okay, and then it says here, in this popup, that the process with the quotes endpoint is now listening on port 8080. I say yes, because that will open a new tab in my browser pointing directly to the application. So you will see: I'm now on the quotes page of my consumer application. Nothing is happening here yet, because there is no producer that sends quotes into the application. So that's what I'm gonna do now, and I have a couple of minutes left, which should be just sufficient.

Back to CodeReady Workspaces. I open a new terminal for my producer, so Quarkus Maven, and it's the producer that I want, okay, directly here in this producer directory. I copy the same block of environment variables as for the consumer. I'm not gonna go through the application properties and stuff like that, for the sake of time, but they're very analogous to what we saw; basically it uses the same environment variables. And now I can do the same thing here, mvn compile quarkus:dev, and that will launch my producer app, running in my terminal, in my IDE, on the dev sandbox. That takes a couple of seconds as well: compiling, then starting up. Yeah, this error can be ignored: I now have two applications running in dev mode, so they both try to use port 5005 for debug, but I'm not using debug for this one, so I can ignore that error; it should not prevent my application from starting up. And I can ignore this one too: there is no real UI for this producer, so I click no here.

So now my producer should be producing messages, and the easiest way to see that is by refreshing the page here. And if the demo gods are with me... yes, you see quotes appearing; every couple of seconds a new quote will appear. So what happens: my producer sends messages in Avro format to the quotes topic; the consumer retrieves the schema from the service registry (I will show in a moment where that schema is), deserializes the messages, and shows them on the screen. And to finish up: if I now go to my service registry instance and go to 'definition' here, you will see that I have a new artifact. That artifact was pushed there by the producer app, which couldn't find a suitable schema in the registry based on the topic name and the namespace, so it pushed the Avro schema automatically to the service registry and added the global ID of that schema to every message, so that the consumer can retrieve it from the service registry.
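To recap in code what those two applications do: condensed, they follow the standard Quarkus reactive-messaging pattern, roughly like the sketch below. Class and member names are assumptions (the real code is in the two projects in the workspace), Quote stands for the Java class generated from the Avro schema, and depending on your Quarkus version the javax/jakarta package names differ:

```java
import java.time.Duration;
import java.util.Random;
import java.util.UUID;

import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

import io.smallrye.mutiny.Multi;

// Producer side: emit a random quote every couple of seconds to the "quotes"
// channel, which the connector configuration maps to the Kafka topic "quotes".
// Quote is the class generated from the Avro schema (id: String, price: int).
@ApplicationScoped
class QuoteProducer {

    private final Random random = new Random();

    @Outgoing("quotes")
    Multi<Quote> generate() {
        return Multi.createFrom().ticks().every(Duration.ofSeconds(2))
                .map(tick -> new Quote(UUID.randomUUID().toString(), random.nextInt(100)));
    }
}

// Consumer side: read from the "quotes" channel and forward every record to
// the browser as a server-sent events (SSE) stream.
@Path("/quotes")
class QuoteResource {

    @Channel("quotes")
    Multi<Quote> quotes;

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    public Multi<Quote> stream() {
        return quotes;
    }
}
```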
If we look at that schema, it's an Avro schema; you see it also has a name, a group, and its own IDs. I can look at the content and format it here, and this is exactly the same as what we saw before. On the source page I might still have open here... yes, this is exactly the same as the one that was part of my source code. So that schema is now part of my service registry, and this kind of concludes my demo.

There is still a clean-up section that you probably want to go through. I can stop my applications here by doing Ctrl-C. If I look once more, you see that those quotes keep coming in: the producer produces them, the consumer consumes them, and everything works as expected, which is always fun. So I can stop my consumer here, just Ctrl-C, and I can do the same for my producer, and I can restart them as much as I want. But that's what the workshop is all about.

I don't know if there were questions in the chat that I haven't taken care of... I think not. So as far as I'm concerned, this is it. If you followed along, you have your Kafka instance, which will stay up for two more days, so feel free to experiment with it if you like. Your service registry instance will stay around even longer; you can delete it if you know that you won't use it, and you can create another one whenever you feel like it, based on your Red Hat ID. You see how easy that all was. So with that, that's it for me.