Everyone, welcome to this new session for our Red Hat OpenShift Streams for Apache Kafka workshop. Today, myself, Jennifer Vargas, and Evan Shortiss will be guiding you through this workshop. Let's get started. Our agenda for today is: first, I'm going to do a very, very quick introduction to Red Hat OpenShift Streams for Apache Kafka. Then we're going to go through the workshop details, and then it's going to be guided lab time with Evan. There are two links being posted right now in the chat. The first one is for you to join us on the DevNation Slack channel. So if you can, please click that link and open all those windows while I do the intro. We welcome you to give us feedback, ask your questions in the chat, and engage with us. Let us know if you want us to go slower or faster. Anything that you need, please let us know. The second link that you're going to see there is the guide for this workshop. You probably received that during registration. In that guide, steps one through three are introductory; step four is where we have all the details, step by step, for this workshop. So that's the one you should be looking at when you're driving your own practice. So let's start: Red Hat OpenShift Streams for Apache Kafka. The first thing that I want to talk about is Red Hat Cloud Services. One of the things that Red Hat is doing is expanding the open hybrid cloud vision. What we're doing right now is launching a set of cloud services that basically provide, first, full stack management, second, a unified experience, and also the option to maximize the full value of Red Hat OpenShift. There are three types of cloud services. The first one is the platform services, which is everything that has to do with managed OpenShift solutions. What that means is that we have different offerings, on different cloud providers, of our Red Hat OpenShift Container Platform.
The second part is everything that we're building to complement OpenShift in the cloud, which is the application and data services. Today we're going to put our attention on Red Hat OpenShift Streams for Apache Kafka, which is one of the application and data services. All these services are natively integrated with OpenShift, and the idea is to provide you with a unified experience, or a unified platform, for you to build cloud-native applications, stream-based applications, or real-time applications, okay? So Red Hat OpenShift Streams for Apache Kafka is a fully hosted and managed Kafka service for stream-based applications. It has been designed for developers who are looking to create real-time experiences or real-time applications, where they actually need to move vast amounts of data or stream data. On the right side, we have some of the customer benefits that you will get from using the solution. The first one: it provides faster application velocity and allows you to begin developing immediately. The second one is that we're providing you with a simplified experience across all the hybrid cloud environments. And the last one is that we're providing you with a Kafka ecosystem for you to be able to build different stream-based applications. Kafka is another tool that you will have to complement your stack with when you're looking to build real-time experiences. So on the left side, you have our graphic or diagram, however you want to call it, of what is in the box for OpenShift Streams. We have the Kafka cluster; we have added configuration management, metrics, and monitoring. You have a UI, which you will experience today. We have a CLI, customized for the service — you will see that today. We have something called service binding; more to come about how that can help you connect your applications that are living in OpenShift.
And all of this provides a very streamlined developer experience that makes it very easy for you to use the solution. So, long name, shortened names. The official name of the product is Red Hat OpenShift Streams for Apache Kafka, but as you can tell, it's a very, very long name, and we like to shorten it to OpenShift Streams. But you might hear us say a few other things, like managed Kafka, or just Kafka — just to keep things dynamic so we're not getting all tongue-twisted with this very long name. So what is in it for you today in this workshop? First, we want you to try Kafka. This is a guided workshop, step by step. What you're going to learn today is, first, how to create and provision a fully managed Kafka instance. You will get your Kafka instance and you will create a topic. Second, you'll be able to deploy a Quarkus-based Java application on an OpenShift cluster. And finally, we will guide you through connecting your fully managed Kafka instance with your OpenShift cluster and then binding your Quarkus application to that Kafka instance. So what do you need for today? Besides your laptop, you need to create a Red Hat account and have your credentials handy. That's explained in the guide in step four. The second thing you need to do is request a Kafka instance on console.redhat.com. And the third thing is you're going to have to request an OpenShift cluster on developers.redhat.com, which is the sandbox. We will walk you through that. Evan is going to go through each step, and we will help you through the process if you have questions. If you have suggestions or feedback, just write them down in that DevNation Slack. We really want to hear from you. So, links, links, links. We posted both of them in the chat: the first one, the DevNation channel, and the second one, your guide.
I think I already gave you these recommendations, and we hope you were able to find your way to both of these links. Finally, the quickstart architecture. One of the things that I want to present here is an architecture of what we're doing today, because I talk very fast and I told you what you need, but I didn't tell you how this is going to work, right? So on the first side, the left side, we have the Developer Sandbox. The Developer Sandbox is an environment that gives you your own dedicated OpenShift, okay? So basically, you're going to have access to that OpenShift environment. Within that OpenShift environment, we're going to run through some quick instructions. First, we're going to install that CLI I talked about, the one the Kafka service offers you. The second thing we're going to do is install the Service Binding Operator, which is also one of the features of this Kafka service. And then we're going to install a Quarkus application, because that Quarkus application is going to behave as your producer and consumer when you're trying to connect to your Kafka instance. And finally, we have something called the quick starts. There's more information in the guide about that, but basically quick starts are just guided instructions, or how-tos, to support you in getting started with Kafka. They're very scripted steps, I would say, that you follow using the service, and they will help you do things with the Kafka service. On the right side, you're going to create your own Kafka instance. The Kafka instance that we give you is a dedicated instance, so it's only for you. And in there, you're going to learn a little bit more about Kafka — things like what the connection details of your Kafka are and what parameters define that cluster. You will also have the chance to create a topic and understand things like partitions and replicas.
And finally, you'll be able to see the traffic going from your Kafka instance to your Quarkus app. So we have a lot to do today. Let's get started. Here's just some guidance in terms of the names of the quick starts. Don't worry about this — you don't have to remember anything. Everything's in the guide, and Evan is going to guide you through every step. So now, without further ado, I'll leave you with Evan, who's going to guide you through the workshop. You're on mute, Evan. There we go. I was saying it looks like I'm on stream and now you can hear me, right? Yeah. Yes. Awesome. Okay. Thanks for the intro, Jennifer. All right. So that was a lot of information to digest, right? But I do see, when I go over to the document here, that there's a bunch of you in here, which is great, because that means you're all going through the setup steps. The main thing to do today is not necessarily to listen to me. Myself, Bernard, Jennifer — we're all here to help, right? We want you to be successful today in the workshop. We'd love for you to get through all of the exercises, and what I'm going to do here on screen is go through them as well. But really, we would love for you to go through them yourself at your own pace, and you don't have to keep up with me. You can ask a question about anything you need to, whenever you need to. Like Jennifer mentioned — and I think it's in the chat as well if you scroll up, but if you don't have it, there is a link; I think I can put it back in the chat here — for Slack, and we have this Slack channel over here for the Kafka workshop. If you're not sure about anything, just pop in here and you can directly message myself, Bernard, or Jennifer. You can see me here, so you can click on my profile and private message me if you're not comfortable asking your question in public.
But if you ask it in public, then that's great, because everyone gets to share knowledge, and there are no dumb questions here, because we're all new to this service, I'm guessing. All right, so with that out of the way, like I said, the goal today is for you to get signed up to use this service and actually have some fun, hopefully. So the first thing you need to do is head over to cloud.redhat.com or console.redhat.com. Either one of those links will get you to this page that will allow you to create a new account. Now, I already have an account, so I won't need to create one, but if you click on this, it'll bring you to the page. You can fill in your details and select the personal account for the Red Hat Cloud. Get signed up — it'll take you two minutes. And once you do that, you can then head over to console.redhat.com in your web browser and sign in using the account you just created. So for me, I will sign in using my account. And once you're signed in, you're brought to the dashboard here. There's a lot you can do from here. You can provision OpenShift clusters — which are Kubernetes clusters, but the Red Hat flavor. But today we're not going to focus on that. What we're going to focus on is the application services, up here in the top left. This will allow you to get access to the higher-level services like Kafka. So if I click that, I'm brought to the application services homepage, and from here you can see there are different services. The one we're going to focus on today is Streams for Apache Kafka. You can click on the link here on the left and expand it, and you can see there's documentation, and you can see Kafka instances. Another thing to be aware of is that all of these are marked as beta. So you're getting early access, or an early preview, of these services. When you click on them, you might get a warning that looks like this the first time you use it. That's totally normal.
You just have to accept that it's a beta feature, and as a result, you should expect that things might not always work flawlessly. But once you do get in, you're brought to this page where you can create Kafka instances. Now, you can see I already have an instance listed here in North Virginia, running on AWS. To create your own instance, there will be just a big blue button; you can click on that and it'll bring up a dialog, and that dialog can be used to create an instance. It takes a second. All you have to put in is a name. You can name it whatever you want — it's your own private Kafka instance, so the name doesn't matter; it doesn't have to be anything in particular. And then, since these are trial accounts that you're using, or you're using the free tier, you do get told, for example, that the duration of the instance is 48 hours. So the Kafka you create in this lab will disappear in two days. And also, we don't give you the option of different cloud providers in different regions — you just get the standard free-tier region, and you click create instance. Once you do that, depending on how busy the service is, it usually takes less than five minutes for these things to spin up. They actually can spin up as quick as two or three minutes, but because there's a few people here, it might take an extra minute. But once it has started, you'll get this nice green check mark telling you that your instance is ready to use, and we give you things like a bootstrap server URL over here on the right, which we'll use in the lab later. And you can create service accounts that allow you to access, or connect to, that endpoint using Kafka clients. So that's a SASL username and password, and I'll show you how to do that in a moment.
So once you come in here to cloud.redhat.com — sorry, console.redhat.com — and provision the Kafka instance, the next thing you're going to want to do is get set up in a development environment, and you don't need to bring your own development environment today. It doesn't matter if you have an IDE installed, or Java, or anything like that. We're going to be using the Developer Sandbox, which is what you can see on my screen right now, and that's available on developers.redhat.com. Again, it's in the Word document that we shared, so if you don't have these links, just go to the document and you'll find them. So yeah, over here, what we're going to look at is the Developer Sandbox. And the Developer Sandbox is exactly what it sounds like: it's a sandbox environment that you can use to deploy containers and applications, databases and other things, on top of OpenShift. And it's a pretty generous environment — it has a few gigabytes of RAM and a decent amount of CPU. To get started with the sandbox, you can just come to this page and click the Get Started button. After a moment, you'll get a little box here saying you're ready to start using your sandbox, if you're logged into your Red Hat account. It's going to use the same account you just signed up for a moment ago to use on console.redhat.com — it's the same account being used on both of these services. And once you've logged in and you've started your sandbox, click the link and you will be brought to a display that looks something like this. You effectively now have your own OpenShift environment. And similar to your Kafka instance, it's private. Anything you do, anything you work on in here, only you can see. So again, don't worry about stepping on people's toes — it's all sandboxed for you.
And once you do get in here, if you go back to the document over here, we kind of guide you through it, but basically we have Quick Starts, and we explain how to follow them here in the document. So for example, if I go to the top left over here in my sandbox, I can find the Quick Starts right here. If I click on that, I'm brought to the list, and you can see the various Quick Starts for the Kafka service that we're using today. So what I just showed a minute ago — we're going to be connecting that to this development environment, or this OpenShift environment. Now, the one thing to be aware of, and we do mention this in the document with the instructions, is that there are some duplicates in here at the moment for the guides. For example, you can see here that the binding Quarkus applications guide is duplicated. So whenever you're doing these, choose the one with the little spaceship icon, or the rocket icon — that's the correct one to choose. And like I said, the document will explain that. So don't worry, just follow along with the document at your own pace and you'll do just fine. So the first guide that we want you to do, once you have your Kafka instance set up, is the Getting Started with OpenShift Streams for Apache Kafka guide — or Quick Start, as we call them. To get started with those, you just click the tile and it shows up in this nice panel on the right. It's embedded in the same context where you're working, so you can have the instructions on the right and the work you're performing on the left. And this first guide is nice and easy, to get you started with the service, and it will basically guide you through what I just showed, which was creating that Kafka instance over here. So if you didn't follow along, or I was moving pretty fast there, you can take it at your own pace. Once you get into the Developer Sandbox, the guide will walk you through everything as you follow along.
So for example, the first step here is telling us how to create a Kafka instance. Just to recap: head over to console.redhat.com, select application services, and click that create Kafka instance button under the Streams for Apache Kafka heading. It takes about two or three minutes, like I said, to become ready. And once it is, the next thing we want you to do is start creating some service accounts, and those can be used to connect to your Kafka instance. So, create a service account. Again, it's right over here in console.redhat.com. You just have to go to the service accounts section — you can see I have none at the moment, but it prompts me to create one. All I do is click that button and give the service account a name. The description is optional, so you don't have to put that in. Click create. And what I get here — by the way, I'm showing this to you in the lab, but you naturally shouldn't share this with anyone — what you can see here is a client ID and a client secret, kind of like a username and password. Those can be used by your Node.js app, your Java app, your Python app — whatever tool you're using that supports Kafka — to connect to a Kafka cluster, which is what you created here earlier. So they're effectively your SASL username and password, in the form of a client ID and secret. You want to copy those someplace safe. You don't necessarily need this one for the lab, but if you're using it with, you know, your own applications in the future, you'll need one of these. So once you've copied it down, you can just close the dialog. And now you're ready to use this username and password — or client ID and secret — that you just created to connect to your Kafka instance over here. And like I said, you can find the URL for your Kafka instance by going to the connection settings here on the right. So you just go here, get your bootstrap server, and you can connect. So that's part two of the first lab we want you to go through.
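Putting those two pieces together — the bootstrap server from the connection settings and the client ID/secret from the service account — a Kafka client configuration for this service typically looks something like the sketch below. The host and credential values are placeholders, not real ones, so substitute your own:

```properties
# Illustrative Java-client configuration for a managed Kafka instance.
# <...> values are placeholders you'd replace with your own details.
bootstrap.servers=<your-bootstrap-server-host>:443
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<client-id>" \
  password="<client-secret>";
```

The same four pieces of information — bootstrap server, security protocol, SASL mechanism, and the client ID/secret pair — are what any Kafka client library will ask for, whatever the language.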
And then for part three — again, a really important topic when it comes to Kafka is topics. This part three of the first lab teaches you how to create topics using the user interface of the service on console.redhat.com. You can see here, it tells you to head over to console.redhat.com, select your Kafka instance, and create a topic. So let me show you how to do that. I have my Kafka instance here; I just click on it — and by the way, when I say click on it, I mean click on the name. That brings us to the overview of our particular Kafka cluster, because you could have multiple clusters, right? So you need to click on them to get into the details for the individual cluster. You can see here in the dashboard that I'm not using any disk space, because I haven't written anything to my Kafka yet, and there are no metrics, because I haven't written anything to Kafka. We also have permissions over here. We're not going to do anything with those in this lab, but it is something interesting to maybe explore in the future. If we had applications connected, we would see consumer groups — the consumer groups that those applications are part of — but again, we have nothing connected right now. And then finally, what we're actually looking for as part of this guide is the topics section. So if I click the create topic button, I can put in the name here. The quick start here says to put in something like my-first-kafka-topic. I'm actually going to put in prices, because we're going to use a topic named prices in the next quick start. So I'll just name it prices. And then for partitions: naturally, when you're using Kafka, you could easily have a hundred partitions, or maybe just use three for your applications. Either way, I'm just going to leave it at one for the sake of this lab. We don't need multiple partitions for a demo app. And then finally, you can also set message retention settings.
In this case, I'll go with the defaults, which is to retain messages in the Kafka topic for a week, and I'll set the retention size to be unlimited — so in theory, this could use all the storage. And then finally, this is not something you can change in the service: we've decided that your topics will always have three replicas, and a minimum of two in sync. So when you write messages, at least two of the replicas have to be in sync whenever you make writes. So I'll click finish. And then, if we go back to the topic list, you can see I have a topic named prices, and it tells me that it has one partition and the retention size is unlimited. And if we go back to the instructions over here, I can click next and confirm that I did see my topic listed. And that's it — that's the first lab. So the first lab, like I said: take it at your own pace, don't rush, and make sure to ask any questions if anything's unclear. It will guide you through how to get your first Kafka instance on console.redhat.com, how to create a service account, and also how to create topics. Actually, I see an interesting question here. There's a question about using Avro schemas. We won't cover that in this lab, but the answer is yes: if you're using a Kafka client, like the Java clients that support Avro schemas, you can absolutely do that. And we also have the service registry over here that you can use to upload your schemas for use with the service. So we're not going to cover that today, but we do have a blog post coming very soon to developers.redhat.com that will go through it. Go to developers.redhat.com probably before the end of the week, and that blog post should be live. All right, so I am going to finish this first lab, and you can see it's marked as complete in my list here. And the second lab, or quick start, is going to be creating a connection between this OpenShift environment and the managed Kafka instance you created earlier, or you're in the process of creating right now.
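A quick aside on why the partition count you picked a moment ago matters: Kafka routes each keyed message to a partition by hashing the key, so the same key always lands in the same partition, which is what preserves per-key ordering. Here's a minimal sketch of that idea; note that real Kafka clients use a murmur2 hash, so the crc32 used here is just a stand-in to illustrate the principle, and the key names are made up:

```python
# Simplified illustration of how a message key maps to a partition.
# Real Kafka clients use a murmur2 hash; crc32 here is a stand-in
# that shows the same principle: same key -> same partition, always.
import zlib

def pick_partition(key: str, num_partitions: int) -> int:
    """Deterministically map a message key to one of the partitions."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# With a single partition (as in the 'prices' topic above),
# every key lands in partition 0.
assert pick_partition("EUR/USD", 1) == 0

# With three partitions, keys spread out but stay sticky: the same
# key is always routed to the same partition, so per-key order holds.
p = pick_partition("EUR/USD", 3)
assert p == pick_partition("EUR/USD", 3)
assert 0 <= p < 3
```

So a one-partition topic is fine for this demo, but it also means only one consumer in a group can read it at a time; more partitions are what let you fan work out across consumers.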
And this is — oh, it got restarted, here we go. And this is a kind of cool or neat concept in the service. We make it easy for you to effectively link your managed Kafka to your project, or your namespace, on an OpenShift or Kubernetes cluster, right? Which makes it easier for you to, say, inject the bootstrap server URL and the SASL settings into your Java app container, or your Python or Node app container. And to do that, we're going to show you how it can be done using some CLI tools that we include with the service. So let's get started with this lab. And by the way, you don't need to use your own CLI. This guide will show you how you can actually use a CLI in this OpenShift environment. So let's get started. The first thing we need to do is get access to the tools. Now, like I said, you can download the tools to your own local environment, but for the sake of this lab, I'd recommend just following the instructions here, because it's much easier for us to support you, and we know this is a working configuration — we don't know how it might work on your own machines. So you need the OpenShift CLI, which is very similar to the kubectl CLI, and the Red Hat OpenShift Application Services CLI, which is specifically designed to work with the services over here on console.redhat.com. It'll allow you to interact with the service registry, service accounts, and Kafka. So what we're going to do is deploy a container in this OpenShift environment that you can use to interact with these tools, or these command lines. You can see here, it instructs me to use the perspective switcher in OpenShift to make sure I'm in the developer view. From the developer view, I can click the add link, and this add link is like a nice wizard to make it easy to deploy things on OpenShift, or to follow our guides. So what we want to do is deploy a pre-built container image.
So you can click the container image option here, and the guide here on the right tells me the image I need to use. It's hosted on quay.io, and it is this rhoas tools image. Once I paste that in there, it'll validate that it can find the image and give me a nice green check mark. Once I'm happy with that, I can scroll down here and give it a name for the application group — you can leave this as is — and a name for the container — again, just leave this as is. And then finally, we'll leave the resources as is, and we can just uncheck this box here for creating a route to the application. It doesn't really matter if you don't uncheck it, but we don't need a public HTTP or HTTPS endpoint for this, so I'll uncheck that and click create. Oh, let me fix that. I was testing earlier and I already have the image deployed over here. This won't happen to any of you, so don't worry. So I will delete these tools. Okay, so I'll try that again now. Give me one moment. That's what happens when I don't clean up after myself when I do testing. There we go. All right, so once you follow this instruction on the right — and ignore what I just messed up there — you will be able to get a container running here in your OpenShift cluster. And this container, like I said, contains the CLI tools we're going to use. You can access the CLI tools by clicking on the container, and it'll expand this panel on the right. Then you can just click on the pod here under the resources tab. Once you click on the pod, it gives you details about the pod that's running — the container that's running. You can click on the terminal, and you get this nice web terminal where you can type in commands and use the CLI tools. So I'll click next here and confirm that I have the container running. And now we're going to actually do the connection I mentioned earlier.
So we're going to create a link between our OpenShift project and our Kafka instance running over here on console.redhat.com. Let me show you how to do that. The first thing you need to do is sign into the OpenShift cluster from this pod. Now, that sounds like, you know, inception, but basically this terminal, while it is running inside the cluster, needs to have permission to modify resources on the cluster. To do that, I'm just going to log in as my own user inside this pod. You can get a login command by going to the top right corner of your environment over here and clicking the copy login command link. You might get a pop-up like this asking you to log in; you can just click the link to log in using your Red Hat account, and it gives you a token. So you can copy that token and then paste it here into the terminal. Once you're in, you can see here that it says I have access to two projects and I currently have the dev project selected. So that's perfect — I'm now logged in. And just to be safe, you should always make sure you select the project: dev, like this, or stage, if that's the one you're using. You can see up here that I'm in the dev project, so I'll stick with that. Now that's the OpenShift command-line tool set up and logged in. The next thing you need to do is log in using the Application Services command line. That one is R-H-O-A-S — "rhoas". If I press enter here, you can see it has a login command, so I'm going to use that to log in. And to log in with that one, I can't just copy and paste the same command, naturally. I need to type rhoas login --token, like this. The instructions over here on the right explain all this, but you need to get a token from your Red Hat Cloud account, which is the one at console.redhat.com. So I'll click on that link. It brings me over to the Red Hat Hybrid Cloud Console, and I have a token manager here. So I will load a token and copy it.
And then I will paste it here into the terminal, and I'm now logged in — and I'll try and hide that token. So if I use the rhoas CLI now, I can do things like, over here, it tells me to try listing my Kafka instances. If I type rhoas kafka list — there you go — I can see my workshop Kafka that I created earlier. And that's the one I'm going to link into my OpenShift project in a moment. So now that I'm logged in with the OpenShift CLI and the rhoas CLI, let's do that linking. You can see on the right here in the instructions that all I need to do to connect, or link, my Kafka instance that I provisioned on console.redhat.com to my OpenShift project is use the rhoas cluster connect command. So I'm going to paste that in here, and it gives me an interactive prompt. If I had more than one Kafka cluster, or Kafka instance, running on Red Hat Cloud, I would just pick the one I want. Since I only have one, I'll press enter. Then it asks me to confirm a few things before it actually makes the connection. The first thing is it asks me to confirm the Kafka instance I've selected. It asks me to confirm the Kubernetes namespace, or OpenShift project, which is my dev project, as we can see here. And then finally, it tells me it's going to create a secret in the project, and that secret is going to hold my service account details. So I'll say yes and press enter. And I need that token I just got a moment ago again. So I'll just paste it again — oh, that's not what I want to paste. Let me go back and get a token. And now that I've pasted in that token, you can see it tells me that it has, in fact, created this KafkaConnection resource inside my OpenShift project. We can verify that, as the instructions are telling us on the right, by running this oc get kafkaconnection command. And you can see it has a KafkaConnection to my workshop instance. So I'll click next. And now we're going to show you how you can inspect this object.
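For reference, the whole linking flow just shown boils down to a short CLI session along these lines. The tokens, server URL, and project name below are placeholders, so treat this as a sketch rather than copy-paste material — the quick start gives you the real values:

```
# Sketch of the linking flow; <...> values are placeholders.
oc login --token=<openshift-token> --server=<api-server-url>
oc project <your-username>-dev      # make sure the right project is selected
rhoas login --token=<offline-token> # token from console.redhat.com
rhoas kafka list                    # confirm your instance is visible
rhoas cluster connect               # interactive: pick instance, confirm namespace
oc get kafkaconnection              # verify the connection resource was created
```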
So, this KafkaConnection object — we can do an oc describe on the KafkaConnection, and if we take a look, it contains interesting information, right? The first thing it contains is the bootstrap server host of our Kafka instance, because obviously applications need that to connect to it. So it contains that information for our applications, but it also tells our applications which secret in the OpenShift project contains our client ID and secret — that username and password from the service account earlier. So we have more or less all the information we need for an application to connect to our Kafka instance. And finally, you can see down here some other stuff that might be interesting: you can see the SASL mechanism is PLAIN and the security protocol is SASL_SSL. That's how our application will know which mechanism and protocol to use when it's attempting to connect to the Kafka cluster. And that's it for this second lab. So at this point, what I've done is: I've created a Kafka cluster over here on Red Hat's cloud; I've gotten an OpenShift environment for free from the Dev Sandbox; I have signed into that environment; and I have used some CLI tools to link my Kafka instance that's running on Red Hat's cloud to my OpenShift environment. And this doesn't have to be this particular environment — it can be any OpenShift environment, or any Kubernetes environment. So the next thing we want to do is actually deploy a Java application, connect it to our Kafka cluster, and produce some messages, right? And we want to consume messages as well. That's what the next quick start will have us do. So I'm going to click to start the next quick start. And this one, like I said, is going to guide us through developing — or sorry, not developing, but deploying — a pre-built Java application. So what I'm going to do is open a new tab, just so I don't lose my terminal over here.
And to deploy this pre-built Java application, you can use the same technique we used for the last container we deployed, the CLI tools container. But before I do that, it's interesting to notice that you can now see in our project topology view that I have this Kafka object listed. So my project is aware of my Kafka instance, I can connect things to it, and it's visually represented. All right, so to deploy our pre-built Java app, I'll go up to the Add menu in the top left. And just like before, I'll select Container Image, and the instructions will guide you to this and tell you to use this container image. Again, it's hosted on quay.io, and it's just called the Quarkus Kafka Quickstart. For this one, I'm just going to change the icon so I can identify it as a Quarkus application. And Quarkus is a Java framework developed by Red Hat; really nice for building Java applications. All right, so I'll select no application group here, I'll leave the name as quarkus-kafka-quickstart, and I will create a route to the application, because I want to actually access this application in my web browser. Now, one thing that might be helpful that isn't noted in the instructions: expand these advanced routing options, and you can see there is a Secure Route option. This basically makes sure the route uses HTTPS. You can select the termination type to be Edge, and you can also select the insecure traffic type to be Redirect, so any time you try to access your service over plain HTTP, you'll be redirected to HTTPS. It's just convenient, because plain HTTP is awkward in some browsers. So yeah, we'll go ahead and click Create. Okay, so again, I have some stuff lying around, so let me delete it. This won't happen to you; it's just happening to me because I was practicing earlier. So I'll delete those duplicate things left over from earlier.
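The secure-route options chosen in the UI correspond to a Route resource roughly like the one below. The route and service names are placeholders matching this demo; only the `tls` stanza is the point here.

```yaml
# Sketch of the Route that the "Secure Route" options produce:
# edge TLS termination, with plain-HTTP requests redirected to HTTPS.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: quarkus-kafka-quickstart   # placeholder name for this demo
spec:
  to:
    kind: Service
    name: quarkus-kafka-quickstart
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
```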
And now I should be able to create my application. There we go. So once you create the application, again, it shows up in your project topology view here, and you can see it's got this nice little Quarkus icon. If you click on it, just like with the tools container, you get this nice little fly-out with some details. And what we want to do is look at the logs for the application to check everything's healthy and working okay. So let's click on that and go to the logs. And if you look at the logs here, you can see that there's a warning being printed, right? The reason for that is that the application is complaining it can't connect to a Kafka bootstrap server at localhost:9092. So basically, while we have injected our Kafka connection into our project, we haven't told our Quarkus application how to use it yet. Our Quarkus application is just defaulting to localhost:9092, like you might do when you're developing locally on your own machine or environment. So let me show you how we can fix that. If we come back to our topology view, you can see, like I said, the application's running. And before I fix it, I'll show you that we can also click this Open URL link. If I open the URL and change the endpoint to /prices.html, like it tells me in the instructions, you can see that the application isn't really working; no prices are showing up, right? That's because the application couldn't connect to Kafka, which is going to be the data source for this application. So let's go ahead and fix that. In this quick start on the right, I've already created the prices topic and I've also already got the CLI tools set up, so I'm just going to skip to step four here, which is binding our Quarkus application to the Apache Kafka instance that you can see in the topology view. That binding is a service binding, as explained over here on the right.
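That localhost warning comes from the app's fallback configuration. A sketch of what such a default looks like in a Quarkus-style properties file is below; the property name is the conventional Quarkus one, shown here only to illustrate why the app tries localhost:9092 until a binding overrides it.

```properties
# Illustrative default: with no service binding injected, the app
# falls back to a local broker, which fails inside the cluster.
kafka.bootstrap.servers=localhost:9092
```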
There are some operators running in the background on this cluster that can recognize the Kafka resources and the binding resources we're going to use in a moment, and they will automatically do the injection; they'll inject the Kafka details into our Quarkus application's pod. So let's see how we can do that. If you look down here in the instructions, it's telling us to use the rhoas cluster bind command. That's similar to what we did earlier in the terminal, which was cluster connect. So let's go ahead and run that cluster bind over here in our terminal. It works quite similarly to the connect command: it asks us which instance we'd like to connect, but this time it also asks which application we want to connect with the Kafka cluster, right? I want to connect my Quarkus app, so I'll select that from the list. It then asks me to confirm: do I want to bind my workshop Kafka to my Quarkus application? I'll say yes, and it says it succeeded. If we go back... oh, that was quick. Okay, so sometimes, if you're quick enough, you'll see a new pod spinning up; I'm guessing I missed it, wasn't quick enough. A new pod spins up really quickly, because the container goes down and then comes back up. But if this worked, when I go over here to my pod logs again, we shouldn't be seeing any warnings. And yeah, this looks good, right? We're not seeing any warnings, and we can see that it says the Kafka producer network thread has connected. So everything's looking good here. Now we can go back over to the application I opened earlier. Again, if you're not sure how to get to this endpoint, you can find it by going to the topology view in your project, clicking Open URL, and going to /prices.html. And what we should see in a second is that the price will start changing. So you can see now, it's like a stock ticker, right?
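The round trip behind that ticker can be sketched with a toy stand-in: the app "produces" a price to a topic and then consumes it back for display. This uses an in-memory queue purely to show the shape of the flow; the real demo app uses a Kafka client against the prices topic instead.

```python
import queue
import random

# Toy stand-in for the demo's produce/consume round trip.
# A real app would use a Kafka producer/consumer, not a local queue.
prices_topic = queue.Queue()

def produce_price():
    """Generate a random price and 'produce' it to the topic."""
    price = random.randint(1, 100)
    prices_topic.put(price)
    return price

def consume_price():
    """'Consume' the next price from the topic for display."""
    return prices_topic.get()

sent = produce_price()
received = consume_price()
assert sent == received  # the same app reads back what it wrote
print(f"price: {received}")
```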
It updates every few seconds; I think it's every five seconds in this application. So basically what's happening is the Quarkus app generates a price, it places it on the Kafka topic named prices, and then the same application reads the price back and displays it in the web browser. So it's a fairly straightforward application, but it gives you a sense of how you can create a Kafka instance over here on console.redhat.com and link it to an application running on OpenShift, or Kubernetes, or anywhere, really. And now if we go back to our dashboard here and refresh... oh, let me refresh my browser. Yeah, there we go. So the metrics haven't updated yet, but if you give it a little bit of time, the total bytes here should start to show traffic flowing through. Now, because we're just passing numbers around, it's going to be very low, but you should start to see metrics if you leave this running for a while. Or you could change it to generate a number every few milliseconds, or every half second or something, or to generate JSON data. Well, that's it. We now have a project over here in OpenShift where our Kafka instance configuration is injected into the project, so it's accessible to other services, and services like the Java application we have here can be connected to it, and they can use that information to connect to the Kafka instance. And if you're wondering whether this is a bit magic: what you can do is go over here to the deployment details for this Quarkus application. If you view the YAML behind the scenes, you can see that it has injected this SERVICE_BINDING_ROOT environment variable, which tells the application there's a path named bindings. And if we scroll down again, it's essentially mounting the configuration information into the application as a volume. So you can see it's saying here that our Kafka connection details will be under the bindings/workshop folder.
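The bindings folder described above follows a simple convention: one file per property under a directory named after the binding. Here's a small sketch of how any app could read it, with the directory layout simulated in a temp dir; the file names and values are placeholders, not the exact set the operator mounts.

```python
import pathlib
import tempfile

# Sketch: read a mounted service binding, where each file under
# <root>/<binding-name>/ holds one property value.
def read_binding(root, name):
    d = pathlib.Path(root) / name
    return {f.name: f.read_text() for f in d.iterdir() if f.is_file()}

# Simulate what the operator mounts (placeholder names and values).
root = tempfile.mkdtemp()
binding = pathlib.Path(root, "workshop")
binding.mkdir()
(binding / "bootstrapServers").write_text("my-kafka.example.com:443")
(binding / "securityProtocol").write_text("SASL_SSL")

props = read_binding(root, "workshop")
print(props["bootstrapServers"])  # my-kafka.example.com:443
```

In the demo, a Quarkus extension does this reading automatically; the sketch just shows there's nothing magic about the mechanism.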
And there's a plug-in inside the Java app that just reads that, and then it knows how to connect to our Kafka instance. And if I click Next here in the guide, I'm finished up. So that's it; that's all three of the quick starts in this workshop. If you're still on the first quick start, or maybe on the second one, don't worry. Take your time, ask those questions; we're still here for the next 10 minutes at least to help you be successful in going through the guides. So I hope that was interesting. Do we have many questions in the chat? I wonder, let me take a look. I see Shruti has been asking some questions in chat with Bernard, which is great. And how about in the Slack channel, has anyone been busy over there? Awesome, okay, so there's a little bit of back and forth. But yeah, like I said, don't be shy; if you're not sure about anything, just feel free to ask. And like I said, your Kafka instance lasts for 48 hours, and your Developer Sandbox, I'm not sure if Bernard or Edson or Jennifer will correct me, but I think that lasts for almost a month, or is it a full month? So you can mess around with OpenShift for a month. And even though your Kafka instance is destroyed after 48 hours, you can spin up a new one. So if you don't get to complete the whole workshop by the end of this session, you can just try it again. You'll have access to that Google document with the instructions, and you can do this whenever you feel like it or whenever you have time. Yeah, I'll close my application now and see if there are any more questions. Bernard, thanks, yeah, 30 days for the Dev Sandbox. So this environment I've been playing around in, you get it for a full 30 days, which is pretty awesome. And if you go to developers.redhat.com, check out some of our blog posts. Bernard has written a great post about using Debezium.
I've written a post about using this same technique with Node.js, and I believe we'll have a similar blog post that demonstrates what I did today with .NET applications very soon. So if you're not necessarily a Java developer, there are solutions for other runtimes like .NET and Node.js, and we have blog posts to demonstrate that. Those blog posts are also accompanied by videos, so if you head over to the Red Hat Developers YouTube channel, you can see video demonstrations of those as well. Oh, so Dex is saying they cannot access the sandbox. Can anyone else confirm if you've had trouble accessing the sandbox? Just put it in a comment in the chat. For Dex, maybe myself or Bernard, or Jennifer or Edson, can ping someone internally at Red Hat to find out; if you send us a private message, maybe we can look into why you didn't get access. But yeah, you should have gotten a text that would allow you to get in there. So maybe send one of us a private message on the Slack and we can look into that for you. I'm not sure why you wouldn't have gotten the text, or maybe it's just delayed; maybe you'll get it in a minute or two. Okay, Dex, everyone's shy. But yeah, if anyone else is having trouble getting access to the environment, please do let us know, because we'd love for you all to be able to follow along and go through this. So Dex sent me a message; okay, we'll follow up on that after the session, Dex. And one other thing I actually haven't shown yet: over here on console.redhat.com, like I mentioned, there is a documentation link for each of the services. If you expand them, for Service Registry there's a documentation link, and for Streams for Apache Kafka there's also a documentation link. But we also have a Learning Resources section over here. The quick starts I demonstrated in the session today are naturally kind of tailored to work with the OpenShift sandbox that you're able to use.
If you come over to console.redhat.com and go to Learning Resources, there are other quick starts you can follow here that will allow you to learn how to use the service, maybe with your own local environment. And also, I believe it's not here yet, but we will have one for Service Registry; I know someone asked about Service Registry earlier. There will be a guide for Service Registry in here very soon, so keep an eye out for that as well. I see Chi here in the chat is saying that every time they click on the load token at console.redhat.com/token, they're getting kicked to the login page. We've seen that happen to someone before in a previous lab, and I think what helped was using a private browsing tab. So if you open a private browser window, that can help. Yeah, there you go, Bernard just commented there. So give that a go. I'm not sure what causes it, but it happened to someone in a lab before and private browsing fixed it. I'm guessing maybe if it's a new account, something is just... I don't know. If anyone's not sure what I'm talking about, it's the screen here where you get your token to authenticate the CLI. If you're still getting the issue with Incognito: when you log in with the Incognito window, does it ask you to accept the terms of service? Because usually the reason for this in the past was that someone hadn't accepted the terms of service, and for some reason it wasn't loading until they used Incognito. So that's strange; maybe we can follow up about that one in the Slack channel as well. And by the way, some of you saw my token today in the session. You can revoke your tokens by going to this page again; you can see there's an offline token API management page. So after the session today, I'm going to go over here and revoke all of these. So if you do happen to expose a token, that's how you can revoke it. So we're at the top of the hour.
I guess there's nothing else to cover; get your questions in now before we wrap up. And for anyone who is having trouble like Chi with the token, we can continue to follow up via Slack afterwards. So I don't know, Edson or Jennifer, do you have any closing slides we want to show, or will we wrap it up? Hey, Evan, no; the only information that I have left is to tell you that this trial Kafka service is available, and as soon as your Kafka workshop instance expires, you can create a new one. As Evan talked about, there are also a few other quick starts on the Developer Sandbox that you can do, like this kafkacat quick start, that you can also follow along with. But no, that should be it. Thank you so much, Evan; it was a wonderful workshop. And if anyone else has questions, please use the DevNation Slack channel. We hope you enjoyed the workshop.