Welcome, everyone, to this new session of the Red Hat OpenShift Streams for Apache Kafka workshop. Today we're going to walk you through a very cool workshop of our new product. The product is available for use today, you can try it, and it's a very exciting thing, as you're going to see. We're going to show you all the details. So what are we doing today? The agenda is: first a quick introduction to OpenShift Streams for Apache Kafka. After that I'll give you the workshop details; there is a guide, and maybe our team can paste the link to that guide in the chat so you can start using it today. Let me see if I can share the two links you're going to need for today. Then Bernard is going to walk you through the whole lab and workshop, and we are all here for Q&A. So let's get started with Red Hat OpenShift Streams for Apache Kafka. Red Hat has been expanding its open hybrid cloud vision over the last year. What this means is that we're trying to make sure we provide full-stack management, support, and a unified experience in the hybrid cloud environment, and the way we're doing this is by presenting a new set of cloud services. As you can see in the diagram, we have three groups of cloud services: the first is platform services, the second is application services, and the third is data services. If you look at the bottom of the diagram, you'll find the cloud providers. They are there because all our cloud services run on top of each of these cloud providers; we have joint offerings, so you can deploy the solutions you want on the cloud provider of your choice. Right now, under platform services, we have a few offerings for managed OpenShift, our main platform, and you can deploy those on the cloud provider of your choice.
What we're doing now, as we expand that vision, is providing a set of application and data services on top of those platform services. Those services are natively integrated with OpenShift, and the goal is to give you a unified platform for building cloud-native applications. Today we're going to spend all our time on the middle one there, Red Hat OpenShift Streams for Apache Kafka. So what is it? OpenShift Streams for Apache Kafka is a fully hosted and managed Kafka service for stream-based applications. It has been designed for development teams that want to incorporate streaming data, or stream processing, into applications that deliver real-time or digital experiences to their customers. There are a few benefits this solution brings you. The first is faster application velocity, because the solution is self-service: you get access to the environment immediately and can start developing right away. The second is that it has been designed to simplify the application lifecycle across hybrid cloud environments; basically, that means we made it very simple so that, independently of where your application lives, you can connect to OpenShift Streams for Apache Kafka today. And the third is that Apache Kafka as a technology by itself is not enough, and we understand that. So we're creating a Kafka ecosystem that will provide you with more and more cloud services to support building, deploying, and scaling the applications that deliver those real-time experiences or stream-based applications, okay? And those can be deployed wherever you want. From the diagram on the left, what I want you to remember is that at the core we have the Kafka cluster. We're providing you with a dedicated instance that's only for you.
There you'll have your brokers and as many topics as you want. But what we've included in the rest of the solution is a lot of features and functionality that make this a much better experience for you. We added metrics and monitoring, we added configuration management, and we have an application layer that provides a UI that makes self-management much easier. We provide a CLI that lets you interact very easily with the Kafka cluster, but if you don't want to use our CLI, we also expose the API, so you can connect your own IDE or your own CLI to the service. And we also provide something called service binding. The service binding is an operator; once you install that operator on an OpenShift cluster, it makes it very easy to connect a Kafka topic to a workload or application living on that OpenShift cluster. And all of this is hosted and managed by Red Hat, so you don't have to worry about the infrastructure; the only thing you have to worry about is learning and using Kafka. So, long names, short names. The official name of our product is Red Hat OpenShift Streams for Apache Kafka, but that's a very long name, so you're going to hear us say OpenShift Streams, which is the preferred shortened version of the product name. You will also hear us say RHOSAK, which is basically the acronym; we try not to do that, but it may happen. We also call it managed Kafka or just Kafka. So if you hear us use any of these, it's all the same solution; we're just trying to talk fast and not waste too much time on the seven-word name. Now, an overview of what you're going to do today. First, we want you to try our Kafka service, okay? The goals for this session are that you create and provision a fully managed Kafka instance.
Second, we're going to walk you through deploying a Quarkus-based Java application on an OpenShift cluster, and we'll give you access to that cluster through something we call the Developer Sandbox; Bernard will show you all about it, and the guide that I shared in the comments has more information about it. The third goal for today is to connect your Quarkus application to the Kafka topic on your Kafka instance. So what do you need today? First, you need to create a Red Hat account and have your credentials handy. Second, you need to request a Kafka instance on console.redhat.com. And third, you need to request an OpenShift cluster on developers.redhat.com. It sounds like a lot, but everything is very wizard-based, so it will be easy for you to create each of these environments. Links: the link on the left is our DevNation Slack channel. There are a lot of conversations happening in there about different workshops; we not only talk about Kafka, we talk about other technologies that Red Hat is pushing to the market, open source, our products. It's a very nice channel to be in. If you have questions, feedback, or anything you want to let us know, please go ahead, register on the Slack channel, and we'll be there to talk to you. The second link, on the right, has all the instructions, links, and useful commands for you to run this workshop. If you go to that guide, go directly to section four, where you can follow all the steps with us. One quick thing, and I'll hurry because I know we are short on time, but I just want to give you a graphic representation of what we're doing today, okay? On the left side, you're going to have the Developer Sandbox.
In the Developer Sandbox, you're going to get a Red Hat OpenShift Dedicated cluster. Once you have that cluster, you can do a few things in that environment. You can install your Quarkus app there; it can live and run there. The second thing you have there is something we call Quick Starts. Quick Starts are just guided, step-by-step instructions for you to follow so you can complete the workshop today, okay? They're very easy to follow. The third thing you have in the Developer Sandbox, or OpenShift cluster, is the CLI. We have a specific CLI for our product with all the commands you need to create Kafka instances and topics, and even to declare permissions or authentication for the Kafka topics. We're going to bind that Quarkus app to our topic. On the right side, you have your OpenShift Streams for Apache Kafka instance, your Kafka instance dedicated to you. In there, you're going to create a topic, and that topic is the one you're going to connect to your Quarkus app. Three Quick Starts are what we're going to complete today: first, creating your Kafka instance; second, getting everything you need to connect the OpenShift Dedicated cluster to your Kafka instance; and finally, connecting the app to the topic, okay? So now I'm going to pass it to Bernard, who is our person today to show a bit more of the workshop. Okay, thank you, Jennifer. So good morning, good afternoon, good evening, wherever you are; for me it's definitely good evening. So let's walk through the different steps of getting acquainted with our Kafka service. The only thing we're going to need today is a browser. What you see on my screen is a still-empty Chrome incognito window. The first thing you're going to do is create a Kafka instance.
For that, I go to console.redhat.com, which is our hybrid cloud console. It should ask me to log in; it doesn't, so let me log out so that it asks me to log in, because that's probably what you'll see. Okay, now it asks me to log in. If you don't have a Red Hat account yet, you'll have to go through this link here, register for a Red Hat account, fill in some fields, and then you'll be able to come back. I have a Red Hat account, so I can proceed immediately; let me enter my username and password. This brings me to what we call the hybrid cloud console, from which you can do a lot of things: you can manage OpenShift clusters, you can do things with RHEL, but we're interested in application services today. If I click on that tab, you'll see a number of things here, including "Try OpenShift Streams for Apache Kafka". The other way to get there is to click on Streams for Apache Kafka and then Kafka Instances. Normally I should not have an instance; I don't, so I can create one. An instance needs a name; let's call it devnation. It's running on AWS, there's no choice there, but I can choose the region, and because I'm in Belgium, I'm going to take Ireland, which is closest to me. Then multi-availability zone, which is not really a choice either; it's multi-AZ by default. Here on the right you see some instance information. This is an evaluation program, which means those Kafka instances stay up for two days, and they have some limitations on how many connections, how many partitions, and so on. But those limitations are liberal enough that you can do serious things with it if you want. The main limitation is that the instance stays up for 48 hours and after that it's destroyed.
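The choices the create-instance wizard walks through here can be sketched as a small helper. This is purely illustrative: the region list, field names, and the 48-hour lifetime mirror what the eval instance offered at the time of the workshop and are assumptions, not the service's real API.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the "create instance" wizard's inputs: a name,
# AWS as the (only) cloud provider, a region, and multi-AZ by default.
# Region codes and the 48-hour eval lifetime are assumptions for this demo.

EVAL_LIFETIME_HOURS = 48
REGIONS = {"us-east-1", "eu-west-1"}  # eu-west-1 is Ireland

def create_instance_request(name, region, created_at):
    if region not in REGIONS:
        raise ValueError(f"unsupported region: {region}")
    return {
        "name": name,
        "cloud_provider": "aws",     # no other choice in the wizard
        "region": region,
        "multi_az": True,            # multi-AZ is the default and only option
        "expires_at": created_at + timedelta(hours=EVAL_LIFETIME_HOURS),
    }

req = create_instance_request("devnation", "eu-west-1", datetime(2021, 6, 1))
print(req["expires_at"])  # 2021-06-03 00:00:00
```

The helper just makes the two-day expiry concrete: whatever you create now is gone 48 hours later.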
So let's create an instance. Now it's in creation pending; that should take a couple of minutes, and then it goes into the actual creation. In the meantime, I can go to the Developer Sandbox, which I'm going to do in another tab, and I already prepared the link: developers.redhat.com/developer-sandbox. The Developer Sandbox is like a big, shared OpenShift cluster, and with a Red Hat account you can get a share of that cluster: a couple of namespaces, for, I think, 60 days. You can use that space on the cluster to experiment with OpenShift and the like. If I go to that link, I see "Get started in the Sandbox", and then another link that says "Launch your Developer Sandbox for Red Hat OpenShift", and that should bring me to, nope, another one, "Start using your sandbox". My sandbox is provisioned, so I can start using it, and that should bring me to the login screen of the dev sandbox. I used it just two minutes ago, which is why it doesn't ask me to log in, but normally you'd see a login screen with one button that says DevSandbox; just click on that and you should be in. That brings us to the developer view. The first time, you'll probably see an empty topology. From here, we can go to our Quick Start. If I click on the Add link here, you see a number of panes; the one we're interested in today is "Build with guided documentation", and then "View all quick starts". We see a number of Quick Starts; there are more than just the ones for working with Streams for Apache Kafka, but we can filter them by typing Kafka. You'll see that every Quick Start is basically present twice; this is a temporary situation. The ones we're interested in today are the ones with this rocket icon. There are four of them: getting started, using kcat, connecting, and binding your Quarkus application.
They're in alphabetical, not logical, order, so we're going to start with the getting-started one. If I click on that pane, a window opens on the side which guides us through the Quick Start. We start by inspecting the Kafka instance, so let me quickly go back. But I see I might have an issue here, because it's still creating, and by now it should more or less be ready. Let's give it a minute more, because otherwise I'll have to resort to a pre-created instance I have, which would mean logging out and logging in again, since it's under another account; not a big deal, but still. This is maybe taking a little too long; let's refresh once more. Yeah, I think it's better if I move to my other account, because there I have it prepared already. So let's do that; I can close that window, and this one too. Oh no, there we go, it's ready. It took four minutes; that's normal, between four and five minutes. So I have my Kafka instance ready, but now I'm out of my Developer Sandbox, so I'm going to do that again: developers.redhat.com, Developer Sandbox. Okay, "Get started", same sequence here, "Start using your sandbox", okay. Back in my developer console, and my Quick Start is still open. The first thing we're going to do is have a look at our Kafka instance. If I go back to the cloud console window, you can see I have a Kafka instance called devnation. If I click on this kebab icon on the right, I can see some details about it: it's running on AWS in Ireland, it has an ID, I'm the owner, and under Connection you'll see a number of connection details. To connect to a Kafka instance, you need a bootstrap server, which is a URL.
This is the URL of my Kafka instance, which, by the way, I'm going to copy, because I'll need it in a couple of seconds, and paste into a text editor I have ready on the side. From here, I can create a service account. The Kafka instance is secured, which is normal for a cloud service; you don't want just anybody to be able to access it. It's secured through service accounts, and we use SASL/OAUTHBEARER or SASL/PLAIN as the authentication protocol. So from here I can create a service account; let's do that quickly and call it devnation as well. A service account has a client ID and a client secret, which you need to copy, especially the client secret, because you won't be able to get to it afterwards. So we'll copy both of them. This is actually more needed for the second Quick Start, which we're going to skip today; if you use kcat, you'll need that particular service account. In the binding Quick Start that we're going to do, a service account is created programmatically, but I'm still going to paste all this; you never know. Then I confirm that I've copied the client ID and the secret, and that gives me a service account, okay? If I now look at the Service Accounts window, you see I have a service account. That's the first step, and the second step: we created a service account. I didn't call it "my service account", I called it devnation, but that's just a name. The last step of the first Quick Start is about setting permissions. By default, it's not only secured by service accounts; we also use role-based access, and by default a service account without additional permissions can just describe a Kafka cluster and do nothing more, so you cannot start consuming or producing messages from the Kafka cluster.
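The three connection details collected in this step, bootstrap server, client ID, and client secret, are exactly what a Kafka client config needs. A minimal sketch, assuming SASL/PLAIN over TLS as described above; the property names follow the common librdkafka style, and all values are placeholders, not real endpoints or secrets.

```python
# Sketch: assemble the SASL/PLAIN connection properties a Kafka client
# expects from the details copied out of the console. All values below
# are placeholders for illustration only.

def kafka_client_config(bootstrap_server, client_id, client_secret):
    """Build a client config dict from managed-Kafka connection details."""
    return {
        "bootstrap.servers": bootstrap_server,
        "security.protocol": "SASL_SSL",   # the instance is TLS + SASL secured
        "sasl.mechanism": "PLAIN",         # SASL/OAUTHBEARER is the alternative
        "sasl.username": client_id,        # service-account client ID
        "sasl.password": client_secret,    # service-account client secret
    }

config = kafka_client_config(
    "my-instance.kafka.example.com:443",  # placeholder bootstrap server
    "srvc-acct-1234",                     # placeholder client ID
    "s3cr3t",                             # placeholder client secret
)
print(config["sasl.mechanism"])  # PLAIN
```

This is why the quick start insists you copy the client secret immediately: without it, the last two fields can never be filled in again and you have to create a new service account.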
For that, you need to set specific permissions, and that's done in the cloud console as well. If I click on the name of my Kafka instance, this opens a number of tabs here. So, okay, this is not fun; this should not happen. Okay, that seems better; let's go to Access. You see this is the default access, which is not a lot: you can describe a Kafka instance, describe consumer groups, and describe topics, so you can do almost nothing with that. To actually use our Kafka cluster, we're going to have to grant some additional permissions. We can do a number of things here; you can do it per service account, but to save us some time we're going to grant permissions to all accounts, meaning the service accounts I already have plus any I might create in the future, okay? So I'm going to grant permissions on topics, on all topics, using a wildcard here, which means all topics; I'll keep that set to "allow", and grant all permissions. And I'll do the same for consumer groups: yes, wildcard, allow, and again all permissions. That means I can produce and consume to all topics and use whatever consumer groups I want when I consume. Those new permissions are added to my access list, okay? And that's basically what is described in this step. Oh yes, I could do transactional IDs as well, but we don't use transactional IDs here, so that's more optional. So that's done as well, and I can go to the next step.
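The access rules being granted here behave like simple allow-list entries. This toy sketch illustrates the wildcard matching just configured in the console; real Kafka ACLs are richer (prefixed resources, deny rules, hosts), so treat the field names and logic as a simplified assumption.

```python
# Toy model of the permission entries set in the Access tab: each entry
# names a principal ("*" = all accounts), a resource type, a resource
# name ("*" = wildcard), and an operation ("all" = every operation).
# Simplified on purpose; real Kafka ACLs also support prefixes and denies.

def is_allowed(acls, principal, resource_type, resource_name, operation):
    for acl in acls:
        if (acl["principal"] in ("*", principal)
                and acl["resource_type"] == resource_type
                and acl["resource_name"] in ("*", resource_name)
                and acl["operation"] in ("all", operation)):
            return True
    return False

# The two rules granted in the demo: all accounts, all topics, all
# operations; and the same for consumer groups.
acls = [
    {"principal": "*", "resource_type": "topic",
     "resource_name": "*", "operation": "all"},
    {"principal": "*", "resource_type": "group",
     "resource_name": "*", "operation": "all"},
]

print(is_allowed(acls, "srvc-acct-1", "topic", "prices", "write"))    # True
print(is_allowed(acls, "srvc-acct-1", "cluster", "default", "alter")) # False
```

The second call shows why these wildcards are still bounded: nothing was granted on the cluster resource itself, only on topics and consumer groups.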
Oh, and this: if you want to keep that instance after today and maybe do other things with it, just keep in mind those are very loose permissions, so you might want to fine-tune them later. If this were a production system, you would definitely want to fine-tune them and set more explicit permissions per individual service account, but for what we want to do here, that's more than enough. Okay, so now I can create a topic. Creating a topic can be done from the UI as well. I'm going to do more or less what's described here, except that I'm going to name my topic "prices", because that's the name of the topic my Quarkus application expects, so I won't have to come back and create yet another topic later, okay? I go back to my UI, go to the Topics tab, click Create Topic, and I get a wizard. The topic has a name; I'll use "prices" here. Then you can choose the number of partitions. Partitions have everything to do with scalability: a topic consists of one or more partitions, and that can be hundreds if you really want to scale out to the max, but for this exercise one is okay, so we can leave it at one partition. Then the retention time; these are the defaults for the managed service. The retention time is a week, so everything that you produce to your Kafka topic will be there for a week, and the retention size is unlimited. That's fine for these Quick Starts; if you do real work with it, you might want to fine-tune this as well. A week is more than enough here, because my instance goes away after two days anyway, okay? So I'm going to keep that. And then something I cannot change today: every partition of every topic has three replicas, and a minimum of two in-sync replicas are needed to get an acknowledgement when I produce messages. Those are fixed values for OpenShift Streams.
If I click Finish and go back to my Topics pane, you see that I now have a topic called prices, and this concludes this part of the Quick Start; I think the complete Quick Start, in fact. Yes: "the new Kafka topic is listed in the topics table", so we can click Next. Normally, the next Quick Start is using kcat, but for the sake of time we're going to skip that one. What it does is deploy the tools image so you get access to kcat, which is a command-line tool for working with Kafka; you use it to connect to your Kafka instance, produce a message, and consume some messages. You can go through this in your own time later if you want, so I'm going to quickly click through it: next, through the last one, and next. That brings me to the next Quick Start, which I am going to go through: connecting your Red Hat OpenShift Streams for Apache Kafka instance to OpenShift. Okay. For that, we need to do a number of things, and we need a tool: the rhoas CLI, where rhoas stands for Red Hat OpenShift Application Services. Normally that's a tool you would install on your own laptop and use from your command line, but in these sessions we want to avoid you having to install anything yourself, so we're going to use a prepared image that you'll deploy on the Developer Sandbox; it has the rhoas CLI and other tools installed so we can use them from there. Okay. So, first step: you should check that you're in the dev project. You have two projects in the Developer Sandbox, one ending in "-dev" and another ending in "-stage"; it's the "-dev" one that I need. Yes, that's fine. Then, from the Add pane here in the developer perspective, I should be able to, whoa, there used to be, no, that's it.
There used to be a pane that allows you to add directly from an image; or is it this one? No, it's not Samples. Okay, this is not fully expected. Well, that's annoying. If we can't do that, is it because I'm in the wrong project? Because my view is a bit weird here as well; I should not see those other projects. Let me refresh this and see. Oh, I can see it from here. Well, this is a bit confusing, because this does not correspond to the user I'm normally logged in as, but that's fine. So you should see this screen, okay? And this is the pane I'm looking for: add from container images. Then I can paste an image name, and the name of the image is in the Quick Start; it's this one, okay? So let's do this. It's going to check that the image exists, and I can leave all the rest of the deployment settings as they are. I don't need a route, because I'm going to use a terminal directly in that container. So I create that deployment, and that should now deploy a container. It's downloading the image, which takes a couple of seconds, and from the moment the circle is dark blue, my container is deployed. From there, I should be able to get into a terminal: if I click on the circle and go to the pod, the details of that pod, the last tab here says Terminal, okay? That opens a terminal directly in the container. The container has the rhoas CLI, so I can run rhoas version, and it gives me the version of the rhoas CLI that's installed. We can now use the rhoas CLI from within this window, okay? So I've successfully accessed the CLI tool, and I can go to the next step.
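The rhoas version check above is the usual sanity step before scripting against a CLI inside a container. A small sketch of that pattern; "sh" stands in here for the rhoas binary, which only exists inside the tools image, so the tool name is an assumption for the example.

```python
import shutil
import subprocess

# Before driving a CLI from a script, verify it's actually on the PATH,
# the programmatic equivalent of running "rhoas version" in the terminal.
# "sh" is used as a stand-in tool that exists on any POSIX system.

def cli_available(tool: str) -> bool:
    return shutil.which(tool) is not None

if cli_available("sh"):
    out = subprocess.run(["sh", "-c", "echo ok"],
                         capture_output=True, text=True)
    print(out.stdout.strip())  # ok
```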
Here we're going to actually use the rhoas CLI to connect my Kafka instance to my OpenShift project. We're going to do something that might seem a little funky, because we have to juggle a bit with tokens. Normally, when you use the rhoas CLI on your laptop in a terminal, it does a browser-based login, because it uses OIDC and OAuth. From a terminal inside OpenShift we cannot do that, but we can also log in with a token, which I can obtain from console.redhat.com. So I open console.redhat.com in a new tab, go to the OpenShift token page, and if I click the "Load token" button, it gives me a token. That's the token we need. I copy that token, go back into my tools container, run rhoas login with that token pasted in, and I'm logged in as my user. Now I can verify that I can reach my instance: if I run rhoas kafka list, you can see my instance here. Okay, another thing we need is to be logged in to OpenShift itself, which doesn't happen automatically when you open a terminal in the container; that terminal is not logged in to OpenShift. So we need to do that as well, and for that we need another token, an OpenShift token, which we can get directly from OpenShift. If I go to where my name is in my OpenShift Dedicated console and click "Copy login command", that opens a second window; I log in and click "Display token", and I get this whole login command that I can copy. That's what's going to log me in to OpenShift, okay? I paste the command, and I'm logged in, but this is annoying, because it now says I'm in another project. Okay, I think my browser is confused here between my two accounts; I'm going to log out of my dev sandbox and log in again, because this is not normal. So let me log out, okay.
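Both tokens being juggled here are JWTs, so when a login mysteriously stops working (as happens later in this session) one useful trick is peeking at the unverified payload to check the expiry. The token below is fabricated for the example; never paste a real credential into scratch code.

```python
import base64
import json
import time

# A JWT is header.payload.signature, each part base64url-encoded. Decoding
# the payload (without verifying the signature!) is enough to inspect
# claims like "exp" when debugging a failed token login.

def jwt_payload(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake token for demonstration; real tokens come from the console.
fake_payload = base64.urlsafe_b64encode(
    json.dumps({"exp": 1000, "preferred_username": "demo"}).encode()
).decode().rstrip("=")
fake_token = f"header.{fake_payload}.signature"

claims = jwt_payload(fake_token)
print(claims["preferred_username"])   # demo
print(claims["exp"] < time.time())    # True: this token would be long expired
```

An expired "exp" claim is exactly the kind of thing that produces errors like the "offline user session not found" seen later on.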
Let me log in again and see which user I'm logged in as. Now I'm in the right one, so that's better. That means I'll have to redo the previous steps, because my topology is probably empty, okay? So let me very quickly redo the part with the tools image. Let me first get to my Quick Start so I can copy and paste everything. Where is it? Let's filter on Kafka here. Okay, not "binding" but "connecting"; that's the one. Step one: add from a container image. This is going to take like 20 seconds and I'll be on my way again, okay, good. And no route; let's do this. Yeah, it needs to download that image again. Okay, that was quick. So here's my pod and my terminal. I'm going to redo the rhoas login with the token; I still have my token window open, so I can re-copy that one and paste it, and I should be logged in. rhoas kafka list, just to check: yes, that's fine. So now I should be able to run the OpenShift login command, and everything should be fine. Paste, and there we go: I'm logged in, with my "-dev" namespace as the current project. That's already a lot better, okay? So, next: I did the oc login, yes. The next thing I can do is run the cluster connect command, which will create a custom resource called a KafkaConnection in that namespace; it contains all the details an application needs to connect to my Kafka instance. So I run rhoas cluster connect here. It asks what type of service I want: apart from managed Kafka, we also have a managed service registry, but today we're using Kafka. I'm going to connect to my devnation instance, that's the name of my Kafka instance, and yes, I want to continue. Then it requires the same token again, my offline console.redhat.com token; so that's this one, and I paste that token here, okay?
Now it's going to create a number of things. It creates a token secret, which holds this token, and a service account secret, which has the details of a service account; this rhoas command created a new service account for me, and it already has permissions, because all my service accounts have permissions, so I don't have to do that. Normally it now waits to create the KafkaConnection resource; in the meantime, I can indeed check under Service Accounts that I have a new service account. Let me refresh my window. Okay, there's the service account that was created a minute ago by my rhoas command. Hmm, this is not good: it printed an error, but also that the KafkaConnection resource definition has been created. Let me check: oc get kafkaconnection; I have one, called devnation, and I can get it as YAML. And this looks okay, so I'm not sure why it said there was an error. Yeah, that looks good, so let's continue. Okay, so now I'm ready to actually start using this custom resource to bind applications. You can inspect the KafkaConnection; I already did that. Oh yes, another way to do it is oc describe; let me try that: oc describe kafkaconnection. It should give me the same, but no, this is not good; it's missing a lot of things here. Apparently it cannot connect; normally I would see the bootstrap host and everything here. So something is definitely going wrong, but I don't know what. Let's see if I can bind again: rhoas cluster connect, the devnation Kafka instance, continue, yes. And now it says it already exists. So let's delete the connection and try again: devnation, yes, yes, continue. And now it's actually trying to connect, but there seems to be a bit of an issue here; that shouldn't take so long. Unless there is something, no, this says it's ready.
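For readers following along, it helps to see roughly what the KafkaConnection custom resource created by rhoas cluster connect looks like. The shape below is an approximation for illustration; field names, API version, and secret names are assumptions, so inspect the real one with oc get kafkaconnection as done above.

```python
import json

# Approximate sketch of the KafkaConnection CR that `rhoas cluster connect`
# creates in the namespace. All names and field values here are illustrative
# placeholders; the real resource is what `oc get kafkaconnection -o yaml`
# prints in the demo.

kafka_connection = {
    "apiVersion": "rhoas.redhat.com/v1alpha1",   # assumed API group/version
    "kind": "KafkaConnection",
    "metadata": {"name": "devnation", "namespace": "example-dev"},
    "spec": {
        "kafkaId": "example-instance-id",        # ID of the managed instance
        "accessTokenSecretName": "example-token-secret",
        "credentials": {
            "serviceAccountSecretName": "example-service-account-secret",
        },
    },
}

print(json.dumps(kafka_connection["metadata"], indent=2))
```

The point of the CR is exactly what the transcript says: it bundles the instance reference plus the two secrets (token and service account) so an application, or the service binding operator, can pick up everything it needs from the namespace.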
Let me check the instance ID: c6-something, yes, that's the one I have. Now, this is definitely not going as expected. That's annoying, because it breaks the rest of the workshop; we cannot continue without that custom resource. Let me delete it once again and retry with the -v flag; maybe that gives us a clue. Well, can I tell it to continue anyway? No, I cannot; I really need that resource. Well, I could do it from my local command line; I have rhoas here as well. Let me first run the OpenShift login command, which I can reuse, because I have a bit of a weird environment here. Okay, I paste that command; yes, I'm logged in here. Then rhoas login with the token, yes, and I should be able to run rhoas cluster connect and see if that works better here. Okay, now it says it cannot find my Kafka instance. Look at this, this is not the same. I have the impression that this whole thing is horribly confused; rhoas logout, I'm going to log out. I think we're a bit stuck here. Let me check the Kafka instance I had here: ID starting with c6 and ending in 24g. Yes, that's the correct one, so I should be able to connect to that one. Okay, let's see if there's a suggestion in the chat here. Let's try that one; paste; but I'm pretty sure that, whoa, "offline user session not found". Okay, could it be that my token is not valid anymore? Let me refresh that window. Okay, this is getting interesting. Let me find my user; that's the correct one here, yes. Okay, I got another token and logged in. Let's see: delete. Okay, I'm going to delete that KafkaConnection again and try another time. That doesn't work either.
Then there is something more serious going wrong with this whole setup, and I'm afraid we won't be able to continue. Connect service, good. No, that doesn't seem to work, that takes too long. In the meanwhile, I'm chatting with one of my colleagues and he says that he can recreate it. So apparently we have a general issue here, because Evan, my colleague, has the same issue at the moment. It seems we have a general issue with our service, because we're not able to really connect to it. I can try once more from the command line. So let's do a logout first. Even the logout doesn't work, interesting. And with my new token: name, definition, yeah, that's 24G, that's correct, yes. So let's do it from here then. Let's see which project I'm in. Yes, so let's delete the KafkaConnection, okay. And now we can do rhoas cluster connect, yes. Now it says it's looking for a different Kafka instance. Oh, with that I think I know how to fix it. If you give me just one second, I think I have to do rhoas kafka use here as well. It's probably still thinking that I'm using an older instance. So let's see if that works. Yay, you want to continue, yes. And let's see now if it manages to connect, because if that doesn't work then we have a general issue and we cannot really continue. So this binding thing, there seems to be an issue here which we're gonna have to investigate. Now what I could at least try to do, and I hope that will work, is to go back to the kafkacat quick start and quickly see if we can connect through kafkacat. Because this is definitely not gonna work. So let's go back here, back to my quick starts, and let's do the kafkacat one quickly just to make sure that we can connect. kafkacat is on the same image, so I don't have to do all this.
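The fix attempted here, making sure the CLI points at the right Kafka instance before connecting, looks roughly like this (a sketch; `<offline-token>` is the token copied from the web console):

```shell
# Log in with an offline token copied from the console
rhoas login --token <offline-token>

# List the available Kafka instances and select the one to bind
rhoas kafka list
rhoas kafka use

# Then retry the connection from the CLI
rhoas cluster connect
```

The `rhoas kafka use` step matters because `rhoas cluster connect` acts on the currently selected instance; if the CLI still has an older instance selected, it will try to bind to the wrong one.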
I should be able to use kafkacat right here in the terminal, yes. The tool is present, so I can use it. So I'm gonna set some environment variables with the things that I copied before. I'm gonna use my bootstrap server, and I've copied it here. Okay, I need the user, which is the client ID from the service account that I created before. It's good that I created that, no? So user, that's this one, paste, yes. And then export the password, which is the secret, paste. Yeah, that works. And now I can produce. The only thing that I'm gonna change is the topic; mine is called prices. So let's use prices here before I copy that command. Okay, so that should connect directly to my bootstrap server using SASL/PLAIN, with the username and password coming from my service account, in producer mode; that's the -P at the end. So if I hit enter, I should now be able to produce my first message, and a second message. So I'm now sending messages to my prices topic. That seems to work fairly well. So I get a couple of messages in, and now I should be able to consume with kafkacat as well. It's actually exactly the same command, except that at the end, sorry, I need to go like this, okay, at the end I need to change the -P into a -C for consumer mode. And that should connect to my Kafka instance and consume my three messages. Yes. So I can connect; it's not my Kafka instance that has a problem. It's apparently the operator that creates the KafkaConnection resource. The RHOAS operator apparently has an issue here, because it cannot really connect.
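The kafkacat commands used in this part of the demo look roughly like this (a sketch; the placeholder values come from your Kafka instance's connection page and the service account created earlier):

```shell
# Connection details copied from the Kafka instance and service account
export BOOTSTRAP_SERVER=<bootstrap-host:443>
export USER=<service-account-client-id>
export PASSWORD=<service-account-secret>

# Produce to the "prices" topic (-P = producer mode); type messages,
# one per line, then Ctrl-D to finish
kafkacat -t prices -b "$BOOTSTRAP_SERVER" \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=PLAIN \
  -X sasl.username="$USER" \
  -X sasl.password="$PASSWORD" \
  -P

# The same command with -C instead of -P consumes the messages back
kafkacat -t prices -b "$BOOTSTRAP_SERVER" \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=PLAIN \
  -X sasl.username="$USER" \
  -X sasl.password="$PASSWORD" \
  -C
```

Because this connects straight to the broker with the service-account credentials, it works even while the operator is broken, which is what isolates the failure to the binding machinery rather than the Kafka instance itself.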
So the actual binding I won't be able to demonstrate, because I need a working KafkaConnection for that, one that has all the metadata, which it apparently cannot get. So that means we may as well stop here, I think. I'm sorry for that. Normally you would see a Quarkus application that automatically binds to that Kafka instance and starts consuming and producing, but it seems we have an issue with the operator that handles all this at the moment. So that's it for me. Again, sorry for not being able to actually show what I wanted to show. That's a bit how things go with live demos, I think. So yeah, that's it for me. Yeah, don't worry, Bernard. Evan had the same issue when he was trying it, and DJ Maddy on the chat, I think he had an issue too. Yeah, I saw that as well. So let me see if I can bring more info about it. Not yet. I did ping some people to ask if they could take a look at the operator logs or its status in the sandbox environment, but I haven't heard back yet. But that's what it looks like: the operator is maybe having some trouble, like Bernard said. Yeah, that's probably true. I was trying myself as well, but the other way, because you can now bind the Kafka instance with your cluster directly from the sandbox by clicking Managed Services, and then you just select your Kafka instance. It's really nice, by the way. You don't have to create a Kafka Connect cluster anymore. So let me quickly show that. How do you do that? You can drag and drop. In Topology, you do Add, no, Add, sorry, Add. Yes. And then you scroll to Managed Services, right. And then you unlock it with your token. That's the token. You know what, I'm gonna do that in my other namespace here. Yeah, you can do that in yours. So you just unlock it with your token; the token page is open now, I still have my token here. And then you will see, it's magic: it will list your Kafka instances. So, next.
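The binding that couldn't be shown live is driven by a ServiceBinding resource that wires the KafkaConnection's credentials into the application. A rough sketch of what that resource might look like (the names `quarkus-app` and `my-kafka-instance` are hypothetical placeholders, and the API groups shown are assumptions based on the Service Binding Operator and the RHOAS operator):

```yaml
# Hypothetical ServiceBinding tying a Quarkus deployment to the
# KafkaConnection resource that "rhoas cluster connect" creates
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: quarkus-app-kafka-binding
spec:
  application:
    group: apps
    version: v1
    resource: deployments
    name: quarkus-app            # your application's Deployment
  services:
    - group: rhoas.redhat.com
      version: v1alpha1
      kind: KafkaConnection
      name: my-kafka-instance    # name of the KafkaConnection CR
```

When the operator is healthy, this is what injects the bootstrap server and service-account credentials into the application pod so a Quarkus app can start consuming and producing without hard-coded configuration.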
But maybe you will face the same bug here, I don't know. So what should normally happen now? Well, that's the first time I've done this. Yeah, I discovered it this week as well. Once you have done that, you can just create your Managed Services connector and it will find the cluster, the Kafka instance, for you; you just select it from a drop-down. But it looks like it has a bit of an issue here as well. I think the issue is that the custom resource is not resolving, not finishing. Yeah, it must be something like that, yes. Okay, so, yeah. Time out. So we definitely got connection problems here. Connection problems. But we have reported it. Yeah, DJ Maddy is also not able to bind it. So try again tomorrow, everyone, and it should probably work. We're going to report it. Yeah, actually, if the operator comes back to life properly, you shouldn't even need to run another command; it should just update automatically. Exactly. Okay. Anyway, Bernard, thank you so much. You're welcome. And thank you, Evan, for showing up and saying hello. And thank you, everyone. Don't forget to go to the Slack channel; that's the place to go where we can chat, share links and stuff like that. See you next time. Bye, everyone. Bye bye. Thank you. Bye.