And we're live! Hello, my friends from everywhere in the world, and welcome to this DevNation workshop. We're so glad to have you here today; I know it's good morning, afternoon or evening somewhere in the world. In this workshop we're going to be talking a bit about Kafka and about OpenShift Streams, and I'll be joined today by my colleagues and friends Jennifer, Evan and Bernard, whom you'll be seeing very shortly.

What is today's agenda? We're going to learn a bit about Quarkus and Kafka, and I'm going to show you a very quick demo of how exciting Quarkus and Kafka can be together. Then we're going to learn a bit about Red Hat OpenShift Streams for Apache Kafka, go through the workshop overview and login details, walk through the quick starts, and then you'll be able to perform the steps by yourself. We have a lot of time for Q&A, and last but not least, the next steps: what's next regarding Red Hat OpenShift Streams for Apache Kafka, and Quarkus as well.

But let's talk first about the technology. I want to show you how Quarkus and Kafka can be super exciting, so I'm going to skip the slides for now and show you your new favorite website on the internet, which is quarkus.io. Quarkus has new releases every other week, and you can see in this particular image that Quarkus was a 2021 award winner for best framework, or something along those lines, which means that a lot of people are paying attention to Quarkus and how amazing it can be. For today's experience we're going to start at the top right corner of the screen and click the "Start Coding" button, which starts our amazing experience.

Today we're going to use Quarkus version 1.13.7.Final, and the only boring part of this presentation is that I'll have to type the group ID, com.redhat.developers, and the artifact ID, which is going to be kafka-workshop. What else do I need? Some REST extensions, because everybody's building REST endpoints these days, so RESTEasy Reactive; I want RESTEasy Reactive because we want to be super performant when using Quarkus and Kafka, so I'm adding some reactivity here. I also want to show some JSON, maybe. And I need the Kafka stuff, so I'm going to add the SmallRye Reactive Messaging Kafka connector, because we want to go reactive all the way. I believe that's all I need for now. I have some options here: I can generate my application by downloading a zip file, pushing to GitHub, or sharing a link with you. I'm going to download the zip file for now.

The zip file is already on my machine, so let's move to the terminal. You'll notice that on this particular tab I already have a Kafka instance working on my machine. You'll be able to do the same later using Red Hat OpenShift Streams for Apache Kafka, but I'm just using localhost because I don't want to spoil the good surprise you'll have later. Let's see if I'm in the right folder.
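As an aside for readers following along: you can generate roughly the same project from the command line instead of the code.quarkus.io website, using the Quarkus Maven plugin. This is only a sketch, assuming the 1.13.7.Final plugin and that the extensions picked in the UI map to the artifact short names below; compare against what the generator gives you.

    # Sketch: generate roughly the same project from the command line
    # (assumes Quarkus 1.13.7.Final and these extension short names; verify against code.quarkus.io)
    mvn io.quarkus:quarkus-maven-plugin:1.13.7.Final:create \
        -DprojectGroupId=com.redhat.developers \
        -DprojectArtifactId=kafka-workshop \
        -Dextensions="resteasy-reactive,resteasy-reactive-jackson,smallrye-reactive-messaging-kafka"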
So let's unzip the file that I just downloaded, kafka-workshop. You can see that this is a plain old Maven Java project; we could be using Gradle as well, but I'm a Maven user because of bad choices. One of my favorite features of Quarkus is when the magic happens, so I'm going to type the magic words: mvn quarkus:dev. In this mode Quarkus starts in developer mode and listens for changes in the file system, so whenever I change, for example, a Java file or a properties file or any other kind of file, Quarkus triggers a restart, and it happens super fast; you barely even notice it. When I'm not live streaming like today, Quarkus usually takes about 200 milliseconds, sometimes 150 to 180 milliseconds, to restart, and I can tell you from my previous experience with Node.js that it's much faster than Node.js. Of course, something can always go wrong: Quarkus released a new version and I didn't download it before this demo, and that's why Maven is downloading the internet in front of your eyes. I think it's going to take just a few more seconds, but while we're waiting we can always open another tab, and now we're going to open my IDE. Still waiting... oh, now even my terminal is slow.

Let me see if I'm in the right folder. Yes, correct. Let's open my new favorite IDE these days, which is Visual Studio Code for Java, with the Java extension and the Quarkus extension, both provided by Red Hat. And I'm waiting... okay, you didn't see that because my VS Code, oh God, apparently got a new version today as well. Do I trust the authors? Well, I think I'll have to trust myself, and yes, I'm trusting myself. So let me show you the screen, with very big fonts. Here I have my project, and you'll notice that I even have a reactive endpoint here, which is generated by default. Let's go back to the browser and see if my project is already working: localhost:8080. Yes, my new cloud native application is ready, and I can even go to the hello RESTEasy Reactive endpoint just to see that, yes, "hello resteasy reactive" is working.

So what do I want to show you? I have a Kafka broker, so we're going to send some messages to that broker, and I'll be able to consume those messages in a reactive way, in real time, using Quarkus and Kafka. To do that, I'm going to create a new endpoint here, a new file, which is going to be MessageResource, or correctly of course MessageResource.java, and it's going to be a class. It's going to need a @Path, and yes, things are kind of slow today on my machine, but it's going to be fine: the path is going to be "message", and I can import the right class later. While I'm doing that I can create a new endpoint, a public method using a new reactive type called Multi, and then I'll be able to show you the messages on the fly. Let's keep it simple: it's going to return a Multi of String, and I'll call it messages. What else do I need here? Maybe another path... actually, it just needs to be a GET endpoint, and it's going to produce, let's do, server-sent events, and the SSE element type is going to be MediaType.TEXT_PLAIN. So it's a server-sent events endpoint, and each event is plain text. That's all I need for my endpoint. And what is the rest of the magic that has to happen?
Well, I just need to declare here another Multi, which is going to be injected automatically, or automagically if you prefer. I just add this @Channel annotation, say that the name is going to be "message-input", because that's the stream that I'm going to receive from Kafka, and return the Multi that I just injected. And that's it: that's all I need to do to create a reactive REST endpoint that receives messages from Kafka and shows them on the screen, in my browser, using server-sent events.

What else do I need to do? Well, I never told my application where my Kafka broker is. I can do that in my application.properties file, so I'll have to type some properties, which can be lengthy, but luckily our IDE is able to autocomplete most of them for us. So: kafka.bootstrap.servers, and this one is running on a different port than the default, and then I have to configure the messaging incoming channel. For the sake of time I'll just copy and paste those properties, because I don't want to entertain you with my typing; I want to entertain you with Quarkus and Kafka. So what did I have to do here? I just had to say that I created a new reactive stream using the Kafka connector, that the name of the topic I'm going to consume from is "message", and, because of the nature of Kafka, I can use many different serialization formats for my messages; I chose a String deserializer for these particular messages. And I think that's it: once I've done that, I'm ready to connect.

So if I go here to my browser, to localhost:8080/message, you see that nothing is happening: "stream with the name message-input... didn't find anything". I must have mistyped something. It should be good... yeah, sometimes these things happen while Quarkus is restarting, but once you notice, you can just ignore the error message, because here at the top of the screen you can see that my browser is already connected to my server-sent events endpoint and is waiting for something to come from Kafka.

So let's split the screen here, and I'm going to go back to my terminal. How do I send messages to my Kafka broker? I'm going to use the CLI tool called kafkacat. With kafkacat I can just specify that my broker is on this host and port, that I want to connect to the topic called "message", and that I want to connect as a producer. I just did that, and by pressing enter here I should be connected to my Kafka broker. Once connected, I can just type stuff and it will go through Kafka, and luckily and hopefully it will be received by my Quarkus application on the right side of the screen and show up on the fly. Let's start with a "hello". As I said hello, something should have happened here; let's see if it's working. And just because I mentioned it, let me stop and start Quarkus again, because, you know, in software development restarting your stuff is always a good choice. I've just sent something, "hello", and we should be good... connecting again, waiting for it... "Stream with the name message-input... available streams are..." That's weird, let's see what I'm missing. Did I type the wrong channel? message-input... oh, I never saved the file. Oh, come on, you know you have to save your files. I never saved the file, and that's why Quarkus is complaining. So it wasn't Quarkus's fault, it was my fault.
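For readers following along, here is a minimal sketch of what that resource and configuration roughly look like in a Quarkus 1.13 project with RESTEasy Reactive and SmallRye Reactive Messaging. The channel name ("message-input"), topic name ("message") and port are taken from the spoken demo, so treat them as assumptions, and note that annotation packages have moved between versions.

    // MessageResource.java - a sketch of the SSE endpoint described above
    // (names are assumptions from the demo; verify imports against your Quarkus version)
    package com.redhat.developers;

    import javax.inject.Inject;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // In older SmallRye versions @Channel lives in io.smallrye.reactive.messaging.annotations
    import org.eclipse.microprofile.reactive.messaging.Channel;
    // In newer Quarkus versions this annotation is named @RestStreamElementType
    import org.jboss.resteasy.reactive.RestSseElementType;

    import io.smallrye.mutiny.Multi;

    @Path("/message")
    public class MessageResource {

        @Inject
        @Channel("message-input")   // stream fed by the Kafka connector
        Multi<String> messages;

        @GET
        @Produces(MediaType.SERVER_SENT_EVENTS)
        @RestSseElementType(MediaType.TEXT_PLAIN)
        public Multi<String> stream() {
            // return messages.map(String::toUpperCase); // the live-reload variation shown later
            return messages;
        }
    }

And the application.properties wiring that the demo copy-pastes, again as a sketch:

    # application.properties - wiring the incoming channel to the local broker
    # (adjust the port if your local broker is not on the default, as in the demo)
    kafka.bootstrap.servers=localhost:9092
    mp.messaging.incoming.message-input.connector=smallrye-kafka
    mp.messaging.incoming.message-input.topic=message
    mp.messaging.incoming.message-input.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer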
Now it should be working. Okay, let's go back to the terminal and see: if I say hello... yes, "hello", it's working. I can add some French, "bonjour"; I can add some Portuguese or Spanish and say "hola". And since I'm running Quarkus in dev mode, I can go back here to my Visual Studio Code, and if I decide to go to the MessageResource, well, if you've ever used Java 8 streams before, you'll see that the Multi type is very similar: I can even map the messages. Suppose I want to show everything as uppercase right now: I can use a method reference, String::toUpperCase, and just by doing that I can go back to the browser and back to the terminal, refresh here, and if I send "hello" again... oops, I was too fast, so "hello" is already uppercase here. I can add "bonjour", I can say some Hawaiian, "aloha", I can say "hola", and you see that Quarkus restarts on the fly. You get this amazing experience, and it's super easy for you to play with Quarkus and Kafka, once you save the files you just edited. So thank you very much; I hope you enjoyed this particular demo, and remember, this is just the start, just the beginning. We have much more cool stuff to show you during the rest of this amazing workshop, and to help you with the rest of it, I'm inviting Jennifer to join us on the stage.

Hey, Edson. Wonderful, thank you so much, and thanks for that demo on Quarkus. I'm going to share my presentation now; give me a couple of minutes while I move the screens. Welcome, everyone, and thanks for joining us today. As Edson said, we're going to walk through this wonderful workshop; I hope you enjoy it as much as we enjoyed preparing it for you.

The first thing I want to introduce today is Red Hat OpenShift Streams for Apache Kafka. We launched this service during the Red Hat Summit event in April; it's currently a development preview release, and we are working towards the GA of the product. I just want to set a baseline for all of us here. Basically, Red Hat is expanding its open hybrid cloud technology portfolio with a new set of managed cloud services that include platform services, application services and data services. The main objective, the main benefit, is that we want to provide full-stack management and a unified experience; we're hoping to maximize the full value of Red Hat OpenShift as well as support everything across hybrid cloud environments. At the bottom you can see the platform services, which are our base: basically, we provide fully managed OpenShift on the cloud provider of your choice, and you can see here that we currently support a few options for our customers. On top of that, we have designed a new set of cloud services that are natively integrated with OpenShift, and the goal is to deliver a streamlined developer experience that allows not only developers but also the organization to build, deploy, manage and scale cloud native applications across hybrid environments.

But what is Red Hat OpenShift Streams for Apache Kafka, really?
That's what you're here for today: it's a fully hosted and managed Kafka service for stream-based applications. This service was designed for IT development teams that want to incorporate streaming data into applications to deliver experiences in real time, and also to improve application velocity.

So what's really in it for you, for any developer or organization that wants to start using this today? The first thing is faster application velocity. One of the things we give you is that, as you did in the last few minutes of this workshop, you can go to the links we gave you in the workshop guide and create your own Kafka instance, right? That's what we're trying to do: give you an environment where you can quickly go and start developing immediately, start making changes immediately. It's designed for developers. The second thing is that we're providing you a unified experience across all clouds. Our goal is to make it very easy, when you're using Kafka or any of our managed cloud services, to seamlessly connect applications across public and hybrid clouds, so you can actually respond to this hybrid cloud experience, or make it easier for you and your organization. And finally, we know that Kafka is not only Kafka: we are working very hard on delivering a Kafka ecosystem that really allows you to deliver these stream-based applications, so we are working on a curated set of cloud services that will simplify that work.

So what is in the proverbial box for this managed Kafka solution? The first thing is that we provide you with a single-tenant instance that is fully hosted and managed by Red Hat. As you can see in the box, the main thing we have is a Kafka cluster, but there is much more to a Kafka instance than that, and that's what we have built as an organization: you have metrics and monitoring, you have configuration management, and we have also worked very hard on the UI experience for developers. You have a UI, you have a CLI, and we are also exposing the APIs of the application so you can potentially connect your own CLI. We also included something very important that we will be talking about later today, which is the service binding: the Service Binding operator is responsible for, and allows you to control, the communication between an OpenShift cluster and your Kafka service. And on top of everything we added to this box, there's one more very important thing: we are committed to providing a 99.95 percent SLA, and we are also including 24x7 global premium support.

Besides that, to talk a little bit more about the CLI: we have something called the Red Hat OpenShift Application Services CLI. It's a very long name, we know; it's just called a CLI for now, and you will also hear our instructors calling it the rhoas CLI. Basically it's a command line interface that allows developers to manage application services, or anything that has to do with the cloud services, from a terminal, and it lets you run basic and advanced commands: you can create a Kafka instance,
you can create a service account, and you can create, update, delete, view and list topics, among many other things (a short sketch of these commands follows after this intro). Right now this version of the service, as I told you, is a development preview, and we are working towards the final GA version by the end of the year.

One thing about long names and short names: the reason I added this slide is that we know Red Hat OpenShift Streams for Apache Kafka, the official name, is a bit long, so sometimes you get tired, you want to move fast. You're going to hear us use things like RHOSAK, which is the acronym; it sounds kind of weird and doesn't flow that easily, so you might also hear people talking about "managed Kafka" or just "Kafka". Our preferred name is "OpenShift Streams", because it's very close to the product name, and it's the preferred short version. I just wanted to let you know that this might happen.

A couple of things before we keep going: make sure you review the documents the team is sharing with you in the chat, so you can keep creating your Kafka instances; we're going to walk you through that anyway later on. The second thing I wanted to say is that you can help us out with the research, you can help us out with the UI, and we can capture your ideas. This is your opportunity: if you go to this link here, or use the QR code, please come and share your feedback and your experience, and let us know what you think. This is very helpful for our UI and UX teams. One of the things we are putting a lot of effort into with these cloud services is making sure the developer experience is there, and that it's for you, so we would love to capture your feedback and your thoughts.

So what are we going to cover today? I hope I didn't take too much time with the intro; I know it's important, but I wanted to make sure we all level-set on what Kafka is and why you're here today. The agenda: we are going to get started with OpenShift Streams, and we also want you to get started with the Developer Sandbox for your operating environment, so you're going to have two environments, your Kafka environment and your OpenShift environment. We are going to walk you through how you can use kafkacat with OpenShift Streams, something similar to what Edson showed before, with a different use case of course. Then you are going to connect to OpenShift Streams with the rhoas CLI, we are going to help you bind OpenShift Streams to the OpenShift environment itself, you're going to deploy your Quarkus app, and at the end of the day you're going to bind your Quarkus app to your Kafka instance so you can actually send information back and forth.

So let's make sure you have everything you need for this workshop. First, make sure you have your Red Hat account and your credentials handy. Second, request a Kafka instance; you can also go and request a Developer Sandbox, which is pretty easy as well. All these instructions are in the workshop guide. With that, I'm going to hand over to Bernard, who's going to walk you through these steps, as will Evan. We are all here for questions, so please let us know how you're doing. Bernard, are you here with me?

Yes, I am. Do you hear me? Awesome, yes, I can hear you. Do you want to share, or do you want me to keep going? I can share, if I find the button. It's at the bottom, at the end, where you have your mute button? Yes, that's the one there... share. Do you see my screen? The walkthrough, yes, I can see it. So I have just a couple of slides. Thank you, Jennifer, for the intro.
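As a rough illustration of the kinds of commands Jennifer just listed, here is a sketch of a typical rhoas CLI session. The exact flags and prompts changed between releases of the development preview, so take these as approximate and check `rhoas --help` (and each sub-command's help) against your installed version; the instance and topic names are just the ones used later in this workshop.

    # Sketch of a typical rhoas session (flag names may vary by CLI release; check --help)
    rhoas login                                   # authenticate against cloud.redhat.com
    rhoas kafka create --name devnation           # provision a managed Kafka instance
    rhoas kafka list                              # list your instances
    rhoas service-account create                  # create a client ID / client secret pair
    rhoas kafka topic create --name my-first-kafka-topic
    rhoas kafka topic list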
So I have just two slides and then we will dive directly into the practical stuff. What are we going to do today? Jennifer already mentioned it: we basically have two environments. On the right side of this slide we have our Kafka service, which we asked you to provision already; if you didn't, you can do it together with me in a couple of minutes, because I'll walk you through it. So we're going to provision a Kafka service running on cloud.redhat.com, somewhere in the cloud, and then we have another environment through which we're going to interact with our Kafka service, and that's the Developer Sandbox.

For those who are not familiar with the Developer Sandbox: it's an OpenShift environment geared towards developers. Basically, if you have a Red Hat account ID you get it free of charge for, I think, 30 days, and then you can do it again. You get a piece of a shared OpenShift cluster all for yourself, and then you can do some PoCs and play around with an OpenShift environment. So this is really targeted at the initial development phases, for you to play around with things; it's not meant for production. As I said, after 30 days it goes away and you would have to restart your dev sandbox, but we think it's a very useful tool for getting people acquainted with OpenShift. That's what we're going to use to deploy a Quarkus app and so on, and then connect to the Kafka instance.

We're going to do this by guiding you through four quick starts, which are available from the dev sandbox, and which take us through four small exercises that have a logical sequence. We're going to start with getting started, meaning provisioning a Kafka instance, creating a service account and creating the first topic. Then we're going to use kafkacat to connect to our managed Kafka instance and produce and consume some messages. That's what I'm going to do; then I'm going to hand over to Evan, who's going to show you how to use the rhoas CLI to connect to your Apache Kafka instance, and finally how to bind a Quarkus application really easily to a managed Kafka instance. Those are the four quick starts that Evan and I are going to walk you through, and that's all I have for slides, so let's get that out of the way and dive directly into provisioning a Kafka instance.

We have a vanity URL for trying Kafka, which I'm going to type in here, and it brings me directly to a certain page on developers.redhat.com. On that page I see a nice red button saying "Create a Kafka instance", which I'm going to click. This will bring me to cloud.redhat.com, but first I have to log in with my Red Hat account ID, and then I need a password that I don't know by heart, so I need to look it up. There you go. This brings me to the Kafka instances view: we are on cloud.redhat.com, in the Application Services perspective, and there you see we have a Streams for Apache Kafka menu, and the first item of the menu is Kafka Instances. You can see I don't have a Kafka instance at the moment; to create one, I click the blue button, and this opens a pop-up which asks me for a Kafka instance name. It's just a name, so let's call it devnation. Now, I'm not sure if Jennifer already mentioned this, but
the service is currently in what we call development preview, so the choices on offer are a little bit restricted at the moment. As part of this development preview program, everybody with a Red Hat account ID can provision one Kafka instance, totally free of charge, no credit card required, with some limitations: the Kafka instance stays up for 48 hours, and there are some other limitations with regard to ingress, storage and things like that, which are enumerated on the right-hand side of this pop-up. But everybody is entitled to create one Kafka instance, so that's what I'm doing now. I've given the name; I cannot choose my cloud provider, it's going to run on Amazon; at this moment I also cannot choose my cloud region, so in the context of the development preview everything runs in Virginia; and the availability zone is multi by default. So the only thing you have to type in here is the name of your instance, then you click "Create instance", and you'll see the instance listed in the Kafka Instances view saying "creation pending". This normally takes a couple of minutes; it should not take more than three, max four.

While this is going, I can guide you through getting to the Developer Sandbox. For this I will open a new tab; the URL I'm going to is developers.redhat.com/developer-sandbox. If I go there, I come to the landing page for the Developer Sandbox, with a blue button here, "Get started in the Sandbox". Let's do that; it takes me to another page that says "Launch your Developer Sandbox for Red Hat OpenShift", which I'm going to click as well, and then you see the view changes and now says "Start using your sandbox". If you do that for the first time, it might take a couple of seconds to provision your share of that OpenShift cluster; I've done it before for my account, so I can start using it immediately. "Start using your sandbox" brings me to the OpenShift login screen, and to log in I click on the DevSandbox button here, which logs me into my dev sandbox. It uses the same ID that I used to log in to OpenShift Streams for Apache Kafka; my user ID ends in -devnation, and I have a namespace which is my username with -dev appended, plus another one ending in -stage, but we're going to do everything in the -dev namespace.

So now I'm on the Developer Sandbox, and what we're going to do is go through those four quick starts. To get to them you have this quick starts card; if you click "View all quick starts" you will see there are more quick starts than just the Apache Kafka ones, and I think they're roughly alphabetically ordered, so I need to find the first one, which is "Getting started with Red Hat OpenShift Streams for Apache Kafka". If I click to start the tour, it brings me to the first screen. But let's go back first to my OpenShift Streams view: my cluster is still being created, so let's give it, hopefully, not too much time... there we go, it's ready. So now I can start using that Kafka instance.

Let's go through the first quick start, where we're going to do three tasks: we're going to inspect our Kafka instance, we're going to create a service account, and then we're going to create a topic. To start a task, you click on the link.
I'm not going to read all of this; we've basically already done the first step, which is creating an instance, so if you did that together with me, your instance should be ready as well, and if you did it before, you already have one. So I can skip that first step. "Check your work: is the new Kafka instance listed in the instances table?" Yes, it is. "Is the state Ready?" Yes, it is. So I can go to the next step, which is creating a service account.

The managed Kafka instances are secured with service accounts, and a service account is basically a username and a password; we call them a client ID and a client secret. You need one of those to be able to connect to your Kafka instance, so the next thing to do is create a service account. I'm going to do that directly in my Streams view, so going back to the UI: on the line of my Kafka instance I click the three-dots icon on the right, and you will see that the middle menu item is "View connection information". Clicking on that opens a pop-up, and this pop-up first of all shows a very important thing: the bootstrap server, which we're going to need in order to connect. Obviously, to connect to Kafka I need a URL, which is called the bootstrap server URL, and that's this one. I'm going to copy it with this copy button and paste it in a text editor that I have ready, so I can find it again easily afterwards. That's done.

The next step is creating a service account, and for this I click the "Create service account" button, which opens another pop-up. A service account has a name as well; I'm going to call this one devnation, just to be consistent. I could give it a description, but I'll skip that for the moment. If I click create, this opens yet another dialog; in the meanwhile the service account is created, and you see two important items: the client ID and the client secret. What's really important is that I copy the values of both, especially the client secret, because once I close this window I cannot get back to it. So I copy both items, the client ID into my text editor, and the client secret as well. We will need both: we're going to use them as the username and password to connect to the Kafka instance later on. Once I've copied them, I can tick "I have copied the client ID and secret" and close that window.

Now I have a service account: if I close this pop-up as well and, in the left-side menu, go to Service Accounts, which is just beneath Kafka Instances, you will see my service account listed here. I could reset it, which creates a new secret for the same client ID, but I can't get the old secret back; and if I really forgot about it, I can always just delete the service account and create another one. In the context of the development preview program you are limited to two service accounts per Kafka instance; obviously when we go GA those limitations will be a lot more lenient. So now I have a service account, which was the next step I needed. Back to my quick start: I've done all this, I copied the generated client ID, so now we can go to the next step, which is creating a Kafka topic. "Check your work: did I save the bootstrap server endpoint?" Yes. "Did I save the client credentials?"
Yes. So I'm ready to continue, and the last step in this quick start is creating a Kafka topic. Without going too much into detail about how Kafka works: Kafka is a messaging and streaming platform, and basically you send messages to, and consume messages from, topics. Without topics your Kafka cluster is fairly useless: to send messages you send them to a topic, and you consume from a topic, so to do something useful with our Kafka instance we need to create one. This can be done from the UI as well. Just for completeness: you can do all the same things with the rhoas CLI, but since for most of you this is probably the first time you're interacting with OpenShift Streams, the UI is the easiest way to get started. Once you get more familiar, or if you prefer the command line, you can do the same thing with the rhoas CLI, and Evan is going to walk you through some of those functionalities; I'm going to do it through the UI.

So, back to the UI, back to my Kafka instance overview page. If I click on my Kafka instance, this opens another window which shows me the topics, and, as you can imagine, because I haven't created any topics yet, this overview is empty and presents me with a "Create topic" button. Clicking it guides me through four steps, and at every step I can fill in some things that are important for my topic.

The first thing is, obviously, a name, so I'm going to call this one my-first-kafka-topic. Then it asks me how many partitions I want. Again, without going too much into detail: in Kafka a topic can consist of anywhere from one to a very high number of partitions; it's not unusual to have topics with hundreds of partitions. Basically, a partition is a subset of a particular topic: when you send messages to a topic, the messages are divided over the partitions. If you have only one partition, everything goes to that one partition, but if you have 15 partitions, for instance, your messages are divided over those partitions, and when you consume you will consume from one or more partitions. The more partitions you have, the more you can scale out, especially on the consuming side, because you can have several consumers that each consume only a subset of the total number of partitions, so you can scale out your consumers very easily. For this demo I'm going to keep it to one partition. What I generally do for my demos is a default of 15 partitions, but if you have a very high load it's not unusual, as I said, to have topics with several hundred partitions. Here, one partition it is.

Then I need to make a couple of choices with regard to message retention. In its essence, Kafka is a distributed journal; that means a topic is, if you like, a journal of messages, and, as is typical for a journal, things stay there forever. That has a number of advantages: if I create a topic and start producing messages to it, and I then connect a consumer, the consumer can always start consuming from the beginning of the topic, even if that topic has been there for days or weeks or months, because the messages are stored as a journal; they are retained, they are persisted, potentially forever.
Now, "forever" is obviously relative: everything you store in IT needs storage, so in practice you might not want to retain things forever. Depending on your use case, you might say: if my consumers have not consumed messages within a day, those messages are not important anymore. If you're doing something like a stock ticker, showing stock prices, you're definitely not interested in yesterday's price, you're interested in the latest one, so it doesn't make a lot of sense to consume messages that are a day old. On a topic-by-topic basis you can configure how long you want to retain messages, and you can do that by time or by size. The defaults here are a retention time of a week and unlimited retention size, which fits the bill, because my Kafka cluster will disappear within two days anyway; when the cluster disappears, the storage disappears, so I lose everything regardless. So let's keep it at a week, and unlimited retention size is fine as well. I think as part of the developer program you get 60 gigs of storage, and I'm definitely not going to send 60 gigs of messages to my topic today, so I can keep it unlimited.

The last screen is more informative, in the sense that you cannot change those parameters, at least not as part of the development preview program; this has to do with replication. Another very nice feature of Apache Kafka is that it was designed with high availability in mind, and one of the high availability aspects is that topics are replicated. A Kafka instance is typically a number of brokers; with OpenShift Streams the number of brokers is three, so every Kafka instance has three brokers, which means every topic can be replicated three times, and that is the default here. Every topic has three replicas, so every message in every topic has three replicas, one on each broker node. That means that even if I lose two brokers, if something really bad happens on this hosted service and two brokers go down, my Kafka instance will still work fine, and I won't have any data loss, because I have three replicas; I can suffer the loss of two Kafka brokers. The minimum in-sync replicas setting matters for producers: when I produce a message, from the moment the Kafka broker has replicated that message to at least one other replica, it acknowledges the reception to the producer, and then replicates it to the third node in the background. That's what those numbers say: every topic has a replication factor of three and a minimum in-sync replicas of two, which governs acknowledgement to the producers; but I cannot change them, this is just informative.

So if I click finish, you can see I have my-first-kafka-topic here: one partition, seven days retention, unlimited retention size, just as I configured. I could create other topics, and later on we're going to create another one, but for now we'll stick with this one. If I go back to my quick start, I've basically done all of this, so I have my topic ready.
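For reference, the same topic could be created from the rhoas CLI instead of the topic wizard. A minimal sketch, assuming flags along these lines in the preview-era CLI (names may differ between releases, so check `rhoas kafka topic create --help`):

    # Sketch: CLI equivalent of the topic just created in the UI
    # (flag names are approximate for the development-preview CLI; verify with --help)
    rhoas kafka topic create \
        --name my-first-kafka-topic \
        --partitions 1 \
        --retention-ms 604800000      # 7 days, matching the default retention shown in the wizard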
If I click next, the check asks: "Is the new Kafka topic listed in the topics table?" Yes, it is, and I'm ready to continue. This is the end of the first quick start, where we created the Kafka instance, created the service account and created the first topic. Now we can start doing something with that Kafka instance, which leads us to the second quick start. You'll find the link right there: we ordered the quick starts in such a way that at the end of each one you have an immediate link to the next one in the logical sequence. The next quick start is about using kafkacat to connect to our Kafka instance and produce and consume some messages.

If I click on this one... it shows all green because I went through the quick start myself yesterday, to rehearse for today; that's why it shows I've already done all the steps, but I can do them again. The first thing is using kafkacat. So what is kafkacat? Kafkacat is a command line utility; it's not a Red Hat utility, it's an Apache Kafka community utility, and a very popular one. It's very simple to use, and it lets you test things out: if you have provisioned a Kafka instance somewhere, be it hosted like the one you're using now, on premise, or a local Docker container, and you want to verify that your Kafka instance is working as expected, kafkacat is a very useful utility. You can easily connect to a Kafka instance, list the topics you have, send messages to a topic and consume from a topic. It's a great tool when you're just getting started with a Kafka instance, to verify that everything is working as expected before you start doing more serious things; you can at least make sure you're good to go.

That's what we're going to do now. Normally you install kafkacat on your local machine, since it's a command line tool, and then you start working with it. For this workshop that would probably be a little bit difficult, because if things go wrong it would be hard to debug what's wrong on your machine. So, to make things easier for you, we're actually going to deploy a container on the dev sandbox that has kafkacat installed, and then we're going to use the terminal of the pod we deploy to interact with kafkacat.

To do that I go to the Developer perspective here, and I'm going to deploy an image. If you're on the Topology view, you normally see this card, "Container image"; clicking it lets you deploy a pod from an existing image. In the quick start you see the image reference here on quay.io, the rhoas tools image; this is the image we're going to use. It has kafkacat, and it has the rhoas CLI, so when Evan takes over from me he's going to use the same deployment. If I copy this and put it here, it validates that the image actually exists. Then I can keep the same runtime icon, I'm going to deploy it as a deployment, a regular OpenShift or Kubernetes deployment, and I don't need a route, because a route is there to access an application from outside the cluster and I'm just going to use this as a terminal, if you will. So I can just create it here.

Oh, I did this yesterday and forgot to delete my image stream, so I'm quickly going to do something you normally should not have to do, which is delete my image streams, because otherwise this conflicts... okay, I should be good to go now.
I think I'll have to do that again. Yes, okay, good. OpenShift, I don't need a route, create. That's better. Now you see that this container is being downloaded from quay.io and deployed, and when that blue circle becomes dark blue, the application is running. Now I can go into the application, actually into that pod, into the terminal, and start playing with kafkacat. To do so, I click here in the middle, which opens a detail screen for my deployment. On the Resources tab, which opens by default, you can see I have one pod for this deployment; clicking the link for that pod takes me to the pod detail screen, and the tab I'm interested in for now is Terminal. This opens a terminal inside that pod, so it's as if I had kafkacat installed locally and were using a local terminal, except the terminal is directly in the pod that has kafkacat. To verify that everything works as expected I can run a simple command: kafkacat -V shows that I do indeed have kafkacat installed, version 1.6.0, which is the version I expect. So I'm ready to start using kafkacat to connect to my Kafka instance.

Now I want to use kafkacat to connect to my hosted Kafka instance. For that, kafkacat obviously needs to know where my Kafka is running, so it needs the bootstrap URL I copied before, and it needs my service account client ID and client secret to act as username and password, so that I'm actually allowed to do something with my Kafka cluster and able to connect. To make my life easier, I'm going to set a couple of environment variables in my terminal so I can refer to them. The first one is the bootstrap server, which I copy from the text editor I have open on my other screen and paste; that's my bootstrap server. Kafkacat has the notion of a user and a password, which translate to the client ID and the client secret: the user is the client ID, the value from the service account that starts with srvc-acct ("service account", a bit abbreviated) followed by a generated string; and the password is the secret, which is this one. There we go. Now, when I run a kafkacat command, I can easily refer to these environment variables and reuse them. I've done that, so I can move on.

The first thing we're going to do is produce some messages, and then we're going to consume them. Basically, with kafkacat you can connect to a Kafka instance in producer mode and just type some text at the command line, and every line of text becomes a message that is sent to a particular topic. In the quick start you have this command: it's kafkacat; the -t is the topic you want to connect to, which is the my-first-kafka-topic topic I created before; the -b is for the bootstrap server, so I'm reusing my environment variable; and then there's the security protocol. The way the hosted Kafka, OpenShift Streams, is set up, it uses SASL over SSL, so the connection is encrypted, and it uses the PLAIN mechanism, which means a username and a password: the username is the client ID, the password is the client secret. I could also have used OAUTHBEARER, which OpenShift Streams supports as well; that's an OAuth authentication flow, but kafkacat doesn't support it at the moment, so I can't use that one, and I'm going to stick with PLAIN.
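For readers following along in their own tools pod, here is a sketch of the environment variables and the kafkacat commands used in this step and the next one. The variable names are just local conveniences from the demo, and the bootstrap server, client ID and client secret values are placeholders for your own instance and service account.

    # Sketch of the kafkacat session from the quick start (values are placeholders)
    export BOOTSTRAP_SERVER=<your-bootstrap-server-host:port>
    export USER=<your-client-id>          # starts with srvc-acct-...
    export PASSWORD=<your-client-secret>

    # Produce: every line you type becomes a message on the topic (Ctrl+C to stop)
    kafkacat -t my-first-kafka-topic -b "$BOOTSTRAP_SERVER" \
        -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN \
        -X sasl.username="$USER" -X sasl.password="$PASSWORD" -P

    # Consume: the same command with -C instead of -P reads the messages back
    kafkacat -t my-first-kafka-topic -b "$BOOTSTRAP_SERVER" \
        -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN \
        -X sasl.username="$USER" -X sasl.password="$PASSWORD" -C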
So that's an an uh, Oout Authentication flow, but Kafka cat doesn't support that at the moment. So I cannot use this one So I'm gonna stick to the plane Okay, I copy that commands paste So at the the dash P at the end means producer mode. So that means I'll be able to send some messages to my Topic so I do enter and then you just see he's My prompt is waiting for some input. So if I do something like my first Message and I do enter this will send a message to my topic If I don't see something an error message popping up I can be fairly confident that This one's okay. And my topic now has a first message that's That's a second message And a third one Message Then I could go on and on and on but that would become boring, right? So after two messages I'm satisfied. So I can do control C and enter to leave producer mode so now I have or I expect to have Three messages on my Kafka topic on my sole partition because remember I only have one partition So those three messages ended up on my sole partition. So I should be able to consume them now Uh, and that's the next step in the quick start. So Which basically I'm gonna consume the messages Which basically uses more or less. So Kafka got in consumer mode So the only difference only difference with the previous command is that in the end It says minus c instead of minus p. So the c stands for consumer So now the topic is the same The booster app and the security setup is the same. So I copy this whole command Paste it into my terminal paste I do enter And there you say he consumed my three messages Okay, first message second message third message and then you get an informative Message say I reached the end of the topic if now I would have another window with Kafka cut open I would continue to produce I could continue consume here as well So I would see almost in real time because that's the beauty of Kafka is That uh, Kafka is uh, is very very fast. So almost in real time I would see I would consume messages that I that I produced But I only have one window open. So, uh I can close my consumer here. I reach I uh I consumed all the messages that are are on that are available on the topic. So Let's do control c and that and I consume So yes, I could consume the messages and my consumer displayed my three messages And with that I'm at the end of Uh, the second quick start and this is the signal for Evan to take over So I'm gonna Uh, stop sharing my screen. I see that Evan is waiting so Uh, stop sharing. Yes, and Up to you Evan Yeah, let me uh get my sharing working here All right, it takes the main screen I guess So let me move stuff around You should be able to choose no Uh, yeah, it didn't give me an option. That was weird. Usually stuff like gives you the option. Yeah, I didn't get an option Let me let me try it again. Let me see if I can Change it now No, oh wait notes. Oh wow. It uses okay. I got it It uses the screen that I have active at that time There we go Now I have much more room. So that looks okay Yeah, there we go All right, so I'm going to take off like Bernard said where he left off. Um, so Bernard has shown us Some really cool basics about how to get started with the service Create topics and giving it also a really good overview of Kafka in general, which was fantastic. So I'm After been following along with Bernard. So I already as you can see have my Tools image running here and you can see I have my Kafka instance created And I also have a service account created. So I'm ready to dive in now to a new quick start. 
If I go over here to the Add menu... I actually have it open already, but just to show it one more time: if you want to follow along with this quick start, where we're going to show how to deploy an application, head to Quick Starts in your OpenShift environment, click "View all quick starts", and the one I'm going to show right now is "Connecting Red Hat OpenShift Streams for Apache Kafka to OpenShift". The first thing in this guide is getting the tools started; thankfully Bernard has already shown us how to do that, so we don't need to do it again, and I'll just go to the next step. After that it tells us to go over to the Topology view and into the pod, because we're going to be executing some commands with those tools.

I'm already logged in, and I can verify that by running rhoas kafka list; I should be able to see my Kafka instances, and I do, so I'm good to go there. I also need to make sure I'm logged into the OpenShift command line. I think Bernard might have already shown this, but you can get the login token up here in the top right by clicking "Copy login command", and once you have it, you just paste it in here. After that you should be able to view your projects, and you can see I can list mine, so I'm ready to go.

Now, to actually use the Kafka instance we created, mine over here is called workshop, with a Quarkus or Node or Python application, or whatever your favorite runtime is, we need to connect it to our OpenShift cluster here. To do that we use the rhoas cluster connect command, and it's explained over here on the right; it's just a single command. What it does is ask us which of our Kafka instances we want to connect. In these workshop and development preview accounts you only have one instance, so the choice is pretty easy, but in the future, when the service is GA and the limits go up, people will have more instances, and they can choose the specific instance they want to link into their project on OpenShift or Kubernetes. So I'll follow the prompts. Once again, this asks me for an offline token, which is the token I can get from cloud.redhat.com, so I'll copy the URL here, open it in my browser, get a token, which I do, and paste it in here. What that does is create a KafkaConnection custom resource in my OpenShift project, as you can see down here...

Evan, do you hear me? Sorry to interrupt, but it seems your font is a bit small for people to follow. Is that better? For me that's a lot better. Okay, if it's still too small just let me know, but hopefully this is better. So, as you can see, I ran the cluster connect command, and what that did was create what's known as a KafkaConnection, which is a custom resource in our cluster. We can then do an oc get on the KafkaConnection to verify it was created, even though the CLI tells us it was; if we check here, you can now see a KafkaConnection has been created, and it's named after the Kafka instance you connected. And if you describe it, for example with an oc describe on the KafkaConnection with the name of the connection, so workshop, it prints out lots of interesting information that's useful for our applications, right?
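Putting those steps together, the terminal side of this section roughly boils down to the following sketch. The instance name "workshop" comes from the demo, and the exact names of the resources your cluster prints may differ slightly.

    # Sketch of the connect-and-inspect flow (instance name "workshop" is from the demo)
    rhoas kafka list                              # confirm you are logged in and can see your instance
    oc login --token=<token> --server=<api-url>   # paste the "Copy login command" from the console
    oc projects                                   # verify you can see your sandbox project

    rhoas cluster connect                         # prompts for the instance and an offline token,
                                                  # then creates a KafkaConnection custom resource
    oc get kafkaconnection                        # verify the KafkaConnection exists
    oc describe kafkaconnection workshop          # bootstrap server, SASL mechanism, secret reference
    oc get secrets                                # the connect step also created a service-account
                                                  # secret holding the client ID and client secret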
If we scroll up here, we can see it tells us the SASL mechanism our applications should use when connecting to the instance, and it also tells us things like the bootstrap server host. There are also some credentials, and they live in a secret: the cloud services service account secret. For example, if we do an oc get secrets here, you can see that this new secret was created a few seconds ago, and if we get the secret itself and describe it, we can see it contains two properties: the client ID and the client secret. Bernard showed you how to create those using the UI earlier, and they are what your application uses to connect to the Kafka instance. So I'm happy that's working; I'll go on to the next step, which is inspecting it, which I just showed you, and I was quite happy that it worked well.

So that's it: we've connected our instance to OpenShift, and our applications will now be able to use what's known as a service binding to read this information in, connect to our Kafka instance, and produce and consume messages. That's what I'm going to show you in a moment. I'll go on to the next exercise and show you how to bind a Quarkus application to the Kafka instance we created earlier. I'm heading straight into the next tutorial, as you can see on the right, and it asks me to deploy a pre-built Quarkus application. You can see here there is a pre-built container image available on quay.io, so, similar to the tools image we deployed earlier, I'll just go ahead and deploy this Quarkus application. To do that I go over to the Add menu on the left, choose to deploy from a pre-built container image, and paste in the image URL here; it should validate, which it did. It is a Quarkus application, so I'll make sure it gets a nice Quarkus icon to identify it, and then we can leave everything at the defaults: leave it as a deployment, and we do want to expose a route to the application so we can access it with our web browser in a moment. The instructions tell us to leave all of that at the defaults and just click the Create button, and when we do, OpenShift starts to spin up a pod based on that container image. If I click on this new pod, the Quarkus one, you can see it spun up within a few seconds, and it also tells me here that I should open the prices endpoint.

I went a bit quick there, but I clicked this "Open URL" button, which opens the endpoint for this Quarkus application in a new browser window, and you can see it's up and running. If I go to the prices endpoint: what this application does is basically randomly generate prices, kind of like stock prices or the price of a product, right? You can see here that the last price is currently "not defined", because we haven't actually bound this application to that Kafka connection information yet. There's one more step we need to do so this application can connect to Kafka and read prices from our topic, and, again, it's really nice and easy thanks to the tooling created by the product team here. So let's go on to the next step, which explains how to do that. Essentially there are two things to do: I haven't yet created a prices topic, and I also need to bind my application. So, to create that topic,
I can go here to the Kafka Streams UI, or sorry, the OpenShift Streams UI, and select my Kafka instance. My internet connection is having trouble, I think; I'll refresh. All right, there we go. I'll pick my instance here, and it gives me the option to create a topic, so I'll create a topic named prices, since that's what the guide, or rather this application, expects. I'll use the defaults: just a single partition again, the default retention time and size, and again the sensible replication and in-sync replica values are applied by default, so I'm perfectly happy with that. Now I have my topic created; I just need to bind my Quarkus application to the Kafka connection so it can produce and consume those prices and we can display them in the application's UI.

I'm happy that I have the topic, so I'll click next. I also have the CLI tools already configured from the previous steps, so I can skip that section and scroll down to the instructions. You can see that, basically, I need to go to my rhoas tools pod, open it up, and use that rhoas cluster command again, but this time with the sub-command named bind. What the bind command does is ask us which particular instance, that is, our Kafka instance, we want to bind to which particular deployment running in our OpenShift project. Naturally, I want to bind the workshop instance I created to the Quarkus Kafka quick start application I just deployed, so I'll follow the prompts, and that creates a service binding. If we now do an oc get servicebinding, you can see this new service binding was just created, and it does some work for us automatically. If I had been quick I would have come back to this Topology view, because the Quarkus application just restarted; you didn't see it because I was inside that other pod. But if we go to the logs for this application and scroll up, you can see it restarted at 15:19:20, just a few moments ago, and if we scroll down in the logs you can see it's running successfully and says the producer is connected to a cluster, which is the Kafka cluster.

Now, if we refresh the UI: after a few seconds a timer in this application generates data, generates numbers, and you can see the first number came through, right? That number was produced to Kafka, and the application is also a consumer, so it reads those numbers back and prints them here in the UI in real time. And if we go back to the application logs, you can see that as soon as I refreshed the application in my browser it started to produce those prices, with things like the offsets and the partition information being printed here.
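Again for reference, the terminal side of this binding step roughly comes down to the commands below. These are run inside the rhoas tools pod; the deployment name of the quick start app is whatever OpenShift gave your container image deployment, so treat it as a placeholder, as are the flag names.

    # Sketch of the bind step (run inside the rhoas tools pod; names are placeholders)
    rhoas kafka topic create --name prices   # CLI alternative to creating the topic in the UI
    rhoas cluster bind                       # prompts for the Kafka instance and the target deployment
    oc get servicebinding                    # verify the ServiceBinding resource was created
    oc logs deployment/<your-quarkus-app>    # after the restart, the producer should report
                                             # that it is connected to the Kafka cluster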
So it's very straightforward: you use the OpenShift Streams UI to create Kafka instances and create your topics and partitions, and then you can use our CLI tooling, either directly in OpenShift, as you've seen here, or on your local machine, to create the bindings and connections between your OpenShift projects and your Kafka instance running on cloud.redhat.com. And that more or less concludes my section of the guide. Bernard introduced you really nicely to the product, and I've given you a quick introduction to binding and connecting your Kafka instances to your applications in OpenShift or Kubernetes. I think with that we're ready to wrap up, at least this piece of the event.

Sure, give me one second, I'll stop sharing my screen. Yes, we can do that. I hope you all enjoyed the workshop. So, basically, what's next? You can keep trying the service: you can request a Kafka instance at no cost and with zero commitment after you finish the workshop; it's exactly the same process you ran through today. Each instance lasts for about 48 hours, depending on when you created it, and then you can recreate another one, so you can keep using the service, as well as the sandbox: you can go back, request another sandbox for your OpenShift, and keep trying different use cases.

A couple of things I didn't put here, that's my fault, but give me one second and I'll share something with you: we're running two DevNation talks with our friend Edson in the coming days. On July 8th we're running a DevNation talk to demonstrate change data capture concepts, so how you can use Debezium and Kafka Connect, also using our managed Kafka service. And on July 22nd we're doing a use case with Knative on OpenShift, where you're also going to see how you can process events sourced from Apache Kafka in a serverless application. All the information will be sent out to you by email, but keep checking the newsletter and our notes, keep going to the website, and you'll see more things coming your way on Kafka. The service is going to be launched by the end of the year.

Finally, please remember, if you can, to give us feedback: sign up, here's the link, and give us any ideas, any feedback you have on how we can keep shaping the future of Red Hat products. We really want to know what you're thinking and what you enjoy, so let us know. And thank you for joining us today; we hope you enjoyed the workshop. If you have questions or anything, please reach out to us, and we will be happy to keep sharing content and information with you. I don't know if there are any questions left on the chat, but if not, thank you so much, everyone, for coming.

There was one, Jennifer: do you have links for those DevNation sessions?
We will be sharing them. You'll get an email after this workshop, in a few minutes, today or tomorrow, and the links for registering for those talks will be shared there. But I think Edson also has something he wanted to add. Edson?

Yeah, I think the registration links for the tech talks are not ready yet, but you can always check this particular URL I'm sharing on the chat, dn.dev/upcoming; there you'll always find the newest and greatest things we're going to present at DevNation. And if you registered for the Developer Sandbox and accepted our newsletter, you'll always get notifications when we have upcoming tech talks.

All right, any more questions? Oh, I can see DJ Maddie here saying that's a very important bookmark for DevNation. Yes, thank you, DJ Maddie. Okay, if you have any more questions you have five seconds to give it a try, or else we'll just wish everybody a great rest of the day or evening. Once again, thank you very much for coming. Thank you, Jennifer, thank you, Bernard, thank you, Evan, for this amazing DevNation workshop. I hope you enjoyed everything you've seen today, and don't forget to try the Developer Sandbox by yourself. See you soon!