So, hey, everyone. Today we're going to be talking about Apache Kafka and Kafka Streams, and hopefully have a bit of fun with a live demo and with you yourselves getting to run through building some applications. To get started, I'll dive into some quick slides that introduce the workshop and what we're using today, and get us set up for success. So, let me share my screen. You should be able to see my screen right now, and I'll go full screen with the slide deck here. Let me know in the chat, by the way, if you cannot see it, but otherwise I'll go ahead. All right. Like I said, today's workshop is part of our Kafka workshop series. We've done a few workshops already. Bernard, who's in the chat right now to help if you have any queries, delivered some of the previous labs where we introduced Kafka in general, and we also talked a little bit about using Kafka with a service registry, or schema registry. Today, we're going to talk about Kafka Streams. So, the first thing we want to do is dive in with a quick overview of Kafka and Kafka Streams. Looking at the Kafka ecosystem from Red Hat's perspective, we offer native integration with Kubernetes, or OpenShift, which is our enterprise Kubernetes. We have tools for data science and AI/ML. We also provide connectors via Camel. Like I said, we have the schema and service registry that Bernard has demoed before, and we also have API management. So there's a lot in this space from a Red Hat perspective. Today specifically, and I know there's a lot to look at on this slide, we're focusing on this part of Red Hat's portfolio, which is basically Apache Kafka. We provide a managed version of Apache Kafka called OpenShift Streams for Apache Kafka, and we'll be using that in this workshop today. The idea is that it's part of a suite of managed services that Red Hat offers. We offer a managed enterprise Kubernetes distribution, which is OpenShift, so you can run it on the major cloud vendors, and then we offer services on top of that, like the Apache Kafka service. OpenShift Streams for Apache Kafka is a fully hosted and managed Kafka service. As you can imagine, it's perfect for building stream-based applications, and it's designed for IT and dev teams that want to start building streaming and real-time applications but maybe don't want to manage Kafka themselves. For me personally, I don't know about anyone else here, but I'm more of a developer audience; I don't relish managing virtual machines, networking, and infrastructure. So services like these, to me at least, are very appealing. And of course, it has a high uptime SLA of 99.95%, and it just makes the experience of using Kafka and building applications with Kafka easier. I keep saying OpenShift Streams for Apache Kafka. There are lots of acronyms we use sometimes in the tech world at Red Hat, but whenever I say OpenShift Streams or OpenShift Streams for Apache Kafka, that's the product name I'm talking about: the managed Kafka product. Sometimes we also say managed Kafka, or just Kafka, or RHOSAK, but throughout this lab today we're going to be using the managed and hosted Kafka, which is OpenShift Streams. So, now that we have a little bit of background about the Red Hat OpenShift Streams for Apache Kafka product, let's get into the meat of the lab, which is using Kafka Streams.
Now, if you're not familiar, Kafka Streams is a client library for building applications and microservices where the input and output data are stored in Kafka clusters. It's designed to be used in Java or Scala applications as a client-side library, and you get the benefits of Kafka's server-side technology. What does that really mean? It basically means it's a library you can embed in a Java or Scala application to process the data that's stored in your Kafka clusters. You source the data from your Kafka cluster and you can write it back out to a Kafka cluster once you've performed transformations on the data, or other computation, or whatever workflow you need to perform on your data. So it's a really nice library that makes it easy to design high-throughput, real-time processing architectures in conjunction with your Kafka clusters. To visualize it, you can see here at the top we have input Kafka topics, or streams of data. Then, in the center, we have consumers. These consumers are Java applications that are reading in data from your Kafka cluster. That data is passed around by Kafka Streams within your application to process it, and then it's output to new streams or topics. So effectively a Kafka Streams application is generally acting as both a consumer and a producer. What's interesting about Kafka Streams as well is that it includes a state store, so it can store state in the application itself. For example, that could be a key-value store. If you're familiar with Kafka, data that you write to Kafka typically has a key and a value associated with it. When you process data using Kafka Streams, you can create these state stores in the application that you can use to query the data you're processing in real time. So not only are you flushing it out to Kafka as a system of record, you can also do interactive queries against the state you're building up in your application. What's interesting to note about this diagram as well is that each of these consumers and producers is composed of a number of nodes. The reason for that is that when we talk about Kafka Streams and the architecture of a Kafka Streams application, we generally refer to it as a topology, because it's composed of these nodes that implement your processing logic. For example, you could have a node that filters data, so only certain records get passed along to the next node. Then the next node may perform a mapping operation that transforms the incoming record in some way, and then it writes it out to a new stream. And that's what we're going to do today, actually. So today we have our managed Kafka cluster, and you can see your original data here stored in a Kafka topic. That data will be read in; there's a node here in the Kafka Streams application that's basically just sourcing data from a Kafka topic. It then passes it along to the first node in our stream processing architecture, which is a filter operation. Based on the criteria of that filter, we decide whether we pass the message along for further processing by a mapping function, which will transform it in some way. Then it gets sent to a sink processor, which ultimately writes it out to another topic. In today's specific example, we're going to keep it pretty high level. We have a stream of US dollar prices. These could be a real-time stock ticker or something.
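To make the topology idea concrete, here's a minimal sketch, using the Kafka Streams DSL, of a topology with a source node, a filter node, a mapping node, and a sink node. The topic names and the transformation logic are placeholders, not the code from today's lab; we'll look at the lab's own topology later on.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class ExampleTopology {

    // Builds a simple source -> filter -> mapValues -> sink topology.
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        builder
            // Source node: read records from an input topic (placeholder name)
            .stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
            // Filter node: only pass non-empty values along to the next node
            .filter((key, value) -> value != null && !value.isEmpty())
            // Map node: transform each value in some way (placeholder logic)
            .mapValues(value -> value.toUpperCase())
            // Sink node: write the transformed records out to an output topic
            .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

        return builder.build();
    }
}
```

Each DSL call adds one of the processing nodes described above, and the resulting Topology is what the Kafka Streams runtime executes against your cluster.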
It could be a traded commodity for all we care. Really, what we're doing is keeping track of a stream of prices, and we're using Kafka Streams to perform some logic on them. We're going to filter out certain prices and then map them, and that map is going to convert them to another currency. So the input topic is in US dollars and the output will be in euros. A fairly simple scenario, but it's a very good way to illustrate how Kafka Streams works and to get hands-on with it. The end result of what we're going to build today is a Java, specifically a Quarkus, application that has some HTTP endpoints, and a web application that communicates with those endpoints. We'll also hook that same application up to an OpenShift Streams for Apache Kafka cluster that will host the US dollars topic. We will then create a Kafka Streams application that consumes the US dollar entries, processes them in real time, and converts them into euro amounts. Those will then be consumed back into the original application and streamed to the UI. A very simple use case, but it's very effective at giving you a sense of how Kafka Streams works and how quick it actually is. We'll see that there's basically no delay; this is very much a real-time application. You'll see that when the US dollar prices are generated into the corresponding topic, they're immediately picked up by Kafka Streams and converted into euros in a couple of milliseconds. It's practically instantaneous. So, without further ado, let's get on to the actual workshop itself. Today, you're going to create and use our managed Kafka service, which is going to provide you with a Kafka cluster. You'll create some topics and manage the access controls for those topics. Then you'll deploy the Quarkus-based Java application that I talked about previously: that's the one that exposes the HTTP API. And then you'll deploy another application that uses the Kafka Streams domain-specific language to process the records in your Kafka cluster. What you're going to need is a Red Hat account; the workshop will guide you through creating one if you don't have one already. You'll then need to go to console.redhat.com and create a Kafka instance. After that, we'll access the OpenShift Developer Sandbox and its hosted online IDE, and actually deploy these applications and modify them in real time. There are some resources, and I know Bernard is going to paste these in the chat, so you don't need to try to copy the links from the slides. What you'll want to do first, and most importantly, is go to the first link here. Number one is the actual workshop content, so make sure you get to that first link and follow the instructions. I'll be going through them on screen as well, so even if you can't follow along yourself or you have trouble following along, myself and Bernard are here to answer questions. Secondly, we have a link to our Slack channel, so be sure to click on that. If you use Slack, you can jump in and ask us questions there, both during and after the session. There'll be people in that channel, so it gives you a line to some Red Hatters. And finally, the third link here is a workshop guide. You should have received this already by email, but it's a quick overview of what we're doing today and it links back to that first link.
So yeah, the most important thing here is to get that first link and follow along with the instructions there. What's next is that I'm going to actually follow the lab myself on screen so you can see how it's done. To get started, like I said, we need to go to this link here, the link to the Red Hat Scholars page. I think Bernard has probably put it in the chat; if I go through the comments... yep, there we go. So you should all have access to that. Once you open it, it'll bring you here, and you can see there are four high-level steps to this workshop. The first page is just a quick introduction. It covers what I've already mentioned, so if you want, you can skip it; it's not super important. The next page guides you through provisioning a Kafka cluster using our managed service. To do that, you need to go to console.redhat.com and log in, so I can show you that now. I'll log out because I'm already logged in. If you go to console.redhat.com you'll be presented with a login screen, just like with any typical online service, and if you don't have an account, there's a nice big register-for-an-account link down at the bottom. Go ahead and click that link if you don't have an account. If you're like me and you already have an account, you can go ahead and log in. I need my password, so let me get my password. There we go. All right. Once you're logged in, you'll be brought to the homepage of the Red Hat Hybrid Cloud Console. There's a lot of stuff here that you don't really need to worry about for this lab, but if you're interested in exploring afterwards, you certainly should; you can get access to OpenShift, Ansible Automation, and other services we offer. For today, though, you can just click on the apps and services link up at the top, and under the Application Services section you can find Streams for Apache Kafka. When you go to Streams for Apache Kafka, this is where you can create a Kafka cluster, or Kafka instance as we say in the UI. Creating a Kafka instance is very straightforward. There's a big blue button; you can't miss it. So go ahead and click that button, and basically you just fill in this form. You give your Kafka cluster a name, then you choose a cloud provider. Since we're all using evaluation accounts here, because we haven't actually subscribed to the service, we're only provided with AWS, and you're provided with two regions. I think US East (N. Virginia) will be the best option for most people in terms of capacity, so select that, but if you do have trouble, maybe try selecting a different region. We can only choose multi-AZ as well. And of course, over here on the right, you can see some of the limits that apply to your Kafka cluster. For example, you can have up to 500 partitions, 500 connections, and approximately a terabyte of storage. And I say "only", but that's obviously quite a lot; it's quite generous, so you won't have any trouble running evaluation applications on this cluster. Once you complete that flow and submit the form, your Kafka instance will start to provision. That usually takes, I find, about three minutes. Since we're all here at the same time, you could maybe expect it to take a little longer, but I'd say three to five minutes. While that process is ongoing, you can go back to the instructions.
And I think we've actually said this in the instructions: once your Kafka instance is ready, you can move on to the next step, which is managing access and role-based access controls. All our Kafka clusters are secure by default; connections to them have to be secured. So to actually connect to a cluster, you need to use SASL and you need to connect over an SSL connection. When you use SASL, of course, you need to provide some sort of authentication, right? You need to provide something like a username and password. In the case of our service, that's done using service accounts. So, while your cluster is being created, head back over to the console.redhat.com UI, and on the left, above the Streams for Apache Kafka section, you can find Service Accounts. A service account will provide you with a username and password, or as we call them, a client ID and a client secret, that can be used to authenticate to your Apache Kafka cluster. Creating a service account is, again, fairly straightforward: you just click the create service account button and give it a description. I'll call mine "workshop". Let's go ahead and click create. Once you do that, you get a nice big dialog that explains that you connect to your Kafka instance using this client ID and secret. Make sure to copy these down, because as the UI says, the client secret won't be displayed again. If you lose the client secret, you have to rotate the secret, which means any applications you have running would need to be updated with the new secret. So keep it safe and secure. What I'm going to do for the purpose of this lab is copy the client ID, put it in a note here in my VS Code, and copy the client secret and do the same thing. Once I've done that, I'll go ahead and acknowledge that I copied them and close the dialog. At this point, you can come back to your Kafka instances screen, and hopefully it's ready; if it's not, just wait another minute or two and you'll be ready to move on to the next step of the lab, which is to further manage access. Before I go into that too deeply, I want to show you one thing: when you're on this screen with your Kafka instances listed, if you click these three dots over here to get a dropdown menu, you can view the cluster details. For example, you can see it's running in AWS in North Virginia, and you can even see the ID of the cluster and when it was created. But what's really important here is the connection information. The connection information includes the bootstrap server URL. If you've used Kafka before, you'll be familiar with this term; it's the URL you're going to provide to your applications to connect to the Kafka cluster, so this is basically the gateway to your Kafka cluster. Service accounts: I've already covered this, but from this menu you can get access to the create service account dialog as well. You won't need to do that if you've already created a service account. Then, if we scroll down a little further, you can see that the authentication method listed is SASL OAUTHBEARER, which is what we'll use in our applications today; that also requires us to provide this token endpoint URL for the OAuth server. You can also authenticate using SASL PLAIN if you like, but today we're going to stick with the OAUTHBEARER option in our applications.
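For reference, when a client connects with the OAUTHBEARER option, the Kafka client properties typically look something like the sketch below, which assumes the Strimzi OAuth client library is available to handle the token exchange. The lab's Quarkus applications wire these same values in through application.properties and environment variables, so treat this as an illustration of the moving parts rather than the exact configuration we'll type in:

```properties
# Illustrative SASL/OAUTHBEARER client configuration; all values are placeholders
bootstrap.servers=<your-bootstrap-server-url>
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.client.id="<client-id>" \
  oauth.client.secret="<client-secret>" \
  oauth.token.endpoint.uri="<token-endpoint-uri>";
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
```

The client ID, client secret, and token endpoint URI are exactly the three values you just copied from the service account dialog and the connection screen, plus the bootstrap server URL.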
You're going to need to come back to this connection information screen in a moment to copy these values and plug them into the applications. If you want, you can just copy them while you're here. So I'm going to put the bootstrap URL and the OAuth server URL here in my notes again, so I don't lose them, and I'm happy now that I have that. Hopefully, by the time you get through those steps, your Kafka instance is ready and you can move on to the next step in the lab, which is setting permissions for your service account. Like I mentioned, we try to keep the service as secure as possible, so by default each service account is only able to connect to the Kafka instance and basically list its metadata. It can list things like the topics and see the broker nodes, but it can't actually read or write or do any sort of write operation to the cluster. So we need to make sure we have permission to actually perform writes to the cluster for the lab today. To do that, the instructions tell us to go to our Kafka instance, click on its name, then go to the Access tab and use the manage access section to give our service account the ability to read and write from topics, and also give its consumer groups access to consume. So we go back to the UI here at console.redhat.com and select the Kafka instance by clicking on its name. Once we're in here, we can see all sorts of information, such as the number of topics, partitions, and consumer groups. Now, I created this cluster just yesterday evening and I haven't done anything with it, so it's all zero right now because I'm not actually using it, but that will change throughout the lab. I'll go ahead now to the Access tab, and you can see here I can find the manage access button. I'll click that, and I need to select my service account. I get a list here, I can scroll down, and here's the workshop service account that I created just a minute ago. Go ahead and select that, click next, and I can use this assign permission section to provide it with the permissions I need. I'll click on add permission, and this gives me a new line in the permissions UI where I can set the resource type to topic. I want the name of the topic to be a wildcard, so I enter the star here, which means it will apply to all topics in the cluster. I'll set the permission to allow and all. Effectively, I'm giving this service account access to read, write, update, whatever it needs to do, on any topic in the cluster. I'll do the same thing again, except this time for consumer groups: star again, allow, all. So basically we're setting things up so our service account will be able to consume from and produce to whatever topics we need for this lab. Now, naturally, in a production scenario you probably aren't going to use a wildcard here; you're not going to use star. You might use a specific naming convention, so you could use the "starts with" option here, for example, or you can lock it down to specific topics if you need to. But for the lab, let's just go with star to make it easy and avoid little mistakes and typos if we mistype a topic name. I'll go ahead and click save, and those permissions will be applied to the service account for this Kafka cluster. And now we have managed our access controls.
Now, I know I'm moving quickly. Don't worry, you don't need to keep up with my pace; you can take this at your own pace. Bernard and I are here, and Bernard especially, since he's in the chat, will answer your questions if you miss anything. I just want to make sure you can see the working application by the end, so that's why I'm going through it here. The next step is to create some Kafka topics. For this lab we need two topics; you can see here in the instructions that we need the US dollar prices topic and the euro prices topic. Creating topics is just as easy as, or even easier than, managing permissions. To do that, you can go back to console.redhat.com, and if you're on your Kafka instances page, you can again just click on the name of the instance. This brings us back to our dashboard, and at the top we can select the Topics tab. Now, you should see a message like this saying you have no topics yet, unless you've been playing around with the service already, but most of you will probably see this message. It presents you with a nice blue button to create a topic. The most important thing right now for this lab is that you name it correctly, so make sure you put in the US dollar prices topic name, USD prices, and then click next. You'll be asked how many partitions this topic should have. For this lab it doesn't matter; we can just go with one. Of course, you're free to scale up if you're doing other workloads, but one is more than fine for this lab. Then, in terms of message retention, we can configure that if we like as well, but we're just going to go with the defaults, which are a retention time of one week and an unlimited retention size. That means messages in the Kafka topic will be removed after a week, and with the unlimited size option we could potentially grow to fill up the entire cluster. Click next. Finally, we get this replicas page. This is non-configurable: all of your data is replicated by the service by default, which means there are three replicas of your data, and we have a minimum in-sync replica count of two. That means any time you produce data into the Kafka cluster that's managed by the service, it needs to be written and confirmed to at least two replicas, or two brokers, to provide the high availability guarantee you'd like from Apache Kafka. I'll click finish, and then we have our US dollar prices topic. We'll do the same steps again, except this time I'll put in the euro prices topic name, EUR prices, click next, and go with the same default settings. So now I have two topics, the US dollar prices topic and the euro prices topic, which is what my application needs. And then we get onto the fun part, which is actually deploying some Java applications. Now, of course you could use Python or Node or Go or whatever you prefer, but in this lab we're going to stick to the Java applications we've developed specifically for the lab. This section of the lab gives you an architecture overview that talks through the exact architecture of the application and how it works. But what I want to show you quickly now is the Developer Sandbox. For this section of the lab, we're going to use an OpenShift cluster, and on that OpenShift cluster we have CodeReady Workspaces, which is basically an online IDE, similar to VS Code.
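As a quick aside before we move on to the sandbox: if you ever want to script topic creation instead of clicking through the UI, the Kafka Admin API can do the same thing. The sketch below is only an illustration; double-check the exact topic names against the lab instructions, and note you'd still need to supply the SASL/OAUTHBEARER connection properties shown earlier for it to be allowed to connect.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "<your-bootstrap-server-url>");
        // ...plus the SASL_SSL / OAUTHBEARER properties from the earlier sketch

        try (Admin admin = Admin.create(props)) {
            // One partition each; the managed service keeps three replicas of the data
            admin.createTopics(List.of(
                new NewTopic("usd-prices", 1, (short) 3),   // placeholder name, match the lab
                new NewTopic("euro-prices", 1, (short) 3)   // placeholder name, match the lab
            )).all().get();
        }
    }
}
```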
To access that, you go to developers.redhat.com, this link here, which is in the lab instructions. From here, you can scroll down and find the option to launch your Developer Sandbox. Now, if you haven't already signed up for the Developer Sandbox, this dialog will be slightly different: you'll have to fill in some quick details. I think you might have to fill in your name and your phone number, because this service is providing you with a lot of free computing power, so we just need to make sure it's actually a human using it. If you do get prompted to input a phone number, just put that in and you'll get a one-time code to confirm you're a human, and then you'll be let in to access the service. Once you do that, you'll be brought over to an OpenShift cluster, which you can see on my screen. All you need to do here is go up to the top right and choose CodeReady Workspaces. Once you do that, it brings you to CodeReady Workspaces; let me get back to the screen you'll see. You'll see a screen that looks something like this, and what you'll do here, following the lab instructions, is input a GitHub URL into CodeReady Workspaces. We're basically providing it with a template for the workspace we've created for this lab. This GitHub repository contains the sample applications we'll be using, and it also contains some configuration that says, hey, these are going to run using Java, so include Maven and other Java ecosystem tooling. Once you have that URL, you click Create and Open, and it starts to load up the workspace. That process can take two or three minutes because it pulls a bunch of container images and spins up some pods on OpenShift, but once it completes, you'll be brought to your workspace. I've already got my workspace running here to save us some time in the session. Once it loads, you'll see a UI that looks very much like VS Code, so if you're familiar with VS Code, you'll feel at home here. Following along in the workshop instructions, we need to open a terminal in this online IDE and run a Quarkus application, our Java application that will produce messages into our Kafka cluster. To do that, you go up to the top and select the terminal option. It asks you to open a terminal in a specific container; we're going to use the Quarkus Maven container image. Then you get prompted to choose which of the two template applications to open as a context. The first one is the generator application. This one provides the UI you'll see today for the application, and it also generates the US dollar data into our Kafka cluster. So select the generator. You can see that we're running in a container here that has the price generator application, so we can change directory into that project. Now we have to set four variables in our terminal session to allow this application to communicate with our Kafka cluster. You can find those variables referenced in the source code: open the generator app folder here on the left, open the source folder, go to the resources directory, and open up the application.properties file.
You can see here there's some configuration for how our application interacts with our Kafka topics and our cluster, such as how it serializes and deserializes incoming and outgoing data. But I just want to show you the four variables we're referencing that we need to connect to our Kafka cluster. In the application.properties here, down near the bottom of the file, we have our Kafka configuration, and we have these dollar signs followed by curly braces that reference those variables. The first one is the bootstrap server. As I mentioned earlier, you're going to need that bootstrap server URL. So we go down to the terminal at the bottom here, type the export command for the bootstrap server variable, and set it to the bootstrap server URL from the Streams for Apache Kafka UI over here, or from my notes where I saved it. If you need to find it again, you can just come back to the web-based UI, click on the dropdown, select connection, and you get your bootstrap server URL. Go back to your workspace and paste in the URL. The next variable we need is the client ID, so I'll export the client ID variable and set it to the client ID I got earlier and saved in my notes. Then we need the client secret; same thing again, copy it from your notes and put it in here. And finally, we need the OAuth token endpoint URI: same process, export the name of the variable and then get the token endpoint URI from the console.redhat.com UI. And I keep alt-tabbing back to my own local VS Code, but no, you need to go to your VS Code in CodeReady Workspaces. There we go. I have those four variables set now, and these will allow my Quarkus application to connect to the Kafka cluster I created on console.redhat.com. If you're curious, you can of course open up the Java folder here and take a look at some of the code. For example, we have this Price Java class here that just represents the US dollar price we're going to write to the Kafka topic. It's a simple object that contains three fields: a price, a currency, and a unique ID. This will get serialized into JSON and written to the Kafka topic, and you can see that happening here in the generator bean class. Basically, we have configured an outgoing channel. This is using MicroProfile Reactive Messaging, so we have channels, and we've configured a channel called generated prices; when records are written to it, they will ultimately be written to the US dollar prices topic in Kafka. We have this nice little method here that, every five seconds once the UI is open, writes a price to the channel. That's a Kafka record whose key is the generated price's unique ID, and whose value is a JSON string written to the topic. So fairly straightforward, in theory at least; I know there's quite a bit of code there, but in essence it's just writing strings of data to a Kafka topic. We can run this application by using the Maven quarkus:dev command, and assuming I didn't mess anything up here, it will start pulling in some Maven dependencies, and it should be pretty quick.
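While Maven pulls those dependencies, here's a rough sketch of what a generator bean along the lines just described can look like with MicroProfile Reactive Messaging and the SmallRye Kafka connector. This is not the code from the lab repository: the class name, channel name, JSON field names, and price logic are all illustrative.

```java
import java.time.Duration;
import java.util.UUID;
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Outgoing;
import io.smallrye.mutiny.Multi;
import io.smallrye.reactive.messaging.kafka.Record;

@ApplicationScoped
public class PriceGenerator {

    // Emit a new random USD price onto the "generated-prices" channel every five seconds.
    @Outgoing("generated-prices")
    public Multi<Record<String, String>> generate() {
        return Multi.createFrom().ticks().every(Duration.ofSeconds(5))
            .map(tick -> {
                String id = UUID.randomUUID().toString();
                double usd = Math.round(Math.random() * 1000) / 100.0;
                // Key is the unique ID; value is a JSON string representing the price
                String json = String.format(
                    "{\"id\":\"%s\",\"value\":%.2f,\"currency\":\"USD\"}", id, usd);
                return Record.of(id, json);
            });
    }
}
```

The channel named in @Outgoing is mapped to the actual Kafka topic through the mp.messaging.outgoing settings in application.properties, which is where those four exported variables get plugged in.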
Once it pulls in those dependencies, the Quarkus application will start in development mode, and it will start listening with an HTTP server on port 8080. It serves a nice UI we can use to see the generated prices in real time. There we go. So now it has started listening on 8080, and CodeReady Workspaces has prompted me saying, hey, the application is listening on port 8080, do you want to open it? And of course I do. I click on "open in new tab" here and get a little loading spinner while it waits those first five seconds for the initial US dollar amount to be populated. Oh, and there it is. You can see here it's a fairly basic UI: it's basically a table that contains the timestamp of when the records were written into Kafka, or rather when they were received by the UI (this isn't a timestamp coming from Kafka, though we could do that), and it lists the US dollar amount and the unique ID of the payload. Now, you'll notice that the euros column is empty, and that's because the euros column is going to be populated by our Kafka Streams application. So let's fix that and get it working. The process to do that is pretty much the same as the process you used to run this first application. You go up to the terminal option at the top, choose to open a terminal in a container using the Quarkus Maven dev container, and then select the streams converter application as the context. I'll go ahead and do that, and you can see down at the bottom here that you have the streams converter context open. If we look at what's in here, we can see it contains our Streams application. Let's take a quick look at the source code for this. If I open it up, go to the resources folder, and open application.properties, it looks very familiar. If we scroll down to the middle section here, you can see it contains very similar properties to the generator app, so it's going to connect to the Kafka cluster in the same way; it needs the four variables we defined earlier. It's also got some extra configuration for Kafka Streams. For example, we're telling it its application ID, which is basically the consumer group. If we had multiple partitions, we could spin up multiple instances of our Streams application to distribute the processing workload. The consumer group here would be the US dollar to euro converter. We also tell Kafka Streams what topics it uses as part of its topology, and we have other configuration here, such as how to handle offsets and how often to commit offsets. That's basically saying how often the Streams application records its current offset with the Kafka cluster, so when it resumes processing at a later point in time, it will resume from that same offset. Then, down at the bottom, we have some properties that are used specifically when we're running unit tests, so we don't need to worry about those for the lab. So I'm going to do the exact same thing again: export the bootstrap server, which I have here in my notes. This is exactly what we did for the previous app; we're just doing it again. I'll copy the service account client ID and client secret, and finally, since I'm not going to be able to type the OAuth token endpoint URI variable name without making a mistake, I'll copy and paste that one and then paste in the endpoint. There we go.
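For reference, the Kafka Streams related section of that application.properties typically looks something like the sketch below when using the Quarkus Kafka Streams extension. The values here are placeholders, and the exact property names and values in the lab's repository may differ:

```properties
# Illustrative sketch only; match the real names and values in the lab's repo.
# The application id doubles as the consumer group id for the Streams app.
quarkus.kafka-streams.application-id=usd-euro-converter
# Topics the topology reads from and writes to
quarkus.kafka-streams.topics=<usd-prices-topic>,<euro-prices-topic>

# Pass-through Kafka Streams properties, e.g. how often the app commits its offsets
kafka-streams.commit.interval.ms=1000
```

The commit interval is the "how often to commit offsets" setting just mentioned: it controls how frequently the Streams application records its current position so it can resume from there after a restart.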
So I have those variables set up now, and I can run our euro conversion application. The same thing again: we run the quarkus:dev command to start the application in dev mode. It downloads some dependencies from Maven again and starts up in a moment. This application, while it does listen on a port, I think it's set up here... yeah, port 8081, we don't actually need to open that. There's nothing notable going on at port 8081 in this application; it doesn't expose any kind of nice UI or anything. So I'll click no when I'm prompted to view the UI, and I can go back here. Oh wow, it's already working. You can see the Kafka Streams application started, and it has already processed all the prior payloads. You remember this column was empty a minute ago, but since the Kafka Streams application started, it processed all of the US dollar values and found the euro equivalents, and this front end received those values, because of the way it's designed to read them back, and filled in the necessary cells in the table. So it's instantaneous. You can see here that as soon as a new US dollar amount shows up, the euro value shows up immediately afterwards. And believe it or not, I know it seems like there's a bit of a delay, but that delay is actually coded into the UI so that you can see they're not arriving at the same time. If I hadn't put that delay in, it would look like they arrived at the exact same time. So believe it or not, it's milliseconds we're talking about here; it's incredibly fast. So let me stop the... oh, weird, my terminal isn't responding. What I wanted to do is stop the euro application. Oh, I did. So you can see now I've stopped the euro application, so euro amounts aren't showing, and if I restart it, it should start populating the column again. I don't know why I can't see anything beyond this; hang on one second. I'm just going to close this terminal and open a terminal again for the converter application. Unfortunately, that means I have to set the variables again. I don't know why the terminal stopped responding to my inputs. Let's hope this goes smoothly. I'll reset those variables and show you how the application can resume from where it left off. So I need the client secret... something always goes wrong in these demos or labs, doesn't it? Oh, I typed the variable name wrong. Twice. Client ID, client secret... again, my secret here. And then the OAuth endpoint. I want you to see the logs as well for the Kafka Streams application, because it wasn't printing the logs either; whatever happened in that terminal, it stopped showing any output from the application, which would have been helpful to see. Oh, I'm in the wrong directory. Let's try that again. The application should start up like it did previously, and the euros column should populate, assuming there's not some orphan process running here. I don't think there should be, because it should be a new container. Fingers crossed this goes smoothly. Awesome, there we go. You can see the logs now: it started up and it's going to start processing. You can see here it says it's converting the US dollar price into a euro price, and it will log a new line like this every five seconds or so. If we go back to the UI, there you go, you can see them. So if I quickly stop the application here using Ctrl+C, you'll see the UI won't have any euro values here.
And if I go back and start it up again, the blank columns will be populated with the euro values in, hopefully, a few seconds. I know the Kafka Streams application does take a moment to start up, configure itself, get its offsets, and do all those other startup tasks. I want to switch back, but I don't want to miss seeing them all populate, so maybe I just need to be patient. But let's see. There we go. Yep, you saw them all populate right there. So it's very quick. And that's basically the end result of this lab: you deploy an application that generates dollar prices and then use another application to consume those and convert them to euros. Well, there's one more part of this lab. At the bottom, we do some experiments. I've already shown you one of the experiments, which was to stop and restart the Kafka Streams app; you saw how it could continue from where it previously left off, resume with no loss of data, and continue processing from the record it stopped at previously. Now, something else we can do is modify the Kafka Streams topology, so we're going to write a very, very small amount of code. If we go to our editor here, I'll stop the Kafka Streams application using Ctrl+C, go over to the source directory, and open the Java folder. If you go to the model folder, you can see we have the Price object again. This is where something like a service registry would come in, because we have some duplicate code here and in the generator application for this data schema, whereas if we were using Red Hat Service Registry, we could automate generating these schemas, which is quite nice, and also version them and make sure they're backwards compatible, along with all sorts of other neat features. But for the sake of this lab, we just want to focus on one thing, which is Kafka Streams. If we open up the streams topology producer file, you can see that the code required to process the US dollar prices and convert them to euros is actually pretty short, right? There are lots of comments in here and I've added line breaks to make it nice and readable, but there's not a lot in this file. We have this ObjectMapperSerde (Serde stands for serializer/deserializer) and we pass it our Price class. Basically, that does some wiring (my Java vernacular won't be as good as some people's here) that allows it to serialize and deserialize data to and from that Price class, which is pretty amazing. It's really nice and easy to do. You can of course use Jackson annotations if you need more complex processing, but in our case it's all simple top-level properties, so it works fine. Then we have this builder.stream call that we tell to read the prices topic, and we tell it that when it consumes records, the key is a string and the value is our Price class; we pass the Serde here so it knows how to take the JSON and instantiate it as a Price object. Then we map over the values and perform our conversion. But what we're going to do now, before we map the values, is make a small modification to the application: we're going to add a .filter step here. The filter takes a predicate function, and that function decides whether the record gets passed on to the next node in the topology, which is the mapValues step.
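Putting that together, here's a rough sketch of what a topology producer looks like once the filter step is added. The topic names, the Price field names, and the conversion rate below are illustrative stand-ins for what's in the lab's repository; the 4-dollar threshold matches the one the workshop guide uses in the next step, and ObjectMapperSerde is the helper from the Quarkus Kafka client library that was just mentioned.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import io.quarkus.kafka.client.serialization.ObjectMapperSerde;

public class TopologySketch {

    // Stand-in for the lab's Price model: a value, a currency, and a unique id.
    public static class Price {
        public String id;
        public double value;
        public String currency;

        public Price() { }

        public Price(String id, double value, String currency) {
            this.id = id;
            this.value = value;
            this.currency = currency;
        }
    }

    static final double USD_TO_EUR = 0.85; // illustrative conversion rate

    public Topology buildTopology() {
        // Serde that converts the JSON payloads to and from Price objects
        ObjectMapperSerde<Price> priceSerde = new ObjectMapperSerde<>(Price.class);

        StreamsBuilder builder = new StreamsBuilder();
        builder
            // Source node: read keyed JSON records from the USD topic (placeholder name)
            .stream("usd-prices", Consumed.with(Serdes.String(), priceSerde))
            // The new step: only pass prices above the threshold on to mapValues
            .filter((id, price) -> price.value > 4)
            // Map node: convert the remaining USD prices into EUR prices
            .mapValues(price -> new Price(price.id, price.value * USD_TO_EUR, "EUR"))
            // Sink node: write the converted records out to the euro topic (placeholder name)
            .to("euro-prices", Produced.with(Serdes.String(), priceSerde));

        return builder.build();
    }
}
```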
So this filter is able to basically drop records from being processed any further. Now, if I go back to the workshop guide here, you can see the filter we're applying basically takes the US dollar value and checks whether it's greater than $4. If it is, it gets processed by the mapValues step; if it's less than $4, it basically doesn't get processed. I'll go ahead, make that change, and restart the application. What's going to happen now is that once this application restarts, you'll see that all these columns here will hopefully get repopulated; like I said, it should resume from where it left off, but all of the rows containing a dollar value of less than $4 won't have a corresponding euro value. OK, that was really fast, awesome. You can see here that all the rows that were less than $4 did not get processed by our Streams application. And it's that easy to do conditional logic in a Streams application. There's a lot more you can do here: there are tons of different functional operators you can apply to process your records, and you can even do things like I mentioned earlier, like writing to an internal state store. So you can do aggregations, which aggregate data and store it in the state store local to the application, and you could query it via a REST API. The Kafka Streams library is kind of incredible in what you can do with it. And I think that wraps up the workshop piece, or at least me demoing the workshop piece; I'm sure some of you are still working through it. In terms of what's next, we obviously want you to try this yourself, so if you were just watching, be sure to head over to red.ht slash try Kafka and give the service a try. And by the way, if you were following along or plan to follow along, take note of this link as well, which basically allows you to provide feedback on our UX design, the user experience of the product. If you found there were rough edges, or certain parts of the UI were a little bit misleading or confusing or could just be improved, we're happy to hear it from you. Oh, sorry about that, you might have heard my dog. But yeah, if you have any feedback, we're happy to hear about it, so go to that link and you can provide it to our UX team. With that, I'll stop sharing my screen unless people need to see something, and I can engage with you in the chat here. I know that was a lot to take in, in 15 minutes, but I hope it was enjoyable and a nice introduction to Streams. I'm taking a look at the chat here; it looks like Bernard has been helping everyone out. But yeah, if anyone has questions, feel free to put them in the chat right now; we can hang out here for another few minutes. If you'd like me to explain anything that I showed on screen again, feel free to throw that in the chat or ask any questions. Oh yeah, one other thing I should mention: we've done a few of these sessions. I mentioned earlier, before we started, that Bernard himself has delivered a lab on the service in general, getting started with it, and he also delivered a lab on the service registry component. I touched on service registry when I was showing you that Price object in the code base and how we could use a service registry or schema registry to store and version our data formats, so be sure to check out that lab.
I think it's available on demand, or if you go to the DevNation channel you'll be able to find it; there's a full recording, and it's well worth checking out if you're interested in diving deeper. I see a question here from Omar asking how we compare with Strimzi and, I think, Confluent. So, what's the difference? I'll talk about Strimzi first. Strimzi is self-managed, right? With this product, we're managing the Kafka cluster, we're managing everything for you: the brokers, the underlying infrastructure. Whereas with Strimzi, you're generally going to be taking on those operational burdens yourself. Now, Red Hat productizes Strimzi as AMQ Streams, or at least Strimzi is part of the product known as AMQ Streams; I'm sure there are some finer distinctions that the product management team would school me on, but generally speaking, AMQ Streams is a productized version of Strimzi. So if you want to self-manage Kafka clusters, you can do that using Strimzi or AMQ Streams. And then Confluent: Confluent is more in line with what we're using today, which is the OpenShift Streams product, the managed Kafka product from Red Hat. Obviously, there are some differences with their service. They've been around longer; for us, we've been running this service for, I think, a year. So they've been around longer and they have different features. I know KSQL, for example, is one of those features, which is, I don't want to get too high level, but it's a higher-level abstraction above Streams that allows you to do very cool things with an SQL-like syntax. But our service, in terms of the interaction model, it being managed, is similar to Confluent. I can't compare feature to feature right now without a longer discussion, but in general, the model is quite similar in terms of how you would consume OpenShift Streams compared to Confluent: they're both managed Kafka services. And Strimzi, like I said, is a bit different; it's self-managed. So I hope that answers the question. If there are no more questions, I think we're scheduled to wrap up in five minutes or so, so make sure to get your questions in now before we wrap up. Oh, two minutes actually. Sorry, yeah, I need to check the time before I talk. And Keith, I didn't read through your problem; are you getting any further, or are you still stuck? I'm happy to talk through it if that would help. I'm just going to scroll up and see if I can follow along quickly. Okay, so Keith, it sounds like you're getting... are you running, it shouldn't matter, I don't know if it will matter, but are you running an ad blocker? A uBlock or AdBlock plugin, maybe? Maybe that's affecting the UI somehow, because I know it's a JavaScript front end, so maybe some of the Ajax calls it's making are getting flagged or misidentified. Oh, perfect. Okay, if you and Bernard are going to hang out and work through it, that's perfect. But it could be that; I feel like uBlock or AdBlock has affected people in the past, so just check that one as well, though usually, in an incognito window, those extensions don't run. Hopefully you get it figured out. Other than that, I think it's time to wrap up. I hope you all enjoyed the session. Hang out in Slack with us, ping us on Twitter or LinkedIn if you have any questions, or whatever channel you want to reach us through. All right, have a good one, and thanks for coming.