So today I'm going to talk about how to build an event-driven architecture. How many of you know what an event-driven architecture is? OK, you're welcome to take the stage. We're going to look at how to build an event-driven architecture using a number of different technologies, and I'm hoping to demonstrate most of it. If time permits, we can do an audience-driven run of the demo as well. So we'll look at what EDA is, the components you can use to build out an event-driven architecture, the demonstration I mentioned, and where you can learn more. The demo we have today is something you can build in your own environment and try out: we have the source code, the Ansible scripts, and the entire gamut of resources for you to try it.

OK, what is an event? This is the definition I found when I searched Google for "what is an event". An event is anything that happens or takes place, especially something of importance. And today is an important event for us, because you're all here: I'm here to learn from you and you're here to learn from me, so collectively we're going to be learning from each other. What is very important here is the term importance. Many things happen around the world; some of them have an impact on something else, or you react to a particular event.

So from a software parlance, from a technology standpoint, what is an event? Again, it's an occurrence. The source could be an IoT device, somebody speaking, another software system emitting some sort of event or action, or something going wrong in manufacturing. Basically, we track certain things, and the event typically originates asynchronously, because nobody is going to wait for something to finish, right? The source emits an event, and the system is waiting to respond to it.

To define event-driven architecture, I went to Gartner. Gartner says it's a design paradigm in which a software component sits idle and executes in response to receiving an event notification. I think there are some students here as well, so it's good for us to start from the foundation: you have a system emitting an event, and you react to it.

It could also be behavior capture. How many of us have discussed something at the dining table and all of a sudden your phone starts suggesting it? For example, the other day I was looking at some shoes, and then all the advertisements became shoes: you want red shoes, you want blue shoes, whatever. The whole world is connected like that. It's a little scary, but you get my point. Events can capture the behavior of users, of human beings, the way we interact with systems, the behavior of other systems, of IoT devices, or any of those edge use cases as well.

Then there's CQRS, Command Query Responsibility Segregation: a way of keeping the querying of a system separate from the actual updates to the database. Think of a system where you have a database and you don't want to give another system access to the full set of CRUD operations on it. You may want to keep the querying aspect separate from the updating aspect.
In that case, queries typically go to a cache rather than the database itself. So if there's a change in the database, you can use event-driven architecture to update the cache as and when the change happens. And there are many other use cases: auditing; streaming between data centers, if you want to synchronize data between different systems or different data centers altogether; or receiving events from an entire hybrid cloud deployed across different locations, even different geographical locations, and then making decisions from that. These are some of the use cases; I'm sure there are tons more out there than we could ever cover.

So, to build real-time event-driven applications, this is how I see it. On the left-hand side you have a producer, somebody who produces events. On the right-hand side you have somebody receiving that event and taking an action on top of it. The action could be analysis or prediction, because the world is full of AI/ML these days; open anything and it has to have AI/ML, so today we'll also have some AI/ML for the fun of it. So you have a producer and a consumer. The consumer receives the data, there may be some transformation of the data involved, and then you make some decisions from it: store the data, pass it on to somebody else, make business decisions, any number of things.

In between these two, what we want to do is bring in a streaming platform. A streaming platform acts as a data store, but not like a database; it's built for streams. It can handle concurrent requests coming in, and it can very easily pass the information on to different consumers.

Then we introduce an easy way to write into the streaming platform and read from it. Imagine all of us, and I think we have many developers and coders here, having to learn how to write into a streaming platform and how to read from it. That would take a lot of effort, because the data sources are varied and the data processors vary heavily as well; multiple different systems all need to understand each other. That is where connectors come into the picture: source connectors and sink connectors. If you want to get your head around the naming, this is how I see it: in your kitchen, you open a tap and the water flows; where does it go? Into the sink. The source is the water flowing from the tap, and the sink is your outlet. So your input side is the source connectors, and your output side is the sink connectors. You need connectors that can read from your source and push into the data streaming platform.
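To make the connector idea concrete, here is a minimal sketch of what registering a source connector against Kafka Connect's REST API can look like, using the stock FileStream example connector that ships with Kafka. The connector name, file path, and topic are placeholders, not anything from this demo:

```json
{
  "name": "file-source-demo",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/reviews.txt",
    "topic": "reviews"
  }
}
```

You would POST this to the Connect REST API (port 8083 by default) and the connector starts tailing the file into the topic; a sink connector is configured the same way, in the opposite direction.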
And on the other side, the data has to be read from the data streaming platform by your consumers as well, so you need an easy way of managing all of that.

Now, we're not talking about one data source here; we're talking about multiple data sources and multiple consumers. What does that actually mean? All of these systems need to understand each other. How many of you have heard of the OpenAPI specification? OK. Take it very simply: we have a Java SDK, right? If there were no SDK, no specification available to us, it would be extremely difficult to write code, because we wouldn't know what the contract looks like. Similarly, when two systems want to talk to each other, they need to agree on what the data format is going to look like. In the REST API world, that specification is called OpenAPI: a standard way of describing how your JSON or XML, your data structure, is going to look. Similarly, in the event-driven world everything is asynchronous, but there are still systems that need to understand each other, and that is where a schema becomes important. A schema is nothing but a grammar for the data you're going to transfer. You say the outermost element is going to be address, and inside the address element you may have street and house number and so on. Basically, you define the structure of the data format you're going to transfer from one system to another. Ideally, before you start building all of this, you would define a schema so that everybody understands what is going to be transferred, because in the end, the data produced on the left-hand side has to be understood by the consumer on the right-hand side. To make sure they all understand each other, we bring in a schema, and in the asynchronous world this specification is called AsyncAPI; I'll show a tiny sketch of one at the end of this section. With that, there is a way for us to test that the data we produce and the data we consume adhere to a particular standard based on the schema.

Why am I talking about all of this? Some of it may be new to you, and within 45 minutes it's difficult to both explain and absorb everything, but once you understand these concepts, whenever you need to build an architecture like this you'll know which elements to consider, and you can come back and refer to this.

OK. We've spoken about data sources, data governance, data processing, and data analysis, but an event-driven architecture is not just about data getting passed from one thing to another. We have to build the data sources, build the data processors, and make sure the data flows through all of them, and that is why you need a way to develop all these different kinds of applications. And these applications are polyglot these days, right? Nobody writes only in Java (we also spoke about Quarkus), or only in C++, or even only Node.js. Any architecture is full of different kinds of technologies.
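Coming back to that schema idea for a moment, here is the promised sketch of a tiny AsyncAPI document describing a review channel. This is an illustration, not the demo's actual contract; the channel and field names are hypothetical:

```yaml
asyncapi: '2.6.0'
info:
  title: Product Reviews Events   # hypothetical name
  version: '1.0.0'
channels:
  reviews:
    subscribe:
      message:
        payload:
          type: object
          properties:
            productId: { type: string }
            rating:    { type: integer }
            review:    { type: string }
```

Both the producer and the consumers can validate their messages against this one document, which is exactly the "grammar" role described above.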
Building all of these polyglot pieces is where Red Hat Application Foundations comes in: it provides a great set of components for your entire architecture, right from your application runtimes, to building, exposing, and managing your APIs, to events and messaging using Kafka, or ActiveMQ from a broker perspective. We also spoke about serverless today, right? We'll have a quick recap of that going forward, and then of course you have to consider the entire tooling aspect as well.

If you look at this application architecture with the technologies we offer from Red Hat overlaid on it (these slides will be made available to you, so I'll just move forward): from the producer's perspective, writing your applications or connecting to devices via MQTT, REST APIs, or other protocols; from the consumer's perspective, your Quarkus, serverless, or Python code; from the streaming perspective, Red Hat AMQ Streams, which is based on Apache Kafka; and from the connectors' perspective as well, we have technologies that help you build all of this. Is this all that is needed? I would not say so; you will need other components too, but this gives you good fundamentals to get started.

How many of you have heard of Kafka, or used Kafka? Excellent, so I don't have to go too deep. I had one slide called Kafka 101 in case people weren't sure what Kafka is.

When we talk about the Red Hat portfolio, there are a few things I want to call out. Application Foundations helps you build your applications, and in the next talk, Ritesh will show how OpenShift AI can be used to extend this particular demo we're going to look at.

There are two or three technologies I want to quickly touch on, because the demonstration is based on them. The first is data streaming using Apache Kafka, and in this demo Kafka runs on our enterprise Kubernetes platform, Red Hat OpenShift.

OK, Kafka 101: does anybody want me to talk through this? You don't have to feel shy; we all start at some point. Basically, you have producers and consumers. You have a Kafka cluster with multiple brokers if you want; typically you would have at least three brokers for HA/DR, three being the magic number, the minimum, and it has to be an odd number, right? That's what we've all learned. Then you have multiple topics. And on the consumer side it could be any number of endpoints: a dashboard, which we'll see today; data stores; triggering alerts saying a security breach has happened; and so on.

And what does Red Hat bring to the Kafka world? When you're building an event-driven architecture, Kafka alone is not enough; like I told you, it's an entire ecosystem. You need to build applications too: after you stream the data, what do you do with it? That is very important, right? So we're talking about using Quarkus as a reactive framework.
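Going back to Kafka 101 for a moment: for anyone who hasn't written one, a minimal plain-Java producer looks roughly like this. The bootstrap address and topic are placeholders, not this demo's configuration:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ReviewProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Address of any broker in the cluster (placeholder value).
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key and value are plain strings here; real payloads would follow a schema.
            producer.send(new ProducerRecord<>("reviews", "user1", "Great t-shirt!"));
        } // close() flushes any buffered records
    }
}
```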
And then there's Strimzi, a CNCF project that originated at Red Hat and was contributed to the CNCF. Strimzi helps you run Kafka on OpenShift easily: we have OpenShift operators that help you build, run, and manage Kafka.

The next component in this particular recipe is OpenShift Serverless, for hassle-free eventing. There are really three parts to OpenShift Serverless, but we will focus on two: one called serving and the other called eventing.

First, serving. The reason I asked earlier who has built applications for OpenShift is that you would have seen a Deployment YAML, a Route YAML, multiple YAMLs while building an application. But if it's going to be a Knative-friendly service, a short manifest is all you write: right at the top you give the API version and the kind (it's serving.knative.dev), you give your container image name, and that's it; I'll show a small sketch of one in a moment. When you apply it, a Deployment gets created automatically, along with the corresponding Route and Service, and you also get scale-to-zero. In this day and age of pay-as-you-go models, saving on infrastructure costs becomes very critical. And you can connect Knative serving to scaling in general, not just scaling down to zero but also scaling up depending on how much CPU or memory you're using; basically, you can attach it to certain metrics in your cluster.

The next part of Knative is called eventing, and since event-driven architecture is today's topic, we're going to look at how Knative eventing helps us build one. You have a source and you have a broker. I think we all know what a broker is: an intermediary that lets something else act on an event. Events flow from the sources (the "two, one, two" on the slide basically means there are different types of events) into the same broker, but each carries a number against it; think of it like a type. Then different triggers are applied, and each trigger routes to a different consumer. Maybe it's slightly difficult to visualize, but when I show the demonstration I'll show you how a particular event looks. When a message flows through, each message has a tag saying this is a type A event or a type B event, but all of them flow through the same broker. The broker looks at the type of the event and can trigger different endpoints accordingly; this is the broker-plus-trigger pattern in Knative.

Now, OpenShift Serverless: as you know, at Red Hat everything is open source. We contribute heavily back into open source, and whatever we develop is open source too, so OpenShift Serverless is based on Knative. We just spoke about Knative serving and Knative eventing; the data that flows through is in a format called CloudEvents.
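Here is the kind of minimal Knative Service manifest I was describing; the name and image are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: review-simulator            # hypothetical name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/review-simulator:latest   # hypothetical image
```

Apply just this, and the Deployment, Service, and Route get created for you, with scale-to-zero out of the box.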
So, what are CloudEvents? CloudEvents is an industry standard. Why is it called CloudEvents? Because across the different clouds we have, AWS, IBM Cloud, Azure, Google Cloud, Alibaba Cloud, all of them, customers in the end build hybrid cloud environments, right? So there needed to be a way to standardize messages across different clouds, literally standardizing the events produced in the clouds, and that's where CloudEvents comes into the picture. Has it been fully adopted? No, it's a fairly new initiative; I don't remember the year it was created, but think of it as a standard that more and more people are adopting, and Knative eventing has adopted it too. The reason I'm talking about this is that the entire system we're going to look at uses CloudEvents: in the previous slide, when the broker looked at the event type, it was looking at the CloudEvents headers to route the message to the right system.

Now, we've spoken a lot about data flowing here and there, but data that lives in silos and is never acted upon is not very useful. When we bring in the magic sprinkles of AI/ML, that is when you're able to make better decisions faster. You have data streaming in real time through Kafka, and then you bring in AI/ML to make intelligent decisions on top of it. In this demonstration I'm using a model from Hugging Face that does certain decision-making for us. The next demo, which Ritesh will present, is a continuation of the same use case: he'll talk about training the model and the MLOps side, so it'll be a good segue for you to see the entire breadth of this use case. That's where OpenShift AI, the offering from Red Hat, helps you with a number of things. I don't want to spend time on it here because I hope you'll stay for his session as well.

So we've come to the point where we can start looking at the demonstration. The fictitious customer is something like an Amazon: a retail company, basically, trying to sell stuff. They want to understand whether customers actually like their products or not, and what they'd like to do is start looking at the product reviews customers leave. All of us leave product reviews, and typically, especially when we're unhappy with something, we're very quick to write "this was bad" or "it was horrible"; at least that's what I've seen. Anyway, they want to look at how users post their product reviews, and they want to do two things with them. One is to ensure there is no foul language in the reviews, because of course we don't want abusive language showing up on the website. The other is to understand the sentiment behind the reviews: for example, what is the sentiment of the clothing category? Why does that help? Think of a situation where the sentiment rating was great until about last month and all of a sudden it has dropped. You can correlate that drop in sentiment with what has happened from a business perspective: have your prices increased? Have you got a new vendor who isn't producing your clothing to the standard you'd like?
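As a quick aside, since the whole demo rides on CloudEvents: a product-review event in CloudEvents' structured JSON form might look roughly like this. The attribute values are illustrative, not the demo's exact ones:

```json
{
  "specversion": "1.0",
  "type": "submit-review",
  "source": "globex-web",
  "id": "review-0001",
  "datacontenttype": "application/json",
  "data": {
    "product": "t-shirt",
    "rating": 5,
    "review": "This t-shirt is brilliant"
  }
}
```

The type, source, and id attributes are what the broker inspects for routing; we'll see them again as HTTP headers in the demo.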
Either way, you're able to correlate actual business changes with what you see in the data and make corrections: you might offer a discount, or roll back certain catalogs from your system, and so on.

And this is how they're looking to implement it. You have the product reviews flowing through Kafka, and then you introduce AI/ML for moderation and for analysis of the reviews. In the end, the moderated reviews get displayed on the website, and you also get a sentiment analysis dashboard.

This is a fairly large architecture, but I'll walk you through the different parts of it. The user submits a review, which goes through a Quarkus service. We all learned about Quarkus: it provides very easy out-of-the-box functionality for building REST APIs very quickly. The entire code base of this demonstration is available to you; at the end I'll show you a QR code, so please take a picture and look at the code base at your own leisure.

The service submits the review into OpenShift Serverless. Now, why not directly into Kafka? To submit into Kafka, you need to understand the language of Kafka: how to write to it, what the message format should look like, how to produce, and all of that. Instead, remember the sources and sinks and brokers I spoke about? Serverless exposes all of this as an HTTP endpoint, and as developers most of us are very familiar with HTTP endpoints, so adoption of these technologies becomes really quick. At the same time, OpenShift Serverless does not restrict access to Kafka: you can use Kafka directly or go through OpenShift Serverless, and depending on your use case and the level of expertise your team has, you can decide which way to go.

Once the messages flow through the Kafka sink, they are stored inside Kafka topics; OpenShift Serverless acts like a broker between Kafka and you. Then, on the other side, there's the broker-and-trigger pattern: depending on the type of a message, it triggers different AI/ML models. These are Python services that use AI/ML models: one analyzes the language, the other analyzes the sentiment. Once the sentiment is analyzed, the sentiment of every message is stored in a time series database, in this case InfluxDB. InfluxDB is not a Red Hat-supported product, but it is an excellent open source one, and we use Grafana for dashboarding on top of it. On the other branch, once the language has been analyzed, whatever has been tagged as acceptable language gets stored in a database.

What you see down here are all the different technologies used to build this application: Kafka and Strimzi (Strimzi helps you run Kafka on OpenShift), Quarkus services, Grafana for the dashboard, Knative for serverless, Tekton pipelines and Argo CD for setting up the entire system, plus Python, InfluxDB, and OpenShift Data Science, now called OpenShift AI: all the components necessary to build this out.
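To give you a flavor of what those Python scoring services do, here is a minimal sentiment sketch using the Hugging Face transformers pipeline. The default model and this exact code are assumptions on my part; the demo's actual services are wired into Knative and Kafka and may use a different model:

```python
# Minimal sketch: score one review's sentiment with a Hugging Face pipeline.
from transformers import pipeline

# Downloads a default sentiment model on first use (assumption: the default is fine here).
sentiment = pipeline("sentiment-analysis")

def score_review(text: str) -> dict:
    """Return a POSITIVE/NEGATIVE label and a confidence score for one review."""
    result = sentiment(text)[0]
    return {"label": result["label"], "score": result["score"]}

print(score_review("This t-shirt is brilliant"))
```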
OK, now let us switch over to the demonstration and walk through it. Is it clear at the back? Are you able to see the screen? Awesome.

These are all the various components I have. The web application is built on Node.js plus Angular; I have a product reviews Quarkus application, and it talks to a PostgreSQL reviews database. The Globex web application looks like this page: it lists all of the products it has, I can browse around, and I can log in, which I do using Red Hat SSO. Once I log in, I'm able to leave review comments. So let us write something about Quarkus and click on Submit Review.

Now the review has been submitted; let us go have a look at what this message looks like when it gets passed into Kafka. What I'm using here is Kafdrop, a message viewer that helps you visualize Kafka messages. First the messages go into a topic called Globex reviews. It says the last offset is one: there is only one message, the one I just typed, and this is it. I'm the user, this was the product, and this was my comment.

Now, what I want to call out here is this: you can see all these "ce-" headers, ce-id, ce-source, ce-type, and so on. These are the CloudEvents attributes I spoke about. OpenShift Serverless looks at the message, sees these headers, and says: OK, this has a submit-review type and a submit-review source, so what should I do with it?

Let us go back to the Knative side. This is the Kafka source, and this is the broker. If you look at the broker, there are three triggers here, right? This is the broker-trigger pattern I spoke about. If I click on the first one, you can see the details of that particular trigger: it says review-moderated, a review-moderated event, so that is not the one we want. What did we see? We saw submit-review, and this trigger also says submit-review. There are two triggers with the same type because we have two consumers: one for analyzing the sentiment and another for moderating the review. What does this mean? The broker invokes both of these services whenever a matching message flows through. So whenever a user submits a review on the product page, the data flows through Kafka, the broker reads from Kafka, and it triggers these two services. Both are written in Python, and both are Knative services. Ideally you could scale them to zero, but because it's a live demo I didn't want to wait out that cold-start lag, so I kept the minimum at one. But if you look at my simulator over here, that one is at zero.
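Before we move on, for reference: a trigger like the two we just inspected can be expressed in YAML roughly like this, filtering on the ce-type attribute we saw in Kafdrop. All the names here are hypothetical:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: review-sentiment-trigger    # hypothetical name
spec:
  broker: default                   # hypothetical broker name
  filter:
    attributes:
      type: submit-review           # matches the event's ce-type header
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: review-sentiment        # hypothetical Knative service
```

Two triggers with the same filter but different subscribers give you exactly the fan-out we just saw: both services fire for every submit-review event.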
We'll have a look at that as well. Let us now go back and see: that particular message has flowed through and is displayed right over here.

Let me do one thing: can you take out your phones and see if you can scan this? I don't know how long it's going to take, because the network inside the room is quite slow. Has the page come up for anybody? No? OK, let us not spend time on that; we can try it later. I'll keep it running for a bit.

Before opening the simulator, let us have a look at what is actually happening on the dashboard. This is the Grafana dashboard. Right now, because I said that t-shirt is brilliant, it's telling me the sentiment is fantastic, but we have just one review comment. So let us go and simulate some review comments. The way I'm going to do that is with a review simulator written as a Knative service. When I click on this, watch: you can see this particular pod start appearing, and once the blue circle comes in, the pod has gone from zero to one.

Now, what should we simulate? Let us try the clothing catalog and click on Execute. By the way, Quarkus makes it very simple to expose endpoints through a Swagger UI: once you build a Quarkus service with just a few endpoints, a couple of annotations are all you need to expose them. You don't need Postman or anything extra; the endpoints appear in the Swagger UI directly and you can start posting.

When I click Execute here, the result says 25 product reviews for clothing have been submitted. Let us click a few more times, and then let me pick bags and click that a few times too. Now when we go back to the Kafka messages, you can see it's really quick: 100 messages have gone through almost instantly, and this is not even fine-tuned Kafka, just out-of-the-box Kafka. We saw the messages flowing through; let us also look at the moderated topic, because there are other topics as well, right? Since I have not introduced any abusive language, you see the same number of messages here; if there were abusive language, you would of course see fewer messages passing the moderation check. And from the sentiment analysis perspective, again, all 99 have gone through, and there's an additional score here saying this particular review text scored three, so the response was positive. Not all of them are positive, though, as we'll see in the dashboard.

How does this dashboard work? Again, the data comes from Kafka: it gets read from Kafka and pushed, through the same serverless architecture, into InfluxDB, and InfluxDB is the time series database used as the source for the Grafana dashboard.
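As a sketch of that last hop, writing a sentiment score into InfluxDB from Python might look like this, assuming InfluxDB 2.x and the influxdb-client package. The URL, token, org, bucket, and measurement names are placeholders, and the demo's actual consumer may differ:

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Connection details are placeholders.
client = InfluxDBClient(url="http://influxdb:8086", token="my-token", org="globex")
write_api = client.write_api(write_options=SYNCHRONOUS)

point = (
    Point("review_sentiment")       # measurement name (hypothetical)
    .tag("category", "clothing")    # lets Grafana group by product category
    .field("score", 3)              # the sentiment score we saw on the message
)
write_api.write(bucket="reviews", record=point)
```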
And the originator of this entire dashboard is sitting right over here: Ritesh, we collaborated on this demo. As you can see, the last sentiment overall was positive, but the last sentiment on bags was negative; I think somebody else is also typing in comments, which is why you see the sentiment going up and down. So 77% is positive, 15% is negative, 18% is middling. That is how you can build this out; at this point we have only tabulated the clothing and bags categories, but of course you can try it for any number of others.

Has the screen come up for you all? Refresh? There's an answer for everything, including world peace. No? Page opening for nobody? That's very sad. Somebody? Anybody? Yes? All right. You see the kebab menu, the typical Bootstrap menu right at the top? Expand that menu, click on the login button, and use any of these usernames; the password is "openshift". Then pick any product and leave a review comment. Let me see if there are more reviews flowing in. OK, still no more reviews.

While we're doing this, there is one more built-in demonstration here, because I thought the main demo would take a lot more time. On the home page you see one product... where is that product? I think everybody's overloading the system. Oh nice, my system has gone down. OK, at least I finished my demo. What happened? OK, it's a network error. Network error, network error, network error. OK guys, stop using the system, OK? Some problem; I'll blame it on the cloud. It is definitely not my code, because I finished the demonstration and you all saw it working.

The other thing I wanted to show is the event streaming side of the architecture. OK, the system is up again. So, event streaming: there is something called Kafka Streams. Have you heard of Kafka Streams? For people who don't know it, Kafka Streams is an API that helps you do somewhat more complex processing on the data: it picks your data up from Kafka, and you can aggregate it and build scores out of it as well. That is what we have here, if the page deigns to come up.

For example, let us assume I like these products, so I'm generating a lot of activity saying I love them, and all of this keeps getting pushed into Kafka, the same Kafka. The broker is still the same, but where the first use case went through the serverless architecture, in this other use case we're using Kafka as a streaming platform, and if you go and look, all of the events start appearing over here. What we're doing in the background is looking at the Kafka messages using Kafka Streams. Let us go back to Kafdrop so you can see what I'm talking about: in Kafdrop, these are the tracking messages going through Kafka, a different topic altogether. These are all the web activities that have flowed through, and we aggregate all of these messages using the Kafka Streams Java API to build out the product scores.
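A minimal sketch of that kind of aggregation with the Kafka Streams Java API: count tracking events per product as a running popularity score. The topic names and serdes are assumptions, not the demo's actual topology:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class ProductScoreTopology {
    public static void build(StreamsBuilder builder) {
        KTable<String, Long> productScores = builder
            .stream("tracking-events", Consumed.with(Serdes.String(), Serdes.String()))
            .groupByKey()   // assumes the record key is the product id
            .count();       // running count per product = a simple popularity score
        // Publish the continuously updated scores to a downstream topic.
        productScores.toStream()
            .to("product-scores", Produced.with(Serdes.String(), Serdes.Long()));
    }
}
```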
The product score keeps looking at which products are being accessed frequently and builds the top five products for me, and that helps me build, not this gateway exactly, but this page: the most liked products. So Kafka as a streaming platform helps you with multiple use cases. One was the event-driven architecture perspective, where Kafka alone is not enough and you need plenty of other scaffolding technologies around it. Then we looked at streaming based on the Kafka Streams API, and you also saw the serverless technologies.

Let me see if I have the Python code. Yes, this is the Python code: it very simply picks up the model we're using, which comes straight from Hugging Face. The data arrives from the sink, via Kafka and the serverless architecture, and we make a decision on whether the language is abusive or non-abusive.

OK, that's primarily the demonstration I had for the day, and this is the pattern I spoke about. Please take a picture of this QR code: the page it points to tells you how to set all of this up in your own environment and gives you a guide stepping through each part, so you can do it at your own leisure. There are also a couple of books for you to read through; hopefully they're all available for free on redhat.com and developers.redhat.com. Just go to the Red Hat assets and you'll be able to download all of these PDFs. That's it for the day. I hope you enjoyed the session; if you have any questions, feel free to ask.