So this presentation will be about Kafka and AI/ML, and we have two speakers. The first one is Jaya. Jaya is a Marketing Manager at Red Hat on the application services team. She loves to talk about Red Hat technologies and how they can help customers address their challenges, and she has a lot of experience working with partners as a full stack developer. The second presenter today is Ritesh. Ritesh is a Principal Architect at Red Hat on the portfolio and technology team, focused on making applications intelligent and deploying them in cloud environments with DevSecOps in mind. And with that, I'm handing over to you; the floor is yours. Enjoy the session.

Thank you, Mike. Hey, everybody, I hope you're all keeping well. In today's session we're going to talk about how AI/ML and Kafka can come together to help you build a sentiment analysis application. We'll look at how AI/ML, data, and Kafka can help you build powerful intelligent applications, and then bring in an event-driven architecture, throwing in Knative and CloudEvents, to flesh the whole solution out and make it truly event-driven. Then we'll have a demonstration at the end. We plan to spend a lot of time on the demonstration, because we invite you to be part of it as well. The demonstration is all about how customer reviews on a retail website need to be moderated, as well as analyzed for a sentiment rating, a sentiment score.

Before we get to the demo, a little bit of background on what the solution is all about, how we arrived at it, and how we picked the technologies. We know that data is like gold dust, especially data which is organized and which you can make sense of. Data helps you make decisions with confidence, and it helps organizations as well as their external customers. AI/ML helps make that data more productive: it simplifies extracting value from the data in an automated fashion. For example, you can learn about customer behavior and thereby improve the customer experience. You gain a competitive advantage because you understand your customers better. You can automate some of your business processes, leading perhaps to increased revenue and cost savings. And overall you have a much better solution, because you actually understand the data and are able to derive meaning from it.

Here are a few examples of AI/ML use cases. I'm sure a lot of you already know quite a few of these, or could even add to them; feel free to use the comment section to share use cases where you have used AI/ML and data together. In the retail industry, you might use customer behavior and patterns to decide on particular promotions. In a manufacturing plant, inference on the data can help you change processes, identifying machines which are failing or products which are not up to standard. Fundamentally, you get data and you run inference on it so that you can make qualified decisions on top of it. Now, there is no disputing that AI/ML and data are critical for organizations. But imagine what can be achieved if you can respond to the data in real time, or at least as close to real time as possible.
This is easily possible with Kafka, which is a distributed streaming platform. In the last session, I think they spoke about Kafka at length with regard to it being a real-time data streaming platform which can handle very high throughput. What does this mean from an AI/ML application standpoint? It means you can ingest and process data as it is generated, and your AI/ML applications can make decisions, and derive insights, in real time or as close to real time as possible. Kafka's distributed architecture also helps you handle large amounts of data, which is essential for AI/ML workloads. We also know that data arrives from various sources and streams. Kafka can act as a central data integration hub, allowing you to collect data in various forms from multiple sources, consolidate it, and use a variety of technologies to enhance it, providing a unified stream for your intelligent applications. With event triggering of AI/ML processes, you can react to changes in the data, which again helps you make real-time decisions. Kafka's messaging model makes building an event-driven architecture that much easier. And of course data durability, retention, replay, being able to rewind and go back in time: all of these Kafka features also let AI/ML applications access historical data whenever they need it.

Now, event-driven architecture could potentially be a little bit daunting. How do you set up your system? How do you build your application? How do you even connect with Kafka? Do we have to learn to consume and produce through Kafka streams, when a lot of application developers have primarily been using HTTP, the fundamental protocol? So you might be wondering whether it is going to be hard to move onto a Kafka-based data streaming platform. But when we bring in an event-driven architecture based on Knative, which Red Hat OpenShift Serverless is built upon, it takes away the semantics of streaming. As a developer, you can continue to focus on what you typically know, just using HTTP endpoints, which is basically what Knative services let you do. Knative Serving and Knative Eventing help you bring all of those various resources, the things, the endpoints, into a network of systems which can talk to each other using CloudEvents. CloudEvents is nothing but a protocol, an open specification if I may, for transmitting events from system to system. In the demonstration, we will show you how we have used CloudEvents, how Knative plays a role in the sentiment architecture, and how all of this comes together.

Now, from Red Hat's perspective, we provide a fundamental platform plus application services for you to build, manage, and deploy all kinds of applications, from traditional applications to serverless applications to integration, APIs, data, and all of that we spoke about, through Application Foundations as one set of technologies and OpenShift Data Science as another. So you can cover a whole gamut of technologies.
The gamut of things that you need your applications to do: integration, intelligent applications, streaming, Camel, serverless, DevSecOps, CDC, and all of that. The entire portfolio helps you build out a robust platform on which to realize your end applications. I would also like to introduce OpenShift Data Science, which is a hybrid MLOps platform. It helps bring data scientists, platform engineers, developers, and DevOps people together to build out whatever you need from an MLOps perspective. It provides model development features, gives you model serving and monitoring out of the box, and lifecycle management so that you can have repeatable data science pipelines. You can also integrate it with your DevOps pipelines so that the models you generate and deploy can be delivered across the enterprise. Combining Red Hat components, open source software, your own software, and a huge base of partner software, you can bring all of that together to increase your capabilities and collaboration.

Okay, that was a lot of talking, and I think there are seven more minutes of talking, so we figure a demo is worth, well, you can count the zeros, many thousands of words. What we are going to introduce today is a fictitious retail company that we call Globex. Globex is on a digital transformation journey. They have experienced great success with their microservices architecture, they have introduced streaming technologies in certain areas of their business, and they have partners and APIs and all of that. Now they would also like to understand more about their own customers. What are the customers like? What do they want to see at different points in time? For example: we introduced new products last month; are they performing well? Is customer sentiment going up, going down, or staying the same? Are the various pricing points making any impact on the actual business? So what they would like to do is look at the reviews that customers post on their website and act on them. Once you introduce reviews of products, you of course want to moderate those reviews to ensure there is no misuse of language, and perhaps also check that certain private information is not being shared online. Then they want to build out a sentiment analysis dashboard: look at the language and the sentiment of the reviews and come up with a score for the various categories of products that Globex offers. So they bring in Kafka and AI/ML to build a platform where the reviews can flow gently down the stream until they reach the dashboard.

This is an overview of how we have brought all of the pieces together. Starting right over here with the application: the user submits a review, and the review flows as HTTP. Like I said, even though it is eventually going to be consumed via Kafka, it still flows as HTTP because of the underlying Knative Eventing framework.
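To make that concrete, here is a minimal sketch of what submitting a review as a CloudEvent over plain HTTP can look like, using the Python cloudevents SDK. The event type, source, payload shape, and broker address are illustrative assumptions, not the exact names used in the demo:

```python
import requests
from cloudevents.http import CloudEvent, to_binary

# The CE attributes identify the event; a Knative broker routes on them
# via trigger filters. Type and source here are made-up placeholders.
attributes = {
    "type": "dev.globex.reviews.submitted",
    "source": "globex/web-ui",
}
data = {"product_id": "329299", "rating": 4, "text": "Great bag, sturdy zippers."}

event = CloudEvent(attributes, data)
headers, body = to_binary(event)  # CE attributes become ce-* HTTP headers

# POST to the Knative broker ingress (a typical in-cluster address shown);
# Knative Eventing takes it from here, so the producer never needs to
# speak the Kafka protocol directly.
resp = requests.post(
    "http://broker-ingress.knative-eventing.svc.cluster.local/globex/default",
    headers=headers,
    data=body,
)
resp.raise_for_status()
```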
And once that review gets into the AMQ Streams platform, the Kafka platform, you can have multiple consumers. One of the consumers starts analyzing the review and pushes that information into a time series database. We have picked InfluxDB as the time series database, and once the data has passed into InfluxDB, it creates all of those buckets and series and points. That information is then used by Grafana to build out a sentiment dashboard. Meanwhile, in parallel (I can't show it in parallel, so I'm showing it sequentially), the same message also passes to a service which analyzes whether the language is acceptable; if it is, it gets processed, and then you start seeing that message downstream.

Now, that is the application architect's perspective. When you build out an architecture like this, AI/ML plus intelligent applications, with all of these services and microservices in the picture, there are different personas involved. On the right-hand side is how a data scientist would build out the services which analyze the sentiment as well as the language of the reviews. I invite Ritesh to describe that section.

Thanks, Jaya. I hope my screen is coming up now. Mike, there it is, right? So this is where Jaya brought me in as a data scientist. We are going to wear different hats here; I'll be wearing the data scientist hat and the admin hat. As Jaya mentioned, we have Red Hat OpenShift Data Science. That's where, as the data scientist, I will go and start a server, a Jupyter notebook server for example; I'll train my model, build my model, and then push that model into our repository, the intelligent app source code repository. Now, I'm trying to build MLOps here, using a GitOps methodology. The moment I check in a new model, the whole Tekton pipeline, OpenShift Pipelines, kicks in. It gets triggered, clones the repo, builds the new image, and scans it for a lot of things: it will scan for vulnerabilities, check whether the change is coming from the right developer, whether it's a trusted source, a trusted party; those kinds of things are done as part of the image scanning. Then the image gets created and tagged, and we store it in our Quay image registry. We also push it into the Dev environment, so that we can test that everything is working fine in my Dev environment, again on an OpenShift platform. Now, this is the whole continuous integration pipeline. And at the end, I do not push anything into production directly. I update a manifest, which is managed by Argo CD separately as part of the continuous deployment, or continuous delivery, side of the GitOps process, so that I completely isolate my Dev process, the CI, from the continuous deployment process, which is Argo CD. Now what Argo CD will do is say: okay, here is my manifest for the prod environment, which is different from what is actually deployed in prod.
Because I changed the image tag: there's a new image in the Quay registry, and that change lands in my manifest repo. So Argo CD runs its reconcile loop, sees that there is a change in the image, and moves the new image, the new model, into the production environment. And that's the whole lifecycle, which you don't have to manage as a data scientist. You just manage your own development, and that kicks off the whole gamut of the CI/CD process and brings in the MLOps which everyone is talking about nowadays.

So let me quickly show you RHODS, Red Hat OpenShift Data Science, and the JupyterHub I was talking about. Here I have Red Hat OpenShift Data Science configured in my environment. As you can see, I can go and launch JupyterHub here. We already have predefined notebook images, and you can of course bring in your own custom images as well. Let's say I pick the standard image and just say start a server. This will start a server for me; I'll cancel it for the sake of time, because it takes a minute for the server to come up. But there are a lot of things you can explore. A few things are already there as part of the applications, and there's a whole marketplace for OpenShift Data Science, so you can do a lot. You can set up a data science project, you can work with Kubeflow along with the integrated Tekton pipelines, and you can do model serving, so you can just check in your model code and it will be served to your applications. That's all possible in OpenShift Data Science, and as Jaya mentioned, it's available as a managed service or on-prem; both options are there for you. There are many, many things you can do here, and a lot of resources available for you to walk through and see how you can use the RHODS platform for your AI/ML application requirements: how quickly you can onboard your data scientists and move to actually hosting your models, and then also manage the MLOps for your models, how quickly you can move a new model into production, right from your JupyterHub.

And this is the JupyterHub where I had already created a server. Here you clone your repo, which I have done, and you get a pre-configured, image-specific notebook environment. If you want PyTorch or TensorFlow, those images are there. You just start the notebook, mention the GPUs you want, and it will set up the GPUs for you as well, NVIDIA GPUs if you want them, and then you start processing. It's very simple; you can do a lot of things in Jupyter, as the data scientists of course know. It's something like the equivalent of VS Code, which an application developer generally uses in their environment. I know a lot of data scientists use either of the two, but this gives you a very good flow in terms of how it works.
And this is basically my process, a sentiment analysis where I'm using a Hugging Face based model. As you can see here, this is the model I call: a BERT-based multilingual uncased sentiment model, which analyzes the text. You will actually be participating here: you will be sending in text, comments on the products, through the fictitious company's UI, and those comments will go through this particular model, which is hosted on OpenShift and will process them. It will also analyze and moderate the language. So we have multiple Kafka topics into which the data gets pushed, and it is then pulled by InfluxDB, which goes and populates everything into the Grafana dashboard; we'll talk about that quickly as well. So this is what I wanted to show from the MLOps and GitOps perspective, as well as how data scientists can play their role here; it makes their life very easy. So over to you, Jaya, back to the application side of the discussion, before I walk through the Grafana dashboards. Let's let Mike quickly get the dashboard screen up for you.

Cool, thank you so much, Ritesh. Now we switch hats again and look at how the sentiment architecture has been put together, all the application side of things. How do you bring all of this together once the model has been deployed on that self-same OpenShift? OpenShift provides a platform for you to build out everything you need to realize this sentiment analysis application. What we have here are the AI/ML applications running as Knative services, which means they can scale up, and scale down to zero, and all of that. And if you look at these, these are the pieces from the event-driven side of the Knative family which help you talk to Kafka through a broker. So you have a Kafka source, and you have various components through which you talk to Kafka without necessarily having to speak to Kafka directly. But just because we're introducing Knative services and Knative Eventing here does not mean you cannot talk to Kafka directly: you can have different use cases which want to talk to the underlying Kafka streaming platform directly, depending on your developers, your architecture needs, and all of those parameters for a particular application.

All right, then we have a database, and in this case we are using a Python service. As I said, this application has been going on for a while, so there are different features to it. That is why this particular project, the sentiment analysis namespace, only shows the services necessary for us to build the analysis of product reviews and the moderation. There are many other services we are using: the Globex services have all of the underlying databases for your category, your inventory, typically everything you would need for the application. And then there is a middleware project which has everything you need for building out the Kafka services and all of that.
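On the model side, a rough sketch of the sentiment scoring Ritesh described might look like the following. A BERT-based multilingual uncased sentiment model on Hugging Face, such as nlptown/bert-base-multilingual-uncased-sentiment, predicts a star rating from 1 to 5; the model choice and the star-to-score mapping below are illustrative assumptions, and the demo's exact scoring convention may differ:

```python
from transformers import pipeline

# This model predicts labels "1 star" through "5 stars" for review text
# in several languages. The model choice here is an assumption.
classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

def sentiment_score(text: str) -> int:
    """Map the predicted star rating to a signed score (assumed convention)."""
    result = classifier(text)[0]             # e.g. {"label": "4 stars", "score": 0.61}
    stars = int(result["label"].split()[0])  # "4 stars" -> 4
    return stars - 3                         # 1..5 stars -> -2..+2

print(sentiment_score("Great bag, sturdy zippers."))   # positive score expected
print(sentiment_score("Broke after one day, awful."))  # negative score expected
```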
All right, now that we have seen this, I'm just going to launch this application here, which I have already done. This is how the page looks. As on any such page, you can provide a review and submit it, but for that you need to log in. And for this purpose we are actually going to ask for your help right now: please take out your phones and scan the code on screen, or I can share the URL as well. Yeah, I shared the URL in the chat, so you can just check it out; let me post the URL separately too. You have to use a username: just launch that URL, provide any one of those usernames and passwords, and then you can log in. Once you log in, start typing your reviews. So let us have a look. I wanted to show the blank screen first... yes, the blank screen has started filling up with some data. Okay, great. So I'm going to stop sharing my screen and hand it over to you, Ritesh.

Thank you, yeah. Let the screen come up... there it is. We already have a few people pushing data in here. So once you log in to the screen, you select a product and just comment whatever you want, good or bad, and it will pass on and go to a topic called globex.sentiment. Oh, there it is, see? It's a live demonstration. This dashboard shows me the Globex customer sentiment, the overall sentiment for all the products in my organization: how many positive sentiments there are, how many negative, and how many neutral. So this is about all the products as an organization, how customers are feeling about our organization. Now, coming to the categories: I have different sets of categories here, like clothing and bags, and I see that most people have given customer feedback on clothing. For example, if I zoom in here, I see that for the clothing category on this row, a lot of comments have come in, and that shows live in Grafana here. It shows me the sentiment view, and the last sentiment for a specific product; sometimes I just want to know what the last sentiment was. Then this is the bags part; for bags we don't have much to show right now, but let me just go back to the five-minute window. Yeah, here it is; these are for the bags. I see there are a few sentiments cooking: some of the participants online are giving us feedback here, and we have something like 46 plus 7 plus 3 reviews that have come in through this live session. So thank you all for participating and providing your feedback. That's great.

So this is the dashboard. If I look here, these are all the Kafka topics I have. All the input goes into this Kafka reviews topic; everything comes in here, bad-language messages included, and it gets analyzed by the two AI/ML models that kick in: one does the moderation, and the other does the sentiment analysis.
If it is bad language, the review gets pushed into the reviews-denied topic. If the language is okay, it gets into the moderated-reviews topic, and then the sentiment of the specific message gets into the reviews-sentiment topic. For example, in this case it goes here, and you can see these are all reviews. If I select a specific message, it gives me a score and whether it is a positive or a negative scenario. That is the score you see here for some of the comments: for example, if I go to clothing here, you see there is a three, a zero, and a minus one, which is a negative comment on that specific product. So that's how I can have a dashboard with a good view of this. And Kafdrop tells me what kind of messages are coming in and which topics are working.

Then you also have observability functionality sitting in OpenShift. You go to OpenShift, there are metrics, there are dashboards. I go to the dashboards and get a populated dashboard; for example, these are all the API performance numbers on my OpenShift. We are doing eventing-based, CloudEvents-based triggering here, the broker-trigger scenario, so you see I have a lot of events coming in, and it shows me idle time and how it's performing: the ingress part with the event count by response code, and here by event type. I can select a lot of other things; for example, here is the Kafka sink, and again there are event counts by event type, because, as Jaya mentioned, we use a Kafka sink through which the data enters as part of the comments you push in. So that's the Kafka sink and how it's behaving right now; you see this in real time. And not only this: you can also integrate with OpenTracing and have all your developers doing tracing for their own specific applications, to see how they behave at a deep-dive level in the code; you can use Jaeger, for example, for the OpenTracing part.

Another thing I want to show you here is that we have all our deployments sitting right here. We have the InfluxDB database, Grafana, and a connector which copies everything from Kafka to InfluxDB (roughly sketched below). Then we have the sentiment analysis project: this is where we have our moderation code running, and the product reviews simulator in case you want to simulate data. This is my moderate-reviews deployment and this is my sentiment-analysis deployment. If I select one and go to the pods, the pods show me what kind of events are getting into Kafka and being analyzed, and what the outcome is; that's all generated here as part of the logs as well. So that was all about how an administrator can view this. That's what I wanted to show here, and I'm moving it back to Jaya for the concluding notes; maybe Mike can share the slides for Jaya as well. Thank you.

Jaya, I think we lost your voice. Yeah, that was fixed by the click of a button; I had muted myself, thanks for that. Oh, what I was saying is: Rafael, I'm sorry the page was a bit slow; I hope you can read the answers later.
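The connector Ritesh pointed at, the one copying review sentiment out of Kafka into InfluxDB, can be pictured as a small consumer loop like the one below. The topic name, connection settings, credentials, and message shape are illustrative assumptions, not the demo's actual configuration:

```python
import json

from kafka import KafkaConsumer                    # kafka-python
from influxdb_client import InfluxDBClient, Point  # influxdb-client (InfluxDB 2.x)
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder endpoints: AMQ Streams exposes a bootstrap service per
# cluster, typically named <cluster>-kafka-bootstrap.
consumer = KafkaConsumer(
    "reviews.sentiment",
    bootstrap_servers="my-cluster-kafka-bootstrap:9092",
    group_id="influxdb-connector",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

client = InfluxDBClient(url="http://influxdb:8086", token="<token>", org="globex")
write_api = client.write_api(write_options=SYNCHRONOUS)

for msg in consumer:
    review = msg.value  # assumed shape: {"category": ..., "product_id": ..., "score": ...}
    point = (
        Point("review_sentiment")               # measurement name (assumed)
        .tag("category", review["category"])    # tags drive the per-category panels
        .tag("product", review["product_id"])
        .field("score", review["score"])        # the signed sentiment score
    )
    write_api.write(bucket="reviews", record=point)
    # Grafana then queries the "reviews" bucket to render the dashboard.
```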
And so just one thing I would like to add from an application perspective: like I said, each of these events is shared from one system to the other through what are called CloudEvents, and that is achieved via headers on each of the messages. You can see right over here the ce-source, the ce-type, the ce-id, the timing, and all of that, which helps you trace your messages back and also deliver each message exactly where it should go. And if you go back to the architecture, or the deployment from earlier, and look at the broker, each of these triggers responds to different things: the source and the type get matched, and depending on the source and the type, the various services get called. You might be wondering why there are so many messages; that is because we did generate a few of them ourselves, to be able to show you how the Grafana dashboard can, close to real time, pull all the data, crunch it, pass it through, and then show it on the dashboard.

All right, so with that we come to the end of the discussion. What we would like to leave you with is the code base and the instructions for you to try this out. You can take a picture of this, and we will also leave a link to this page; we have a link handy for this solution pattern. If you could drop it in the chat window, that would be great. The deck is right over here; let me share the link to this deck as well, so you can go ahead and access the deck, the instructions, and all of that. So here you go. This deck has, obviously, the deck itself, and it also has everything you need to try this in your own environment: the architecture sketches, the code base, the images, all of that you can access through it. And to learn more about data science, as we already said, we have a sandbox and everything else there as well. Okay, so that's about it. Yeah, so if someone wants to deploy it, they can just go ahead and deploy this in their own OpenShift environment. Yeah, I think that's very important. Plus you have an Ansible Playbook; just from that, you can try out the whole thing, and the instructions and all of that are there in that repo. So, questions please; I think we have five minutes before we close.

Yes, that's right. Thank you very much for the presentation. We have a couple of minutes left, and I don't see any questions at the moment in the chat, but if there are any, feel free to post them now, quickly, before we wrap up the session. In the meantime, let me thank you, Jaya and Ritesh, for this presentation, perfectly delivered; thank you very much for that, and also for the demo. Like I said at the beginning, we will share all the slides and all the materials in case you didn't get the chance to take a photo or note down the link; we will share that out, no problem. What's next? In about 10 minutes we'll have a break, or you can join the chat; there will be a chat discussion where you can participate. And then, half an hour later, the next keynote will start, so you're more than welcome to join that too. I don't see any questions, so I guess we'll just wrap it up. Again, thank you very much, Jaya and Ritesh, for the presentation, and thanks everyone for attending.
And I wish you all a great event and a great day. Have fun. Thank you very much. Thanks everyone. Goodbye. Have a good day.