Hello everyone. My name is Karan Singh and I'm a senior architect in the Red Hat Storage Business Unit. Today we are going to demo a Twitter streaming and sentiment analysis app, showcasing technologies like Red Hat AMQ Streams (which is Kafka), MongoDB, and a few others, all backed by OpenShift Container Storage running on top of OpenShift Container Platform. The ingredients for the demo: OpenShift, of course, is the base platform we are going to use. Storage is provided by Red Hat OpenShift Container Storage. The app stack looks like this: AMQ Streams (Kafka) for stream ingestion, Python for the backend API, a JavaScript front-end app, and MongoDB for the NoSQL data collections. Externally, we are going to consume Twitter feeds in real time, and we are also going to use the Aylien service for text processing. Under the covers, the app looks like this. Through the front-end, a user comes in and says: I want sentiment analysis of my Twitter keywords, and here are my keywords. The user enters those keywords into the front-end app, which triggers the backend API. The backend contacts Twitter and starts filtering tweets as per the user's request directly into AMQ Streams, which is backed by OpenShift Container Storage; AMQ Streams persists all the tweets in real time. The second action from the backend API is to move the tweets from the Kafka topics into MongoDB, the NoSQL database running on top of OCS. The third action involves rendering some charts; we are going to see this in a few minutes. And the fourth action from the backend API involves connecting to the Aylien service for some text processing and a few visualizations using chart.js. By the way, you can run this whole demo at your convenience. At the end of this presentation, I'm going to share the GitHub/GitLab URL that you can use to demonstrate this for customers, for your friends, or at community events.
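The four backend actions described above can be sketched end to end in plain Python. This is a minimal, self-contained mock (no real Twitter, Kafka, Aylien, or MongoDB clients): the function names and in-memory stand-ins are my own illustration of the flow, not the demo's actual code.

```python
# In-memory sketch of the demo pipeline:
# keywords -> filtered "tweets" -> Kafka topic -> MongoDB -> sentiment.
# All external services are mocked; every name here is illustrative only.

def filter_tweets(stream, keywords):
    """Action 1: keep only tweets mentioning a user keyword (Twitter -> Kafka)."""
    return [t for t in stream if any(k.lower() in t["text"].lower() for k in keywords)]

def topic_to_mongo(topic, collection):
    """Action 2: drain the Kafka topic into a MongoDB-like collection."""
    while topic:
        collection.append(topic.pop(0))

def sentiment(text):
    """Action 4: toy stand-in for the external text-analysis call."""
    positives, negatives = {"love", "great"}, {"crash", "hate"}
    words = set(text.lower().split())
    score = len(words & positives) - len(words & negatives)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Simulated incoming stream of tweets
stream = [
    {"text": "I love the new Amazon feature"},
    {"text": "Chrome crash annoyed IT admins"},
    {"text": "Lunch was fine"},
]
topic = filter_tweets(stream, ["Amazon", "Chrome"])   # Twitter -> Kafka
collection = []
topic_to_mongo(topic, collection)                     # Kafka -> MongoDB
results = [(t["text"], sentiment(t["text"])) for t in collection]
```

In the real demo each arrow is a network hop (Twitter API, Kafka brokers on OCS PVs, MongoDB, Aylien), but the data flow is the same shape.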
Under the covers, let's talk about how OCS provides persistent storage across this demo app. Starting with the Kafka cluster: we have a three-node Kafka cluster, and Kafka is backed by RWO PVs from OCS. Kafka needs ZooKeeper, and ZooKeeper needs storage too, so we are also providing PVs to the ZooKeeper cluster through OCS. The third element of the distributed messaging service is monitoring, because monitoring is a key part: Prometheus and Grafana both require some sort of storage, so we have provided PVs to Grafana and Prometheus as well. Finally, the database service, which is our MongoDB, also requires persistent storage to be fault tolerant, so we are using another PV for the database service. This is how we are relying on OCS in a real-world kind of application to provision persistent storage. The deployment steps look like this. We'll first run some prerequisite checks: do I have OCS, and is it healthy? We then start by deploying the Kafka service on top of OCS. We'll then move on to deploying the database service, MongoDB, on OCS. The fourth step is to deploy the backend API service, written in Python, on OpenShift Container Platform. Then we'll deploy the front-end for that backend, based on HTML and JavaScript. And finally, we'll do some interaction with the app. All right, demo time. Let's do it. I'll switch to my dashboard. This is my OCS 4.2 dashboard, and step number one is to verify the storage. It's doing good: the cluster is healthy and the storage cluster is OK. I'll quickly switch to my CLI. Let's go into the project and run just one or two commands to make sure my cluster is doing good. All right, we are in the pod, and yes, my cluster is healthy, and I do have a six-OSD cluster. Everything is OK. Cool.
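The per-component storage layout above can be tallied up in a few lines. The counts come straight from the narration; the dict layout itself is just my bookkeeping, not anything the demo runs.

```python
# OCS-backed PersistentVolumeClaims used by the demo, per component.
# Counts are taken from the narration; the structure is illustrative only.
pvcs_per_component = {
    "kafka": 3,       # one RWO PV per Kafka broker
    "zookeeper": 3,   # one per ZooKeeper node
    "grafana": 1,     # dashboard data and logs
    "prometheus": 1,  # metrics storage
    "mongodb": 1,     # fault-tolerant database storage
}
total_pvcs = sum(pvcs_per_component.values())
```

So the full stack ends up consuming nine OCS-provisioned claims.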
We then verify our storage classes; we need to make sure OCS is the default storage class. `oc get sc` should show my default storage class, and that should be OCS. It should be here in a moment... and yes, the OCS storage class is my default. Good, so that check is complete. Let's move on to the next section: deploying the distributed messaging service, Kafka. I'm going to start by creating a new project, as any developer would. So let's create the project, and now we have it. We also need to verify: do we have a Kafka operator in this project? Under installed operators, we don't have any, so we'll quickly go and install the Kafka operator. We go to OperatorHub, search for streaming, and pick AMQ Streams. It says it is installed across the cluster, so I think we are good here. Let's move to the next step: we'll now deploy the Kafka cluster. `oc apply` with the Kafka cluster file will do it. I'm going to pause this video just to save time. We'll be right back. My Kafka and ZooKeeper clusters are up and running. The next step is to deploy the Prometheus and Grafana dashboards. We'll deploy Prometheus, and once it is done, we're going to deploy Grafana. Prometheus and Grafana will provide us the monitoring capabilities: fetch the metrics from Kafka and do some visualizations. This should take some time, so I'll pause the video. Now we have the Prometheus and Grafana services up and running. As we speak, we have the Kafka cluster, the ZooKeeper cluster, Grafana, and Prometheus. So let's verify how many OCS PVs we have spawned so far. I'm switching to my terminal, and as you can see, in this project we have the storage class set to the OCS RBD storage class, and we have three PVs for Kafka, three for ZooKeeper, and two more for Grafana (data and logs) and Prometheus. These are the PVCs.
If I check for PVs in this particular project, it's the same output, but these are the PVs and the other ones were the PVCs. So I think we are all set here. The next step is to link Prometheus to Grafana as a data source and add a few dashboards. I'm going to execute my script, which should do this for us. And yes, it's creating the data source... the data source is done. This should take a few minutes. We're back: the Prometheus and Grafana linking has been completed. We will grab the route URL to the Grafana instance, and once we open it in the browser, we should be able to see the Kafka and ZooKeeper dashboards. This is my ZooKeeper dashboard, and the previous one was the Kafka dashboard: live data coming from the clusters. So we are done deploying the Kafka service together with monitoring. The next step is to deploy the database service with MongoDB. We'll create an OpenShift template, and using the template we will deploy the MongoDB service. Get the template... yes, the template is there. We're going to deploy a new app with a few parameters, which will launch our MongoDB service. `oc new-app` is the command to do it, and I've used it with my database name and password. The app is there. Do we need to expose this app? No, we don't, because it's a data service. So let's check: we should see our MongoDB deployment, and after that we should see the MongoDB pod itself. MongoDB is now coming up, which will provision an OCS PV. Once the MongoDB service is up, we should see a PVC claimed by MongoDB against my OpenShift Container Storage. And yes, MongoDB is now using OCS. Next, we will exec into the MongoDB pod, try to connect to our database, and add some records just to verify things are doing good. I'm connected to MongoDB with my username and password. The next step is to write a single key-value record to MongoDB.
And the record is written. Let's try to fetch the record just to verify everything's OK. Yes, I can read and write to my MongoDB instance. Good, let's move to the next step. We will now deploy our Python backend API service. To do that, first look at the YAML file, a simple OpenShift deployment file which pulls my container image from Docker. It sets a lot of parameters for Twitter, for the Aylien service, and for the MongoDB and Kafka instances, like where to connect for MongoDB and where to connect for Kafka. This is all configured here. If you're following along with this demo, this is the file, and this is the section you need to edit with your keys and your details. So let's go and run the command: `oc apply`. We will apply it and then expose the backend service. The service is now done, and `oc get pods` should show our pod launching. In the background, we can start watching the pod. The container is coming up; let's switch to the other shell and tail the logs of this backend service. Our backend service is up and running, these are its logs, and it's listening on port 8080. And so is the container: the backend container is running. Everything is set on the backend side. At this point, we have completed Kafka, the database service, and the backend service. The last step is to deploy the front-end service so that we can start interacting with the app. I'm going to deploy my front-end app; here's the command. `oc new-app`, front-end is the name, and I'm pulling my Docker image. We will expose it and wire it up to our app. This should take some time... and our front-end service is up and running. Here's the pod; it's running. We will now grab the external route of the front-end service so that we can start browsing it.
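The parameters that the backend's YAML file injects (Twitter keys, Aylien credentials, Kafka and MongoDB endpoints) would typically reach the Python process as environment variables. Here's a hedged sketch of how such a backend might read them; the variable names and defaults are my guesses for illustration, not the demo's actual ones.

```python
import os

# Illustrative environment variable names and defaults -- the real deployment
# file defines its own; edit these to match your keys and endpoints.
DEFAULTS = {
    "KAFKA_BOOTSTRAP": "my-cluster-kafka-bootstrap:9092",
    "MONGODB_HOST": "mongodb",
    "MONGODB_PORT": "27017",
    "TWITTER_API_KEY": "",   # must be filled in by the user
    "AYLIEN_APP_ID": "",     # must be filled in by the user
}

def load_config(environ=os.environ):
    """Merge deployment-provided env vars over the defaults."""
    return {key: environ.get(key, default) for key, default in DEFAULTS.items()}

# Simulate what the pod would see after `oc apply` injects its env section
cfg = load_config({"MONGODB_HOST": "mongodb.demo.svc"})
```

Keeping all endpoints and secrets in the env section of the YAML is what makes the "edit this section with your keys" step the only change you need.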
So here's the route of the front-end service; let's grab it, this one. I'm going to open a new tab and browse it. This is the landing page of our front-end app running on top of OpenShift. The way it works is that we need to provide a few keywords separated by commas. I'm going to put in something just to test it: Amazon, Google, Microsoft. Before we hit continue, let's switch to our backend API. The backend is not doing anything yet, because we have not instructed it to. So let's continue with these three keywords. This is the control center, and we'll go over it step by step. First of all, as per the plan, we'll start streaming from Twitter into Kafka. From there, we'll move the data to MongoDB, and then MongoDB will use the Aylien service to do some sentiment analysis. These are the five or six buttons that do it. First we'll enable Twitter-to-Kafka for those three keywords. Let's start it. Twitter to Kafka, there you go; it should take some time, and we should keep monitoring our backend. There you go: in real time, we are streaming tweets from Twitter into our Kafka stream. This is slightly slow; I need to add some more logic to make it more frequent. As you can see, we are continuously hitting this. OK, let's switch to our Kafka dashboard, that would be more interesting. Now, good, you can see the spikes, right? We are fetching not much, about 10 messages per second, and if I set this to the last five minutes you can see it clearly. So we have started fetching data from Twitter into our system: 10 or 11 messages per second. This is not much, because it's a small cluster we are playing against, but you get the idea: we are fetching the data. The next step is to move the data from Kafka to MongoDB. So let's hit this button, Kafka to MongoDB.
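Conceptually, the Kafka-to-MongoDB button starts a reader that drains the topic into the database. A toy sketch of that idea follows, using in-memory lists instead of real Kafka and MongoDB clients; the class and names are mine, and I've added offset tracking (a standard Kafka consumer concept) so that repeated polls only pick up new messages.

```python
class TopicReader:
    """Toy consumer: remembers its offset so each poll returns only new messages."""
    def __init__(self, topic):
        self.topic = topic   # a plain list standing in for a Kafka topic
        self.offset = 0      # position of the next unread message

    def poll(self):
        batch = self.topic[self.offset:]
        self.offset = len(self.topic)
        return batch

topic = ["tweet-1", "tweet-2"]
collection = []                    # stands in for the MongoDB collection

reader = TopicReader(topic)
collection.extend(reader.poll())   # first poll copies both existing messages
topic.append("tweet-3")            # a new tweet arrives in the topic
collection.extend(reader.poll())   # second poll copies only the new one
```

A real consumer would commit its offsets back to Kafka so the position survives restarts, but the mechanics are the same.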
It should start writing the data. As you can see, it's fast, because we are moving from a local service to a local service. So it's writing data to the database. We'll do some chart rendering by clicking on this button. You can see we are getting some tweets. At the moment it is redundant, because we are re-reading the queue, but you get the idea: we are fetching tweets in real time from the internet and moving the data into our database. This runs in a loop, so don't worry about this error message; it's just complaining that the execution is still going on. Just ignore that. Anyway, we are capturing the tweets and reading them. Very little traction on Microsoft; Amazon and Google are doing much better. The next step is to pull the data from MongoDB and send it to the Aylien service for text analysis. Let's click on the third button, sentiment analysis. It should start doing sentiment analysis of the tweets: it will connect to the Aylien service and do some text processing. Look at this output: we have calculated sentiment for 14 tweets, and it's still going on. Look at the output from the Aylien service. Here we have captured the keyword Microsoft, we have done the polarity test, and the result is neutral for this specific tweet. The next one's keyword is Google, and this is the tweet message: a silent Chrome experiment crashed thousands of browsers and annoyed IT admins. The polarity is still neutral; I don't know why, but it's neutral. So this is the Aylien service we are hitting. And now let's do some sentiment analysis charting for these tweets. OK.
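The polarity labels shown here (positive / neutral / negative) are typically derived from a numeric sentiment score. Below is a hedged sketch of that bucketing step; the 0.1 neutral band is an illustrative threshold of my choosing, not the Aylien service's actual rule, which the demo doesn't reveal.

```python
def bucket_polarity(score, threshold=0.1):
    """Map a numeric polarity score in [-1, 1] to a chartable label.

    The +/- 0.1 neutral band is a made-up illustrative threshold; a wider
    band explains why mildly negative headlines can still come out neutral.
    """
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

# Example scores for three tweets -> labels fed to the chart.js rendering
labels = [bucket_polarity(s) for s in (0.6, 0.05, -0.4)]
```

Counting these labels per keyword is exactly the data shape the sentiment chart in the next step needs.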
So the analysis is still going on, but this is the sentiment analysis of the tweets: we have the three keywords, broken down into positive, negative, and neutral. It's not bulletproof right now, but you get the idea. We are basically done with the demo at this point. What we have done is: in real time, we captured tweets from Twitter for our favorite keywords into Kafka; we then moved the data from Kafka topics to MongoDB; and from MongoDB we used an external service for text analysis. You can definitely replace that with your own business logic, but that's for another demo. We then used chart.js to render some charts in real time and make our user happy. And this is all running on top of OpenShift Container Storage. All right, guys, this is all we have for the demo today. Thank you so much and have a nice day.