As Jaya mentioned, this is going to be an extension of the same storyline we built as part of the retail use case. This is the overall session we are going to walk through. We'll be using Argo CD again — OpenShift GitOps is the Red Hat product, and the open source tool behind it is called Argo CD — and we'll use it to control and customize our Kubernetes cluster and applications. The use case this time is MLOps, so we'll actually see how GitOps can be leveraged as part of MLOps. How many of you have heard the term GitOps? Right, so it's actually a framework, and it consists of a set of tools. It's not a product. OpenShift GitOps is a product from Red Hat; the open source version is called Argo CD, and that's mostly for doing your continuous deployment. A GitOps framework will also include tools like OpenShift Pipelines, which is Tekton, and you can incorporate Jenkins as part of your GitOps framework as well. Then there are other tools that come in — for example, Ansible: if you're doing infrastructure as code, Ansible is part of your GitOps strategy too. So don't think of GitOps as a product. It's a framework and a set of tools you use to adopt the best practice where everything you do flows from a single source of truth: your Git repository. Whether it's infrastructure, deployment of your applications, or anything else, it needs to be recorded. There should be a single source of truth. You go and update that specific repository, and that triggers the whole CI/CD flow and implements whatever is recorded in that single source of truth. So if someone wants to leave, and they're not happy with the way things have been in the organization, they might just delete stuff and go.
What Argo CD will do is reset those things back in, so you don't have to worry about any damage done by anyone — including plain mistakes, which happen in infrastructure and production environments. A long time back, I made the mistake of rebooting a production server. It always happens. But Argo CD takes care of the continuous deployment part, so these mistakes are taken care of: you overcome them, or you're quickly back in business. That's the advantage of the GitOps strategy — everything is there as part of the source — and we'll see how that helps you overcome these kinds of mistakes, or some of the threats. I'll extend the story I mentioned, about the fictitious retail company, the Globex e-commerce company. We'll see the GitOps MLOps storyline using that as a use case, so you can correlate it with how the strategy works in real life. We'll do sentiment analysis — Jaya already showed you that — which is something we leverage with intelligent applications in this case. Then we use object detection; that's the retail coupon application we're going to use here. And then we'll have a demonstration. But before that, I want to show what exactly we are going to do. For example, I have this application hosted here — you can see me right there on the screen. I take a picture of myself, and what the app does is detect what I'm wearing. I'm wearing a t-shirt, and it gives me a discount coupon for that. That's the idea of object detection here: it detects what I'm wearing and then offers a discount. Now, how does this story play out? Let me go back to the presentation. As Jaya mentioned, we have sentiment analysis coming in.
Now, for a specific product set in the clothing category, let's say the feedback is not good. This is how the graph looks: it says I'm having less than 60% positive sentiment. Jaya showed the Grafana dashboard; assume it shows less than 60% positive sentiment here. That doesn't look good from the product manager's point of view, because the KPI for the product manager is to have more than 60% positive sentiment — that's the business term. So what they do is review which category is having less than 60% positive responses. Based on the sentiment analysis, a request goes to the data science team to come up with a strategy. The feedback we're getting says the product is costly, so let's say we introduce discounts. Then whoever comes on site or into the store just takes a photograph. For example, this is the QR code here — don't scan it right now; I just want the demo to finish first, and then I'll let you scan it. I want to make sure everything works and that you get a proper understanding of what I'm trying to say before we get into that. So the customer takes a picture, the app identifies that it's an object which should be getting a discount, and it offers a discount. The customer is happy. Eventually, the customer provides feedback, and that feedback gets us to meeting the KPI of that product manager: we're having more than 60% positive responses, so I'm happy from the product manager's point of view. Now, that's how it works up front — what goes on behind it? We have a single source of truth, as I mentioned earlier, and that is going to be your Git repository.
And of course, infrastructure as code, the merge requests you do — the PRs and merge requests — and the CI/CD all put together form the DevOps strategy. Here we will use Argo CD. Argo CD defines your infrastructure in apps, and it has a reconcile loop that keeps monitoring the deployment, so whenever there is a deviation from the original source of truth, it will go and fix it. We will see that as well. Then there's Tekton, which does the continuous integration. How many of you have heard of Tekton before today? Okay, better than the response I had last year — it's getting good traction, I think. And Jenkins, I see even more hands for that. Now, if you truly have to fit into the GitOps strategy, keep the CI separate and the CD separate. That's what I recommend as part of the best practices to the customers I talk to. The developers should not be running the deployments, and they should not have access to, or visibility into, production. So in the CI pipeline, we grant only development access — access to the development project. The Tekton pipeline runs with that: we do the clone, build, and deploy of the image in development only. Then the best practice says you should raise a PR, and the product manager who says "I need to get this into my production" should go and approve it, provided all the checks are in place. From there it goes through Argo CD: only Argo CD has visibility into the prod environment, and the developers have none. Argo CD will go and deploy your production image into your prod environment. And that's how the new image — the new model, the new application source code — gets into your production.
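To make the prod-side CD concrete, here is a hedged sketch of what an Argo CD Application for the prod environment could look like. The repo URL, paths, and names are illustrative assumptions, not the actual lab manifests:

```yaml
# Hypothetical Argo CD Application: only Argo CD touches prod, and it
# watches the prod path of the manifest repo (all names are assumptions).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: retail-prod
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://gitea.example.com/dev-user/retail-gitops.git
    targetRevision: main
    path: prod                  # only merged (PR-approved) changes land here
  destination:
    server: https://kubernetes.default.svc
    namespace: retail-prod
  syncPolicy:
    automated:
      prune: true               # remove resources deleted from Git
      selfHeal: true            # revert manual drift on the cluster
```

With `selfHeal` enabled, a manual scale-up or a deleted deployment is reverted on the next reconcile — exactly the "fix the deviation" behavior described above.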
So that's how Argo CD helps to monitor the deployed application and fix any configuration deviations. You see here, there's the pipeline we've used. Does this work? Yeah. So we fetch the model, and we generate a tag along with that — those are the two parallel tasks running in my Tekton pipeline. An advantage of Tekton compared to Jenkins, which I had worked on, is that it's task-based: you can run just one specific task — for example, building the model, by providing its inputs and outputs — and then combine all the tasks into a single pipeline. That's a very good advantage, because I can test only the tasks I'm associated with. Then we do a sanity check. That's more of the MLOps side: I don't want the model to behave in such a way that it offers more than a 50% discount, so I want that business logic in my pipeline. I'll fail the pipeline at the sanity check if it goes beyond 50% discounting — because it's a model, it may produce an outlier which doesn't work well for us from the business point of view. Then we build the model, tag the image, set that image, and update Kustomize. What we do is update the deployment manifest — there's a deployment manifest, which is again a Git repository, and all the YAML files are there as part of that manifest. That's what we go and update, and that deploys my new model or new application code into my development environment. After that, once I'm comfortable, I update the prod YAML manifest and wait for a PR to be approved. Once that PR is approved, production will have the new code available as part of the application deployment. So that's the pipeline we follow here. There are some links I've provided here.
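As a rough sketch, the pipeline described above could be expressed as a Tekton Pipeline like the following. The custom task names, params, and workspace wiring are assumptions for illustration; only `git-clone` and `buildah` are standard catalog tasks:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: retail-model-pipeline
spec:
  workspaces:
    - name: shared
  tasks:
    - name: fetch-model                  # clone the model/source repo
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: shared
      params:
        - name: url
          value: https://gitea.example.com/dev-user/retail-dev.git
    - name: generate-tag                 # runs in parallel with fetch-model
      taskRef:
        name: generate-tag               # hypothetical; emits a build tag
    - name: sanity-check                 # business rule: fail if discount > 50%
      runAfter: ["fetch-model"]
      taskRef:
        name: check-discounts            # hypothetical custom task
      workspaces:
        - name: source
          workspace: shared
    - name: build-and-tag-image          # bake the models into an image
      runAfter: ["sanity-check", "generate-tag"]
      taskRef:
        name: buildah
      workspaces:
        - name: source
          workspace: shared
    - name: update-kustomize             # bump the image tag in the dev manifest repo
      runAfter: ["build-and-tag-image"]
      taskRef:
        name: update-manifest            # hypothetical custom task
```

Because each step is its own Task, the sanity-check rule can be run and tested in isolation before being wired into the full pipeline, which is the task-based advantage mentioned above.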
...if you want to go and read about Argo CD and Tekton. Now, moving to the MLOps perspective: how many of you have actually worked on machine learning tasks in your organization? Yeah, a few of you — many, actually. So you know that the model code is just a very small portion of making an application intelligent. It's not just the model part; it's the whole ecosystem that comes along with it: configuration, data collection, feature extraction, process management, then the serving infrastructure, then monitoring. We saw this even this morning when I was demonstrating OpenShift AI: we saw model serving and inference as a separate section; we saw the pipeline — Elyra integrating with OpenShift Pipelines, Tekton — which was automatically able to create the Tekton pipeline for you to run and get your model into the model serving scenario; then monitoring and data verification. So there are many pieces that come in, and all of them can leverage a GitOps framework for whatever you are doing — in this case, object detection. This picture was adapted from research by Sculley and colleagues, who studied how a lot of customers actually build their models and intelligent applications. What customers generally have is this set of things: AI/ML with data in Kafka, which again makes the powerful intelligent apps Jaya demonstrated; then an event-driven architecture with Knative and CloudEvents, which you saw as well; and then DevOps, which is infrastructure as code, merge requests, and CI/CD. Those three things put together give you DevOps.
And with the model lifecycle on top, you get MLOps. Now, what products have we used here? These are all the products; everything has an open source version to it. Some of them are not from Red Hat — for example, we use the InfluxDB time-series database and Grafana dashboards. We also wrote some of our own source code, which you can go and check. For instance, I created a connector to move the data from Kafka to InfluxDB, because we had recursive JSON to parse and nothing off the shelf did that, so I had to write Python code for it as well. So yes, many things were built, and there's the link Jaya mentioned — the sentiment analysis pattern. You should go and check that; you can deploy it in your own environment, and you'll get the sentiment analysis and an understanding of how the overall components work together. And this is the whole build-and-deploy of an intelligent application with MLOps. So we start with the data scientist. As I mentioned, I showed you the JupyterHub notebook in the morning session, and I'll show it again today. You go and update your code there, then push it into your Git repository, and that triggers the whole pipeline for you — the pipeline I showed earlier. It does the clone, checks, build, image scan, and tag, and pushes the image to a registry. In our case we use a local registry; you could use a Quay registry or something like JFrog Artifactory. Then it gets deployed in production. Now, if you see this particular loop, this is the manifest repo which goes into Argo CD. Argo CD runs a reconcile loop here, and that reconcile loop keeps monitoring your prod environment. And whenever there is a push to the manifest repository that gets merged —
the reconcile loop will trigger and update your prod, in this case to run the new image tag, or whatever else you changed as part of the manifest. Now, this is the wrapper that walks through the retail app; it has your Argo CD and the OpenShift Tekton pipelines, and you can go and check the source code there. This other one is around the same thing too. Everything is open source, actually — it's there for you to browse, and you can leverage it in your environment. Okay, so what does our app do? As I mentioned, we take a picture of the object and it gives you a discount; that's the object detection model we're using here. I think we use a single-shot multibox detection (SSD) algorithm in this case. So let me quickly show you the demo — that will be even better. If you see here, this is our dev project, and then we have a different project, the prod project — I have two different projects running. So this is the dev project. If I select this, it goes in here, and if I select here, it runs the model in the back end, detects a t-shirt, and then offers a discount. This is the model, and this is how the source code runs: I have this predict function which I'm calling there. Let me go here. This is a JupyterHub notebook I started from OpenShift AI. I'm using an SSD model with Open Images, and I'm using TensorFlow. I save the model into this folder, and then I have this particular code here. Now, here I have the discounts being offered — I'm using the Monday CSV dataset, so that gets read in.
I create models out of that, and that's what goes into my pipeline; it gets loaded into dev and then into prod. Now, let's say I use the Tuesday dataset instead. I need to run my model, so I'll say run all cells, basically. It will go and run against the Tuesday dataset. Now, in the Tuesday dataset there are some discounts that go beyond my specific business requirement, so when it gets to the pipeline, the sanity check will fail and it will not move forward. Then we'll change it, and it should work. Once this completes, I can just commit. I have a local Git repository — Gitea, mimicking GitHub inside my local cluster; there's an operator for it written by a colleague of mine, and we use it in most of our labs. So here, let's say I push this code. The moment I push it and go over here, a pipeline run happens. See? A pipeline automatically gets triggered — this is my pipeline run. While it's running, I'll show you the one that executed previously, when I was doing a test. This one does the sanity check: it runs a basic test and says, okay, I'm fine with the discounts being offered, so it allows the pipeline to complete. Then I'm building a model image: I take the models and build an image out of them — those are the tasks running here. Then I set that image in my dev environment, and once I've done that, I update. I'm using Kustomize — anyone heard of Kustomize, the one starting with a K? Right, yeah. So one strategy is to use Kustomize — again, it's a GitOps tool, basically — and another strategy is to use Helm charts.
You can use either of those, or both, depending on your requirements. I like Kustomize, though Helm installation is very easy too. I use Kustomize in this case — it's basically a first-class citizen in OpenShift, so I can just run `oc apply -k` against the directory with my kustomization.yaml file, and it will deploy whatever is in there. Then I deploy the model in my dev environment, and Argo CD kicks in: it pushes everything into dev. Then I update my prod YAML manifest, and that again goes to Argo CD, which deploys it to prod. I'm not doing the PR-specific step in this demo, but the idea is that once I push the change into the Kustomize manifest for prod, a PR should be initiated; someone approves the PR in the Git repo, and that tells Argo CD, okay, there's a new single source of truth for this deployment — go and pick it up. Argo CD reconciles that and deploys the image you have. Right, okay. Let's see what failed in this case. It failed, as expected — it says the discount is way off: sixteen percent off, where the clothing confidence was not so good but the discount was still pretty high. We check for that, and if the discount is very high, we fail it. Now what we do is go and fix it. As a data scientist, all you do is push your code here, and everything is taken care of for you. So I'll go here, update to a new dataset — the Wednesday dataset — and then say run all the cells. Here I'm getting the discounts I wanted, a dataset similar to what I expect. So I check: did this execute fine for me? Right. Then, as a data scientist, I push this back.
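A minimal sketch of what the dev overlay's `kustomization.yaml` might look like — the file names and image coordinates are hypothetical, not the lab's actual manifests:

```yaml
# kustomization.yaml for the dev overlay (names are illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: retail-dev
resources:
  - deployment.yaml
  - service.yaml
  - route.yaml
images:
  - name: coupon-app                    # image name referenced in deployment.yaml
    newName: image-registry.openshift-image-registry.svc:5000/retail-dev/coupon-app
    newTag: "1.0.3"                     # the pipeline rewrites this tag on each build
```

Running `oc apply -k <dir>` renders and applies this directly; pointing an Argo CD Application at the same directory does the same thing continuously.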
I'll just say, okay, run all the cells. This gives me new code, because the new models have been updated, and pushing it runs a new pipeline for me. This is my new pipeline executing now — my pipeline run — and I think it will pass, because I'm pretty confident the discounts are not too high this time. So that was the CI part — we saw the continuous integration. Now let me also show you the continuous deployment side. There are two things here. One is that Argo CD synchronizes whatever is in the manifest: whenever I update the manifest — once the pipeline passes, the manifest gets updated — we can either sync manually in Argo CD or have auto-sync enabled. Depending on your organization's requirements, you set those parameters in Argo CD, and it deploys the new image with the new tag. By the way, do you tag your images, or do you just say latest? What's your strategy? Tag, right? Okay, good. Always tag — "latest is not always greatest," as Prakr, one of my colleagues in Sydney, used to say. Now, we used AgnosticD for this. A few of the labs you've seen today have been deployed with it, and the source code is available for you to see. We use Ansible playbooks for everything, from deploying the infrastructure to deploying all the applications. You can search for AgnosticD; that's where you get all the source code. Even this particular lab, the sentiment analysis and MLOps one, you'll find there; all the workloads are role-based, and we use Ansible roles to create them. And because we're in the demo business, we have to start the demo, kill the demo, and start again.
We can't keep building demos by hand every time; we have to have an automated way of doing it, so we use AgnosticD — that's our open source way of creating the labs. When you get the slides, there are links for you to access those roles through AgnosticD. Right now, this is my Argo CD, and these are my dev and prod applications running here. Currently everything is fine from the development point of view; let me shrink this — and everything is fine from the prod point of view as well. This is my prod environment. In Argo CD you can see all the resources associated with the project, and it matches against your Git repository: whatever you have set in your Git repository manifest, it maps to that, and if there is any deviation, it will show you the deviation here. So if I look, it points to the manifest, and this is my manifest in the repo. You see here: this is my live manifest, the one on my OpenShift cluster, and this is my desired manifest, the one in my single source of truth. If I do a diff, there is no difference right now, because everything is green — everything is fine and synced up. The moment I get a new image tag, I can sync it here; before that, we can see the differences. For example, let's say I go into my Roards deployment and change it. I'm running only one pod right now; I go and change that to something else. It will go and spin up those pods for me — here it's creating containers now. And if I look here, it says out of sync. This is out of sync, basically.
If I look at the app diff, you see it shows me there are five replicas on my live system currently, and it's supposed to have one. So if I do a sync, it will set this back for me. See? The moment I go here — this is what I had updated — it moves back to one. That's how Argo CD monitors things and won't allow any deviation: whatever is in your manifest, it sticks to that. Now, how do you update this properly? This is my repo here — the RetailDevGitOps repository, basically. This is the deployment I was trying to scale up, but it didn't allow it; there's `replicas: 1` in it, see? So what I'll do is update it directly here. Of course, I should use VS Code and all the best practices we were taught today, but I'm skipping that for now. I'm saying, okay, I want three replicas, and I'll commit the changes, without any comment. What this will do is trigger again — if I do a sync, it will see, oh, my manifest is different, and it will sync it. Okay, I think it synced pretty quickly, and here you see there are three pods now. So that's how you update your manifest, and Argo CD ensures your actual cluster moves to whatever the manifest says. Okay, let's see what the pipeline shows. Here are our pipelines — pipeline runs. This is the pipeline we executed; it went through and came up well, actually. What it will do now is trigger — I think Argo CD has already kicked in, because we just reset and synchronized it. So let me show the tasks here.
This is where we build the model, and this is where we failed the sanity check earlier. In this case we're not having very high discounts, so it completed. Once it finished, it built the image for the model — it incorporated all the models and built the image, which the application then uses to do the object detection and the discounting. Both things are done by the models that are part of this image. Now, there are multiple ways to serve models. One is this approach; the other is what we saw this morning — let me show you. These are the projects in my OpenShift AI; see here, these are the models deployed as part of model serving. There were four models deployed in the morning case we discussed. So you can either do model serving using that methodology, or you can build the models and bake them directly into your applications, and use them through the MLOps process, as here. Right. Then we do the tag: we generated a tag earlier, so we tag the dev image, we set that image in the dev deployment, and then we update the Kustomize repo. If we go to the repository, there's an update that happened three minutes back — let's see what it was. We went and updated — okay, this was the one we changed, the three-replica change. Even before that, we had the tag being updated, so I think that went in first. Then, after the pipeline completed, it also went and updated prod: it updated the prod tag in the deployment patches.
This is where we go and update the tag — there's today's tag here, which it went and updated. And that's the manifest Argo CD picked up to push this new image into the prod environment. Now, even if I go and delete this whole deployment, Argo CD will kick in again. Let's say, for example, I delete the whole deployment — everything. So I've deleted the deployment; it's not there. The moment Argo CD sees that, it will sync and move those things back in. That's the advantage you get from continuous deployment with Argo CD. Even if you kill the complete project, the project will be back in a moment. So, yeah, that's all I had to show you. And now you can try it — let's go and find the QR code. Where is it? Okay. You want to try it? Just make sure you turn your phone to selfie mode, so you can take a photo of yourself with your camera. It will do the detection — let me know if it's working. Yeah, the one I showed in that earlier example was different; in this case we're just fixing it to the clothing category, so it will only detect whether it's a t-shirt or not. That's how we've trained the model. Okay, cool, I think it worked. [Audience question] For example, data verification will be part of your data engineering work: the data engineer does the data cleansing and data verification, so the data engineer has to write the code to do that specific aspect. That can be part of a JupyterHub notebook, and you can incorporate it into your pipeline strategy. Can you give him a mic? I can't hear him. Okay — so you're saying that the data is in YAML format.
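The prod tag update the pipeline commits could be sketched as a strategic-merge patch in the prod overlay; the file name, image path, and tag below are hypothetical:

```yaml
# prod/patch-deployment.yaml: pins the prod image to the newly built tag
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coupon-app
spec:
  replicas: 1                            # Argo CD reverts any manual scaling
  template:
    spec:
      containers:
        - name: coupon-app
          image: image-registry.openshift-image-registry.svc:5000/retail-prod/coupon-app:tuesday-build
```

A patch file like this would be listed under `patches` in the prod overlay's kustomization.yaml; once the commit changing it is merged through the approved PR, Argo CD's reconcile loop detects the manifest change and rolls prod to the new tag.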
Okay, so your question is whether data verification is part of the machine learning aspect. The YAML we were discussing is the manifest you use for your deployment — that's all running in YAML, and it's different from the data verification part. Data verification is about how the data engineers interpret, understand, and verify that specific data for processing. And these are all Ansible roles created here, with tasks, and then you have the workload YAML. So you can go to AgnosticD and check how it's done — there are vars defined for all the roles, basically, and you can use that for your deployments. So, what we have seen: we used Argo CD and GitOps to control and customize Kubernetes clusters and applications. Running our apps on the Kubernetes cluster, we used GitOps to make changes to the apps, to the ML models, and to the infrastructure deployment. We used tools like Tekton pipelines, OpenShift GitOps (which is Argo CD), Kustomize, and some models of our own. And we saw how leveraging Tekton and Argo CD can help avoid some costly mistakes in the enterprise segment — the cost of making a change to your app becomes negligible with these GitOps strategies and methods. So, how many of you actually use a GitOps strategy in your organization? Fantastic, nice to see that. What other questions do you have before we end the day? Maybe about the traffic? Every time, you'll get some of the speakers talking about the traffic, I guess. I'm based out of Mumbai, so it's the same for me, with all the metros being like that. Right. Any questions? No questions. Thank you all for joining in.