We are going to go into GitOps, CI/CD and MLOps now with Ritesha. Okay, so I hope you are all awake now to get through the GitOps journey. How many of you have heard of GitOps as a term? Okay, a few of you, cool. And how many of you are actually doing development? Most of you, great. You will be more concerned with the inner loop than the outer loop, but you have to be concerned about the outer loop as well, and I will explain why. So the topic we are going to talk about is GitOps, CI/CD and MLOps. How many of you are actually working on machine learning here? Anyone? Oh, quite a few, great. And how many of you are data scientists? Okay, so we have some data scientists here as well. So you have some understanding of inner loop and outer loop; I will try to clarify those a bit more. We are going to use JupyterHub here. Anyone heard of VS Code? How many of you work in VS Code? I think that is the question you never need to ask. Everyone, right? Okay. And JupyterHub? Okay, most of the machine learning folks. Sorry, JupyterLab, okay. So here we are going to use JupyterHub, which is software that gives you the same notebook functionality as JupyterLab, and we will see that as well. Now GitOps: we will see how it all comes together in this presentation. I am Ritesha, a principal architect in the portfolio technology team working on cloud technologies, based out of Mumbai. That is my email ID if anyone wants to get in touch with me. I don't have a Twitter handle, only a LinkedIn account, so from the social network point of view that is the only touch point for me. This is what we are going to cover. I think I have some time left, so we will go through a demonstration at the end as well; I will demonstrate how everything comes together from the GitOps point of view. We will cover the GitOps framework and related developer tools. It will be a lot to absorb, but I have provided links in the presentation so you can go back and work through them on your own; I think there is a recording as well, so you can go through that too. We will talk about Open Data Hub. Red Hat has a managed service, Red Hat OpenShift Data Science, based on Open Data Hub. Anyone working on the Open Data Hub upstream? Okay, cool, so that will be a new thing you learn here. And then we will talk about the use case, object detection. It's a very new use case, not fully developed, but you get an idea of how the whole framework works for machine learning operations. So we will look at a retail coupon application and then do a demonstration around that as well. Okay, so let's dive into the overview. What is the GitOps framework? Developers have long had a fantastic process for collaboration, security, continuous integration, and how everything comes together on the delivery side of an application. The same thing is now getting into the infrastructure space, where infrastructure as code comes into the picture: you put everything into your Git repository as a single source of truth. And it may not be Git; it can be any other SCM, any other source code repository you want to use.
And you push your code there, and that is what changes your application deployment or infrastructure or anything you want to change in production. You push it through source control; you don't actually go and do anything in your production environment directly. That is the whole idea of how GitOps works. Now, there is a formula here: infrastructure as code, plus merge requests, plus continuous integration and continuous deployment. That whole thing put together gives you the GitOps framework, and it works for MLOps as well. It is more or less the same; the only real difference is the way models are containerized and published, which we will see here too. So there is a single source of truth; that is what we always want. For you to truly implement GitOps, you need multiple repositories: one is a repository for the manifests you apply in your cluster, then you need a machine learning source code repository, and you also need an image or artifact registry, like Nexus or JFrog Artifactory. We use GitHub here as the source of truth, but you can use anything like that, and for the image registry we have the OpenShift internal registry. We use that for this particular example, but you can use any registry sitting outside of your OpenShift Container Platform and have your cluster talk to it. Now if you look at CI and CD, we split continuous integration and continuous deployment. In the image you see here, we fetch the model. I'm describing this from the machine learning point of view, but you can think of it for a normal application as well; the pipeline works the same way. You fetch the model, then you do a sanity check on it. For example, in our app, if my model starts giving out a very high discount, that is a business loss for me, so I need a check there: if the discount goes too high, I stop the whole pipeline and that code never gets into production. Then I build the model into an image, push it, and update the development environment. Once I am comfortable with the development environment, I initiate a merge request or a PR, where I say: now go and update production. Then someone reviews it, confirms everything is fine, and approves the request, and that is what gets your image into the production environment. If it's serverless, you can split the traffic between the older and the newer version; it purely depends on how you want to architect your rollout into production. Okay, so that is the CI part, continuous integration. Once the dev part finishes, then comes continuous deployment: once the PR is approved, the image has to be deployed into your production environment. How does that work? Through a tool called Argo CD. Anyone heard of Argo CD before? All right. I have provided links to all the information here, so there is an Argo CD link, and then there is the CI part, which we do with Tekton.
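To make that shape concrete, here is a minimal sketch of how such a CI pipeline might be declared in Tekton. The task names, parameters, and registry path are illustrative, mirroring the stages I just described rather than the exact demo code:

```yaml
# Sketch of a Tekton Pipeline mirroring the CI stages described above.
# Task names and parameters are hypothetical, for illustration only.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: model-ci-pipeline
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: shared-workspace          # shared volume carrying the cloned repo
  tasks:
    - name: fetch-model-repo          # clone the ML source repository
      taskRef:
        name: git-clone               # ClusterTask shipped with OpenShift Pipelines
        kind: ClusterTask
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared-workspace
    - name: sanity-check              # e.g. reject models that give absurd discounts
      runAfter: ["fetch-model-repo"]
      taskRef:
        name: sanity-check            # custom Task, sketched a bit further below
      workspaces:
        - name: source
          workspace: shared-workspace
    - name: build-and-push-image      # layer the model into a container image
      runAfter: ["sanity-check"]
      taskRef:
        name: buildah                 # another stock ClusterTask
        kind: ClusterTask
      params:
        - name: IMAGE                 # OpenShift internal registry, as in the talk
          value: image-registry.openshift-image-registry.svc:5000/demo/object-detection-rest:latest
      workspaces:
        - name: source
          workspace: shared-workspace
    - name: update-dev-manifests      # commit the new image tag to the manifest repo
      runAfter: ["build-and-push-image"]
      taskRef:
        name: update-kustomize-repo   # custom Task, invented name
      workspaces:
        - name: source
          workspace: shared-workspace
```

Each taskRef points either at a ClusterTask that ships with OpenShift Pipelines (git-clone, buildah) or at a custom Task you write yourself; from here, Argo CD takes over on the CD side.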
How many of you have worked on Jenkins? Yeah, quite a few, right? And Tekton: started, not started, planning to start? Okay. If you want to truly leverage the power of GitOps, Tekton is the way forward. If you look at the upstream (anyone worked on the Jenkins upstream, by the way?), a lot of development effort has shifted to Tekton in terms of adding new features. With Jenkins, if you have to update a task or change something in a pipeline stage, you have to start everything from the first task and let it run through to the last. So a lot of developer time is spent just waiting to find out whether the pipeline will complete the process or not as part of the integration. Whereas with Tekton, you can test each and every task separately, and once you are comfortable with all the tasks, you merge them into a single pipeline and execute that. That is the power of Tekton. And of course, it is declarative, like Kubernetes. We all know Kubernetes is a declarative platform: everything you do in Kubernetes is declarative. You write it in YAML, you apply YAML, you get output as YAML, you push it back as YAML. It's all about YAML-ification. And Tekton again uses YAML; it is declarative. Argo CD is declarative. GitOps is declarative. So everything fits the GitOps perspective and gives you a true transformation journey if you want to get your environment, or your customers' environments, adapted to the GitOps framework. So here, Argo CD deploys the application. We define the infrastructure and the application through Argo CD, and we can manage deployed applications through Argo CD. Argo CD doesn't just push the image into production: you have a set of manifests that say "this is my desired state in production," and Argo CD implements it. What Argo CD actually has is a reconcile loop, so it goes and continuously monitors. Sorry, I am not able to see the folks at the back because of the light; it's a little blinding, but I can manage. So yes, Argo CD continuously monitors what is deployed in your production. Let's say you delete a pod, or even the application itself: Argo CD knows that it has to be there, so it will immediately sync it back, depending on what interval you have set. It continuously monitors and ensures that whatever is your desired state is always the current state in your cluster. That is the power of Argo CD for you. Okay, so Argo CD manages and keeps monitoring the deployed part; that's taken care of. Then you have Tekton, the continuous integration, which builds the container image for your model in this case, or the image for whatever application you are using. And as you saw, through the OpenShift console you can just point at a Git repo, and it will automatically detect what image it needs to build. There is even a button where you can say "create a pipeline," and it will generate a clone, build, and push pipeline for you automatically, on the fly, in the console. So here, what we are doing is deploying to the development environment. The CI part only works there; the developer only ever touches the dev environment here.
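Coming back to the point about testing tasks in isolation: here is a minimal sketch of that discount sanity check as its own Task, plus a TaskRun to execute just that one task on its own. The script, threshold, and image are my own illustration, not the actual check from the demo repo:

```yaml
# Sketch of a standalone Tekton Task: fail fast if the model hands out absurd discounts.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: sanity-check
spec:
  workspaces:
    - name: source                    # where the cloned repo / model artifacts would live
  steps:
    - name: check-discount
      image: registry.access.redhat.com/ubi8/python-38
      script: |
        #!/usr/bin/env python3
        # Hypothetical business check: abort the pipeline if the predicted
        # discount is so high that shipping the model would lose money.
        import sys
        discount = 14  # in reality you would load the model and query it
        if discount > 50:
            sys.exit("discount too high, aborting pipeline")
        print("sanity check passed")
---
# Run just this one task while developing it, without the rest of the pipeline:
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: sanity-check-run
spec:
  taskRef:
    name: sanity-check
  workspaces:
    - name: source
      emptyDir: {}
```

Once a task passes on its own like this, you wire it into the pipeline; with Jenkins you would be re-running the whole job from the first stage to find that out.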
The prod part is not touched by the developer or the development environment at all. The change gets pushed into another repository, someone approves it, and then Argo CD kicks in and actually does the deployment to your production. So you are keeping all these things separate, a very secure setup from the overall DevSecOps point of view, which is another good reason to work this way. The other thing we are going to leverage here is OpenShift Data Science. If you go to Red Hat's managed services offering, console.redhat.com I think, you can create a Jupyter notebook as part of the data science offering, or you can get other images, like OpenVINO for Intel, if anyone is working with Intel, or NVIDIA as well. We have an NVIDIA operator in OpenShift now, so you can leverage GPU power when you are doing your machine learning and application development. So, Open Data Hub basically provides a set of AI tools for running large and distributed AI workloads on your OCP platform. There are Jupyter notebooks, where you actually do your development and also execute and test things. I rather like the JupyterHub notebooks because you can write comments and have a very seamless flow: a documentation cell, an execution cell, the output right there, then on to the next stage. From the developer's point of view, everything is well documented; I like the way it is structured. When you install, say, the OpenVINO operator and open the JupyterHub image based on OpenVINO, you will find a lot of examples already built into that environment, so you can open your notebook with a specific example and quickly get into whatever you want to develop on the machine learning side. There is also the data science pipeline, and ModelMesh Serving. I think most of these things are integrated into Red Hat OpenShift Data Science; exactly which parts are GA and which are still on the way, I need to check. But it is based on the upstream Open Data Hub. So if you are really into data science or machine learning, this is something you should definitely explore in your environment going forward. Okay, there is a link to opendatahub.io as well. And of course, this picture is true for most applications: you have this small red piece, which is your application. It may be an intelligent application or some other application, but it is a small piece in the whole gamut of things that makes a true enterprise run its business. Around it you have configuration, data collection, then feature extraction (there is a plan to have a feature extraction bank as well, so you can pull features from it and incorporate them into your machine learning code; that is a new thing even I need to check on), data verification, machine resource management, analysis tools, process management, serving infrastructure, monitoring. So many things are needed to get your model running for your business requirements. This diagram was adapted from the Sculley paper, back in 2015, I think. And this is what our app does.
This is a hypothetical app. You go to a retail shop and scan a product with your mobile to understand what the discount on that specific product is. Say you pick up a shirt and take a picture of it: the model detects that it is a shirt and gives you a discount coupon for it, and then you go and finish your payment. That is what we are talking about in this particular use case. There is a link to the repository for this application, and we will walk through a few of these things depending on how much time we have and what questions you have. So, how is the app deployed on OpenShift? This slide shows how it is implemented. Is it visible at the back? Okay. We have our single source of truth here, GitHub. In our environment, GitHub is actually cloned into Gitea; a small operator my colleague wrote keeps a replica of GitHub in Gitea, so my cluster environment has my own clone of whatever repositories I want to use. Then I have my application editor here; it can be VS Code, JupyterHub, or any other IDE you want to use. The developer makes an update in JupyterHub, and it gets pushed into Gitea. Now there is the GitOps side, Argo CD, which checks: there is a change in my Gitea, so I need to reconcile. It runs an observe, analyze, act loop: it checks whether production matches the source code in Gitea. If it matches, it just keeps checking; otherwise it goes and updates production, depending on what logic you have set. Argo CD only checks whether whatever is in the repository is reflected in the production environment, according to how you have configured it for the continuous deployment part. And in this environment I have two things configured on OpenShift: OpenShift Pipelines, which is the Tekton part, and Argo CD, which we call OpenShift GitOps from the OpenShift point of view. OpenShift Pipelines does the CI part, which you saw earlier; there is a small pipeline listed here as well. So once you update your source code in Gitea, the OpenShift pipeline gets triggered: there is a trigger set up in Gitea, and the pipeline executes. Once it finishes the execution and everything is fine, it goes back and updates your manifest repository, which says: I have a new, updated image tag now. And once that new image tag is available, OpenShift GitOps, Argo CD, says: oh, I have a new tag in my repository now. So there is a change. Does it match my production? No. So it goes and updates it. That is how Argo CD works and does reconciliation. Following me on this?
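And concretely, that "new image tag" the pipeline writes into the manifest repository is literally a one-line change in a file. A minimal sketch, assuming Kustomize-style manifests (names and registry path are illustrative):

```yaml
# In the manifest repo: the pipeline bumps newTag, Argo CD notices the diff.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
images:
  - name: object-detection-rest       # image name as referenced in deployment.yaml
    newName: image-registry.openshift-image-registry.svc:5000/demo/object-detection-rest
    newTag: "1.1"                     # was "1.0"; this one-line commit is the deployment
```

Argo CD watches this file, so bumping newTag is the whole deployment.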
Any questions? Yes: so I understood that first the CI has to happen, from GitHub or Gitea through the pipelines, where all the integration happens; then it goes back to Gitea and Argo CD triggers. But once it has to deploy, I wanted to understand this analyze, observe, act loop and how it is connected to the pipeline. Right, so you have an image in the image registry, and you map it in Argo CD in such a way that it watches for a new tag. Say this image has an old tag in production: I had 1.0, and now a new tag, 1.1, appears in my Gitea repository. Argo CD says: I had 1.0 here, but I am supposed to have 1.1. It sees the difference, and it goes and updates it. Right, and one more use case: if I want to manage different versions of an app, how is that supported? Yeah, you can do that with different applications: one way is serverless, where you can run different versions side by side, and you can also have a different namespace per version. And it works with anything; these are all open source tools running on Kubernetes, and OpenShift is again a Kubernetes platform. Though I would love you to actually use OpenShift, of course. Actually, you were talking about Tekton; I use Jenkins. In Jenkins, all the pipelines are centralized, step by step, but Tekton is not centralized that way; it is a direct flow of the pipeline. So what advantage does Tekton have over Jenkins? I didn't understand it, because I personally prefer Jenkins. Well, everyone will prefer the old way until they get the benefits of the new way, right? The key point is: this whole pipeline, I can create through GitOps and manage through GitOps, through Argo CD. With Jenkins, you cannot put the whole Jenkins setup into GitOps. If you use Jenkins, you will not get the true GitOps way of pushing even your pipeline-specific code into your Git repository and having the whole thing triggered from there. Okay, just one more question. I use Jenkins regularly for production, so I am not saying Jenkins is bad; Tekton is a new topic for me. I only recently came across it and researched it, but I was not fully satisfied, because we can't migrate suddenly from Jenkins, and the pipelines are the issue for me. In Jenkins we can alter the pipeline's flow; in Tekton, can we do that? Yeah, you can alter the pipeline. If you want a sub-pipeline, you can have sub-pipelines in it, and you can alter the flow. Right now there is no logic to alter it mid-flight, so you would do a sub-pipeline and then use a when clause to alter it. Okay, thank you.
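For reference, that when clause looks like this in Tekton. A complete but hypothetical mini-pipeline, with the parameter and task names invented for the sketch:

```yaml
# Sketch: a task guarded by a Tekton `when` expression.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: when-clause-demo
spec:
  params:
    - name: deploy-mode
      type: string
      default: "full"
  tasks:
    - name: deploy-canary
      when:                           # this task runs only if the guard matches
        - input: $(params.deploy-mode)
          operator: in
          values: ["canary"]
      taskSpec:                       # inline task, to keep the sketch self-contained
        steps:
          - name: echo
            image: registry.access.redhat.com/ubi8/ubi-minimal
            script: |
              #!/bin/sh
              echo "taking the canary path"
```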
So if you want to see how this whole thing comes together, there is a link here to the retail GitOps repository, which gives you the source code for the Argo CD setup and the pipelines; that is my repository. And there is one more repository with the application source code, if anyone is interested in seeing that. So this is my application. Everyone can see the screen, right? If I select this, it detects that I am wearing a shirt, a T-shirt, and it gives me 14% off for my T-shirt. Okay. So this is the application. Everyone got the application use case we are talking about here? You are walking in a retail shop, the app identifies an object, you get to know what discount you get, and that's it. Okay, now into the technicalities. On the OpenShift Container Platform, when you go to Red Hat's cloud services, the managed services offering, we have Red Hat OpenShift Data Science. When you go there, you get notebook images: there is the standard data science image, which has Python 3.8, there is a CUDA image, there is PyTorch, there is TensorFlow, and then there are a lot of partners who provide their own images, like OpenVINO from Intel, or NVIDIA. Those show up here as part of your notebook options. So you select one, then select the container size you want, like two CPUs and so much memory, depending on what kind of development you are doing, and start the environment. This one will be using the minimal Python image; let it keep spinning up and we will go to the next step. Once it's up (I used the standard image here), I have cloned my Git repo, whatever source code repo I showed earlier. So this is my main program. I am using a CSV file, so let's see what we have in that discount dataset; that is the CSV file the model uses. Say I change this value to Wednesday. As a developer, I am working on this in the JupyterHub notebook, running it step by step, documentation included. Then here I am creating my pkl files using dump, and I will use those as my models; I dump them into a folder. Once I do this, say I restart my kernel and run all cells: that runs the whole program end to end and generates the models. And if I go to that folder, these files were generated seconds ago, versus what was there earlier, so I have the new model available. This is the inner loop we were talking about. Everyone got what the inner loop is now? Now, once I push this: I have git here, I run this and restart the kernel, and the moment I push, it goes and updates the Gitea repository, which has a trigger set up to kick off my pipeline. What the pipeline does is build my model into an image: just layering, using a UBI base image and layering the model on top. And straight away that lands in dev. Once you have it in dev, Argo CD kicks in, saying: there is a change in the tag, I will go and update my prod. So that is how the whole thing works.
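That push-to-pipeline hookup, by the way, is a Tekton trigger: the Git server fires a webhook at an EventListener, which stamps out a PipelineRun from a template. A minimal sketch, reusing the hypothetical pipeline from earlier; I have left out the TriggerBindings that would extract commit details from the webhook payload:

```yaml
# Sketch: webhook -> EventListener -> TriggerTemplate -> PipelineRun.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: model-repo-listener
spec:
  serviceAccountName: pipeline        # SA created by OpenShift Pipelines
  triggers:
    - name: on-git-push
      template:
        ref: model-ci-trigger-template
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: model-ci-trigger-template
spec:
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: model-ci-run-   # one run per push
      spec:
        pipelineRef:
          name: model-ci-pipeline     # the pipeline sketched earlier
        workspaces:
          - name: shared-workspace
            volumeClaimTemplate:      # fresh PVC per run, shared across tasks
              spec:
                accessModes: ["ReadWriteOnce"]
                resources:
                  requests:
                    storage: 1Gi
```

You point the Gitea webhook at the route exposing this EventListener, and every push starts a fresh PipelineRun.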
So what are the stages here? Let me show you the stages of the pipeline. I fetch my model, generate a tag, then there is a sanity check, then I build my model, tag the image, set the image in dev, then update the Kustomize repo for the development environment saying there is a new image, so it goes and updates what is deployed in dev, and then I update the Kustomize repo for production. Don't worry about Kustomize; I will explain in a minute what it is. There is one more tool we have to talk about. So the pipeline goes and updates the Kustomize repo for my production, and then Argo CD says: there is a change, I need to go and push the new image into production. And how is it triggered? There is an event listener for my pipeline which is listening for the trigger, so whenever there is an update to Gitea, a trigger fires at that particular listener and the pipeline starts. Then there are the tasks, so I can run and test individual tasks separately, and there is the trigger I have here in my pipeline. So that is all about the pipeline. Now, earlier you saw how, as a developer, you updated your model using JupyterHub. As an admin, you can go and check the pipeline, and even as a developer you get read access to see how the pipeline is progressing. If there is an error, you can go and fix it in your JupyterHub and push the code again; until then, nothing is happening in production for you. This is all your development environment, your continuous integration; nothing going into prod. So now it's building the model; it will do a Maven build here, which means it will download half the internet before it builds. That is how Maven works, right? Okay, so that is the pipeline; let it keep running. I have these workloads, pods running here. Now, here is my Gitea: this is my repository, cloned into my local environment, the repository I was talking about. We have this base directory. But before that: Kustomize. Anyone heard of Helm charts? Okay, a lot of you. How about Kustomize? Oh, a few of you, okay. So Helm is where we package all your application manifests. We say application package, but ideally it is all manifests in a Kubernetes environment. We package them together as a Helm chart, add the values we need to push along with the chart, and deploy it to the environment where we want the application. Kustomize is another way. Say you have dev, test, and prod environments, and multiple departments in your organization: HR, some data science department, IT. All of these departments may want to share the same source manifests but make slight modifications for their own requirements. For example, in a dev environment you may just need two CPUs, but in prod you may actually need ten or twelve. Those are minor changes, and Kustomize lets you make them while reusing the same base manifests. So here in my repo I have the base, my base manifests, and the shape is roughly what you see below.
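As a sketch, assuming a layout like the demo repo's (names and values invented for illustration), a prod overlay might look like this:

```yaml
# Repo layout (illustrative):
#   base/kustomization.yaml            <- shared manifests for every environment
#   overlays/dev/kustomization.yaml
#   overlays/prod/kustomization.yaml
#
# overlays/prod/kustomization.yaml: same base, prod-sized patch.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: object-detection-rest
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3                       # dev keeps 1 replica, prod runs 3
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/cpu
        value: "10"                    # prod gets the bigger CPU limit
```

Dev and prod share the base; each overlay patches only what differs for that environment.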
The base also has the source for the pipeline itself, so I can create my pipeline from there as well, but don't worry about that part. So basically I have this fetch-model-repo task. You see the first pipeline step you saw earlier? That was fetching the repository, and that is the first task I have. It says ClusterTask, which means it is already part of the OpenShift cluster, given to you as a cluster task. You just use it; you don't have to create your own. But there are a few we did create ourselves, declared with kind: Task. The generate-tag task is one I created: it is not a ClusterTask but a Task of my own, and you can see how it is written in the repository as well. Then we do the sanity check; there is a script I call there, and of course that script was written by a data scientist. Then I build my model and tag the image for dev. All those things you saw in the pipeline, the continuous integration part, that is what is shown here. Then finally we update the Kustomize repo for dev. Once that is done and there is a new model, we roll it out; here we roll it out using the oc rollout command. Then we update the prod repo, and Argo CD comes in. So that is the pipeline; now let's move to Argo CD. Let me show you the console again. I have my deployment here; this is the application I am managing through Argo CD. So I go here and, say, I start scaling it up. I don't care about the resources; just for fun, say I want to blow up the cluster, so I keep updating it and pushing it to the limits. Now what happens is that Argo CD detects that in object-detection-rest there is a change: there are multiple pods coming up now, as per the replica count I set. But Argo CD says no, that should not be the case, because as per its knowledge and the Git repository it is monitoring, there should only be one replica there, yet you have increased the pod count through your OpenShift cluster. And the diff view is nice: you get a compact diff, where the right side shows what I should have and the left side shows the current state. Argo CD will always try to make your current state match the desired state on the right. So when I do a sync here, and then go to my diff, there is no diff, and I am back to one pod, as per the replica count. This is the beauty of Argo CD: it constantly keeps monitoring and updating, so the admins can sleep well at night. That is the advantage of Argo CD. And in fact, I could delete the whole application, and Argo CD would rebuild the entire application for me. If someone thinks of deleting it, Argo CD is always there watching it for you.
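What drives that behaviour is the sync policy on the Argo CD Application resource. A minimal sketch; the repo URL, paths, and namespaces are placeholders, not the demo's actual values:

```yaml
# Sketch of an Argo CD Application with automated sync and self-heal.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: object-detection-prod
  namespace: openshift-gitops          # where the OpenShift GitOps operator runs Argo CD
spec:
  project: default
  source:
    repoURL: https://gitea.example.com/demo/manifests.git   # placeholder manifest repo
    targetRevision: main
    path: overlays/prod                # e.g. the prod Kustomize overlay from above
  destination:
    server: https://kubernetes.default.svc
    namespace: object-detection-prod
  syncPolicy:
    automated:
      prune: true                      # delete resources that were removed from Git
      selfHeal: true                   # revert manual drift, like my replica bump
```

With selfHeal on, manual changes in the cluster get reverted to what Git says, which is exactly the delete-it-and-watch-it-come-back demo.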
Hello, a question: Argo CD, is it like another console we can use apart from OpenShift? Ah, okay, good point. So when you go to your OpenShift console here, you get your OpenShift cluster manager and you get your cluster's Argo CD; this Argo CD link comes from the cluster itself. I have an operator installed here called Argo CD. In Red Hat we call it OpenShift GitOps, and for Tekton we call it OpenShift Pipelines. So this is my GitOps operator, and I have a route to it in my routes here; that route points to my Argo CD, this link, and you can get the same link from this section as well. The same thing is there for Red Hat OpenShift Data Science: I click this and it points me there. And there are other things you can explore in the data science offering as well; there are many partners we have partnered with here, so you get OpenVINO, there is Intel oneAPI, IBM Watson, Anaconda, then Pachyderm, Seldon is there, Starburst Galaxy; we are coming up with use cases on Starburst Galaxy as well. Okay, one more question: what is the scope of Argo CD? Say I have my own namespace and one of my team members has his own namespace; if I am using my Argo CD, can I interfere with his project? No, you cannot. With Argo CD you need to actually get access to that particular project, otherwise Argo CD will not be able to do anything. And Argo CD works across clusters as well, by the way. Okay, so how do we manage this access, is it a service account or something? You place a service account there and then map it back to your Argo CD. Yeah, I also had a question: in the model you are building, if you wanted to integrate multiple APIs into it, no matter what they are, how would that work in the whole deployment process? You need to architect your application that way; if you have multiple APIs, different APIs will have different access points, so you need to run and execute them individually. Okay, so the pipeline would not account for all the separate interfaces that you have? In the pipeline you can configure things to run in parallel as well. Okay, and what if you wanted to integrate all your APIs in the same file or the same interface along with your model itself, like a neural network model with all its APIs in the same program? Yeah, you can do that. One last question, maybe. Is this also available in a developer version, like a playground we can access for free, for all this ML stuff to play around with? Developer access: you go to console.redhat.com; there is a minimal tier there for everything, I think, so you should be able to. But the other thing is, all this automation I have done is available; that is the last thing I wanted to show, and then I'm done. It's called AgnosticD. Just check whether you have access to that. AgnosticD has automation for a lot of the workloads we run on OpenShift; you can just use Ansible playbooks. Everyone heard of Ansible, right? How many of you know Ansible or have worked on it? Okay, a few of you. So it's an automation tool, basically.
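The playbooks in there have the usual Ansible shape. A minimal sketch, with the role name invented rather than an actual AgnosticD workload role:

```yaml
# Minimal Ansible playbook sketch; the role name is hypothetical.
- name: Deploy a demo workload on OpenShift
  hosts: localhost
  connection: local
  gather_facts: false
  roles:
    - role: ocp_workload_object_detection_demo   # invented name, for illustration
```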
So in AgnosticD, these are all playbooks available for you to go and deploy your specific workloads, whatever you want, on the OpenShift platform, and it is all open source. Okay, I think that's it. If there are any further questions, I will be available here until the evening, so feel free to walk up to me and ask any questions you may have. Thank you for your time.