Nipendra, Nipendra Khare is our next speaker. And we have Nipendra here. Hey, Nipendra, how are you? Hey, all good, Karan. Thanks for having me. Good morning, everyone. Glad to have you here. Thank you. So Nipendra, you're going to give us an understanding of GitOps with Argo CD, right? Correct, correct. That's amazing. I mean, as I mentioned before, Argo CD is one of the most popular cloud native GitOps tools out there. So great. Nipendra is the principal consultant at CloudYuga Technologies. And most importantly, you created the introduction to Kubernetes course on edX, right? Yes, the edX course has been taken by more than 200,000 users. And you know what? I am one of them. So great stuff. We are honored to have you in the community, Nipendra, and fantastic to have you here on this show. And I think we can get started. Sure. Thank you. You can share the screen. Yeah. Hope it's live. Okay. All right. The screen is yours, Nipendra. Yeah, thank you. Thank you, Karan. So this is Nipendra and I'm the founder of CloudYuga. As I promised, we're going to do some hands-on, so I'll jump directly to that. You can follow along with me on our website if you want to do the hands-on as well. So I'm going to go to my website and click on the hands-on labs there. We have multiple hands-on labs which we can try; we're going to look at the CI/CD with Jenkins and Argo CD one, which is what we promised to finish today. I triggered the lab a bit earlier, but if you are on the page right now, you need to click the lab setup button, which takes around two minutes to come up. Once the lab has come up, you can follow along with me. So first we'll look at the demo of CI/CD with Jenkins and Argo CD, and then we'll talk about some additional concepts of Argo, like rollouts and so on. Okay, so let's go ahead and do that.
But as I talk about some of the basic stuff, I'm going to just trigger the Argo CD setup so that it is done by the time I show you the demo, yeah. Nipendra, is it possible to share the link? Yep, you can share the link here. I don't know, how do I post it here? Okay, is it an open link? It is an open link. You just go to our website, you'll find the link and then you can try it out. Very cool. Okay, great. So what is GitOps? As the name suggests, it's Git plus ops, right? You're doing something with Git using which you can do some operations. As we know, with Git we store all the commits and we know who did what and when. The same thing we're going to bring to operations now. You put your application's configuration in Git, and then it gets picked up by some tooling which deploys the apps in your environment. That's what GitOps is: Git plus operations, at a high level. It is declarative and all those things, which you can just read through. Argo CD is one of the tools which implements this GitOps on Kubernetes. We are going to have our YAML files, our Helm charts or whatever, in a Git repo which Argo CD can pick up and then deploy on whatever Kubernetes setup you have. So this is the complete workflow we're going to do today. There are two Git repos which we're going to play with. One is the source code repo, which holds your actual app code, whatever you have. And second, a Git repo which contains your Helm chart. As a developer, we are going to commit our code to the source code repo. From there, we'll have Jenkins running, which will poll your source code and, if there's some change, pick up that change and build an image.
Once the image gets built, it gets pushed to the registry, whichever one you use, in this case Docker Hub. And once we have pushed the image to the registry, we are going to update our Helm chart, using which we deploy our apps. Argo CD is going to pick up that Helm chart change, because we would have updated our image version there; since the change is on the Git repo which Argo is watching, it will pick up that change and deploy it on the Kubernetes cluster we have. So we divide this into two parts. We'll first combine the A and B steps, in which we have a Git repo containing our Helm chart, and Argo CD which is going to deploy this app for us. I triggered the Argo CD setup earlier, so that should be up by now; we can check whether Argo CD is up or not. Argo is up and running here. On the top right of the page you can also find the link for how to access Argo CD. I click on this and I'm on the Argo CD UI. I can then log in with the admin user, the default user, and to find out the password of Argo CD we are going to run this command, which gives me the password. I just pick this up, go to the Argo CD UI and sign in. This is going to let me sign in to the Argo UI. Now I'm going to fork a repo which contains our Helm chart. If I go to this Helm chart, it contains the chart using which we're going to deploy a sample RSVP app, with a Python frontend and a MongoDB backend. It contains, as you can see, the chart configuration here. I'm going to fork this under my username. It has been forked. Now I'm going to go to the Argo UI and deploy this particular app. To deploy this app with the Argo UI, we'll click on new app.
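For reference, the lab page shows you the exact command to run; the standard way per the Argo CD documentation, assuming Argo CD was installed in the argocd namespace, is to decode the auto-generated initial admin secret:

```shell
# Read the auto-generated admin password for the Argo CD UI
# (works only against a live cluster with Argo CD installed)
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo
```

You then log in to the UI as admin with the decoded value.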
We'll give our app whatever name we want. Project default is a project for Argo CD, so we can just keep the default one. I'll keep the sync policy as automatic, which means that if there's a change in the forked repo we are going to use, Argo should sync it up automatically. There are a few options here which you can pick. Prune resources means that if you've removed some YAML file from the Helm chart, or whatever YAML you have been using, then Argo prunes those resources because the files have been removed; for now we'll just check that. Self-heal means that if somebody modified the objects in Kubernetes by mistake or something like that, Argo heals them back. That's all I can say about these because of the time constraint. Then we'll give the source code repository which we are going to use to deploy the app. So I'll say I want this particular GitHub repo which contains our Helm chart, and my Helm chart is at the top of the repo, so the path is a dot here, which says look at the root. The next step is where you would deploy this particular application; we'll deploy on the same Kubernetes cluster for now, but we could configure it to deploy to some other cluster as well. So I'll deploy to the local cluster and give a namespace name of app. What I can also do is check auto-create namespace here, so the namespace is created on the fly as I'm deploying the app. I'll keep the values as they are and click the Create button here, and now this is going to look at the Git repo I have and start deploying my application. As you can see, something is happening behind the scenes.
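Everything just clicked through in the UI can also be expressed declaratively as an Argo CD Application object. A minimal sketch; the app name and repo URL are placeholders for your fork, not the lab's actual values:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rsvpapp                # whatever app name you chose in the UI
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-user>/<helm-chart-repo>.git
    targetRevision: HEAD
    path: .                    # the chart sits at the repo root (the "dot")
  destination:
    server: https://kubernetes.default.svc   # the local cluster
    namespace: app
  syncPolicy:
    automated:
      prune: true              # remove objects deleted from the chart
      selfHeal: true           # revert manual changes to live objects
    syncOptions:
      - CreateNamespace=true   # auto-create the "app" namespace
```

Applying a manifest like this with kubectl gives the same result as the form we just filled in.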
It figured out there is a Helm chart for the frontend, which is the Python-based app, and for a backend MongoDB app, and it deployed the respective services, deployments, pods and so on, and that's what we are seeing here. We can go back and check whether it's happening or not. So I can run kubectl get pods -n app, and as you can see these pods are coming up. We have one replica of the backend and three replicas of the frontend. Let that happen. I'll now go ahead and deploy the ingress using which I want to access the app. So I'm going to apply this ingress configuration, and now our ingress should be up, and if I go up again, there is an ingress link which I can click, and this shows me the UI of our application. Hopefully it works as expected. Yeah, as you can see our app is up; we can put some information here if we need to, but otherwise we can skip it. If I refresh, we can see that the information is coming from different replicas in my environment. As you can see, it's changing here, so that's all good. Our app has now been deployed. So we have done the first part here, where we have configured an app using Argo CD: we have a Helm chart which has been deployed by Argo CD. Let me just trigger the Jenkins setup as we speak, because that will take some time as well. Jenkins is going to get installed as we talk here. So Argo CD can pick things up; in this case we have used a Helm chart, but I could have used plain YAML files or maybe Kustomize or whatever. You decide how you define your applications, and Argo will pick it up and deploy. Now the next step is to perform the CI part, for which we go over to our Git repo. Let me show you the Git repo for that. There's a Git repo here which I'll open up.
This Git repo contains the source code of the frontend which I want to deploy. We'll fork this repo. It has been forked now, and here we have this frontend rsvp.py file, and in the templates we have a profile.html file which basically renders this kind of page. We'll change something here and we'll see how that change shows up in the actual deployment. So we'll update something in this frontend's source code, and we'll see that change eventually get deployed by Argo CD behind the scenes. What we have done is fork that repo under our username. Now, let's go back and check if our Jenkins pods are up and running. Jenkins is coming up, so we need to give it some time. Meanwhile, we can talk about the workflow here. We already have the source code repo; now, in Jenkins, we are going to configure the polling part. We'll poll our repo every one minute, and once we pull it, we are going to build the image. Now, how do we build the image and how do we push it? For that, we are going to use a Jenkinsfile, and I'm pretty sure you're aware of Jenkinsfiles: that's the pipeline script which you would be building. Let me quickly talk about that pipeline while Jenkins is setting up. What we have in this Jenkins pipeline is two stages: a build stage and a deploy stage. As we do the code commit, a dynamic Jenkins slave is going to come up which runs DinD, Docker in Docker, and performs the two stages.
One is the build stage, in which we get our source code repo and build an image, and once that image is built, we push it to our Docker Hub account. Of course, we need credentials for that, so we'll configure those as well. Then what does the deploy stage do? Whatever new image we have built would have a new image tag. So we go to our Helm chart CI/CD Git repo and update the values.yaml file there; I assume you are aware of how Helm charts work. We update this values.yaml file on the fly, and that way we change our Helm chart, which would be picked up by Argo CD to take the next step. So let's see if Jenkins is up and running or not. Jenkins is up and running, so let's look at the Jenkins UI now. This brings up the Jenkins UI. Again, the username here is admin. For the password of Jenkins, I can just come down here and find it; this is my Jenkins password. So I'm in my Jenkins. Now, the first thing we'll do is configure credentials for our GitHub and Docker Hub so that we can push the Docker image and update the source code repo. We'll go to Manage Jenkins, then Manage Credentials, then the global store, and create two credentials here: one for Docker Hub, one for GitHub. So I'm going to put my GitHub username, and for the password I'm going to use my personal access token, so I'll just copy that token here. I'll give the ID as github and save it. The second thing I'm going to configure is for Docker Hub. I'll give a name again, whatever name I have for Docker Hub, give my credentials, and give an ID of dockerhub, which is what we are referring to in our Jenkins pipeline file. So this is done.
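Roughly, the two-stage pipeline being described might look like the sketch below. This is an illustrative reconstruction, not the lab's actual file: the agent label, image name, and repo URLs are placeholders, and only the credential IDs github and dockerhub match what we configure in the talk.

```groovy
pipeline {
  agent { label 'dind' }   // dynamic Docker-in-Docker slave
  stages {
    stage('Build') {
      steps {
        git url: 'https://github.com/<your-user>/<source-repo>.git'
        script {
          // build the app image from the Dockerfile in the source repo
          def img = docker.build("<your-dockerhub-user>/rsvpapp:${env.BUILD_NUMBER}")
          // push using the 'dockerhub' credential configured in Jenkins
          docker.withRegistry('', 'dockerhub') {
            img.push()
          }
        }
      }
    }
    stage('Deploy') {
      steps {
        withCredentials([usernamePassword(credentialsId: 'github',
            usernameVariable: 'GIT_USER', passwordVariable: 'GIT_TOKEN')]) {
          sh '''
            # bump the image tag in the Helm chart repo so Argo CD redeploys
            git clone https://$GIT_USER:$GIT_TOKEN@github.com/<your-user>/<helm-chart-repo>.git
            cd <helm-chart-repo>
            yq e -i ".frontend.image.tag = \\"$BUILD_NUMBER\\"" values.yaml
            git commit -am "Bump image tag to $BUILD_NUMBER"
            git push
          '''
        }
      }
    }
  }
}
```

The key idea is that the deploy stage never touches the cluster: it only commits to the chart repo, and Argo CD does the actual deployment.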
Now I'll come to the dashboard. We have configured our credentials for GitHub and Docker Hub, and we have also forked our repos. This is the forked repo. Now, on this forked repo, we are going to create a new file called Jenkinsfile. I'm just going to go to the Git repo here, click Add file, say new file, name it Jenkinsfile, and copy in the content which I have here. And now I'm going to change the values to match my username. I believe you have to make three changes, starting at line number 53. We'll specify under which username, basically whatever username you used to fork it, so just give that name here. I'll give mine, plus my email ID. There's one more change, I believe; let me just check where it is. Yeah, here is the Docker Hub change. So you make these changes, which say under which accounts the data should go. Then I'll go ahead and commit this file. So now I have added the Jenkinsfile to our source code repo. I'll go back to Jenkins and say create new item, give it a name, and pick the pipeline type. Say okay. Again, there's something I can specify here: I want it to poll SCM every one minute. So I set the schedule here to "* * * * *", which means do it every one minute. Then, for the pipeline script, I'm going to get it from my SCM, which is Git, so I specify Git as my SCM, and here I give my source code repo URL, so I'll copy that in. This should be done, and it will pick up the Jenkinsfile at the root of the repo. So I'll just save it here, and that's it. What I have done is configure the Jenkins pipeline so that it triggers as soon as I make a change. So let's go ahead and make a change.
It's triggering, but I'm going to cancel the trigger for now, because I don't want it to take a lot of time since I'm about to change things anyway. Now let's go ahead and change my source code, that is, the application's UI. I'll go to the templates here, open profile.html, and make a change. Where the code says enter your name, I'll capitalize it, Enter Your Name, and add a colon after it, just so we can see the difference. Same for enter your email: I just add a colon there, nothing else. I'm just going to save it here. Now, as I save the file, what should happen is that my Jenkins pipeline picks up the change; it polls every one minute. As you can see, did it pick it up or not? It's happening now. It has picked up the change, and it's going to create an on-the-fly Jenkins slave, which performs the build and deploy steps we mentioned. As that's happening, as you can see it has got the slave now, let it run and let's wait for a while. And this is our UI; as you can see, it just says enter name, enter email for now. So hopefully the demo gods will be with us to do all the stuff on time. Any questions on the screen? If you have any questions, please put them in, and if Karan can put them on the screen here, I can see them in between, because we're going to be pulling images and waiting on steps. So we just wait for these things to happen. Any questions? Meanwhile, go ahead. There was one question from Shubham, but Sagar answered that already. But maybe it would be nice, while this is building.
So the question is: can we maintain a dependency between app deployments, like app B needs app A to be up and running? Yeah, as you can see here in our chart, what we have is already like that, right? We're deploying a backend and a frontend together; this is with the Helm chart. But in Argo, you also have Argo Workflows, and there you can express dependencies in a better way. Those are steps, a different kind of thing; for example, if you're configuring Argo to first deploy the cluster on AWS and then perform some steps, something like that, those are the workflows which you can do with the Argo project, and Argo Workflows can help you there. But here I'm talking in terms of packaging: because I require the backend along with the frontend, I'm maintaining the dependency with the Helm chart. Hope that answers your question. Yeah, so now as you can see, it's happening: we are pulling an image of Python, and then it's going to perform the steps; it will take some time. We are building our image. If you look at what's happening here: we have a Dockerfile in our source code, and that Dockerfile requires the Python base image to be pulled in first, so we are fetching that now. If I go back here, the Dockerfile is here in our source code; we're performing these steps right now in our pipeline, the steps to build an image for my application. And that's what's happening now. The image has been built, and now it's being pushed to my Docker Hub account. And once the image gets pushed, you will see that it is going to change values.yaml as the push finishes.
So now that has been done. Now it's cloning the Helm chart repo, and in that, using the yq tool, it has made those changes; whatever change was required in the Helm chart has succeeded. If we go back to our pipeline, it succeeded. And now if I go back here, you can see that the chart repo was changed, right? Ten seconds ago. And the change we have made is to the image repository and the tag in the values. Now what should happen is Argo CD should notice that change. It checks every three minutes, so rather than waiting for that, I'm just going to first refresh this page here, which says nothing new is running, and then sync it manually. If we waited for three minutes, it would sync on its own and then continue, but I'll sync it right now. So I'm going to hit this sync button and hit synchronize here. Now it figures out that there's a change, and as you can see, the new pods are coming up for our frontend. As those pods come up, we will see our application eventually change to the newer version. We'll see something different here: right now it says enter name, but it should now say Enter Your Name with the colon, whatever we did there. So let's wait for the Argo part to happen. It's pulling the new image, and once the new image gets pulled, we can verify that in our UI. As you can see it's getting created, and this would be pulling the image which we just pushed to Docker Hub, and as the image gets pulled, we will see the change in the new deployment.
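Concretely, the change the pipeline commits is just the image coordinates in values.yaml, something like the fragment below; the key names are illustrative, since they depend on how your chart is structured:

```yaml
frontend:
  image:
    repository: <your-dockerhub-user>/rsvpapp   # updated to your Docker Hub repo
    tag: "2"                                    # bumped by the pipeline, e.g. to the build number
```

Because Argo CD is watching this repo, that one-line tag bump is enough to trigger the redeployment.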
Now as you can see it has changed here, and we can see a similar change in our UI as well. So let's just wait for that to happen. And if I refresh the page, there, as you can see, right? The new changes are here. So this is how Argo can help you: you do a code commit on your Git repo, and that finishes the entire cycle, the CI/CD and everything. Okay, so that's the first part of the presentation which I wanted to cover. Again, it went very fast, but as I said, you have the entire material at hand; you can go ahead and practice, and do share your feedback with us, on social media, whatever you can think of. So that's the first part. Let's quickly go and look at some additional things Argo can do for you. Again, this is going to be primarily an overview. We did a complete workshop in the Kubernetes community Bangalore earlier this year, so if you're interested in this in detail, you can search for the KCD Bangalore 21 videos and find more details there. Okay, so let's talk about Argo Rollouts. In Kubernetes terms, with the Deployment object you can typically do recreate and rolling updates. Kubernetes by default does not support blue-green, canary, progressive delivery, the things you want to do in the advanced stages once you've figured out the basics. For that purpose, Argo has come up with Argo Rollouts. Just to take a step back: there is the Argo project, which is a collection of tools. Argo CD is one of them, Argo Rollouts is a second, then we have Argo Workflows and Argo Events. So there are multiple projects under the Argo umbrella; don't get confused, these are independent projects.
So we talked about Argo CD; Argo Rollouts can work along with Argo CD, and I think it can be used individually also, but I may be wrong there. Argo Rollouts is going to help us implement these different deployment strategies. It's a drop-in replacement for the Deployment object: if I'm using Argo Rollouts, I'm not going to use the Deployment object of Kubernetes. We have a YAML file which creates a Rollout object, and within that Rollout object we define the strategies. So no Deployment object on our side; we'll be building a kind Rollout here, which replaces the Deployment object. As you can see here, this is kind: Rollout. And to make this rollout work, we have two services, called a canary (or preview) service and a stable service, using which we manage our strategies of blue-green and canary. The way it works is: we have this Rollout object in which we define the strategies we want. Then we have traffic routers, using which we can route traffic, like 50% here, 30% there, and so on. And then we have a metrics provider, using which we can monitor the performance of, let's say, the newer version: as I move to the newer version, I want to see how it's performing, so I can do this collection of metrics, see what the result is, and then continue our canary updates and so on. Let's look at some diagrams here. As I said, rollouts are done with two different services: here we have a stable service and a desired service, the preview one, which is what we want to be there. At the stable state, where everything is all good, you have only one revision and both the services point to the same version.
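A Rollout wired to two Services, as just described, might be sketched like this for the blue-green case; the names and image are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rsvpapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rsvpapp
  template:
    metadata:
      labels:
        app: rsvpapp
    spec:
      containers:
        - name: rsvpapp
          image: <your-dockerhub-user>/rsvpapp:1
  strategy:
    blueGreen:
      activeService: rsvpapp-stable    # the stable Service
      previewService: rsvpapp-preview  # the desired/preview Service
      autoPromotionEnabled: false      # promote manually once you are happy
```

The two Services named here are ordinary Kubernetes Services; the Rollout controller just rewrites their selectors as the versions change.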
Then if I want to move from, let's say, blue to green, I'll deploy the green version of the app and point only the desired, or preview, service to the green app. And once I'm comfortable that the green app is what I want, then I point the stable service to the same green version as well and get rid of revision one. This is how the Rollout can point to these two different services and help us achieve blue-green. As you can see, there's an ingress which we can have in front of our application, and then we have the two services, the preview one and the stable one, which point to different replica sets, as we saw earlier. The ingress can point to the preview and the stable one simultaneously, depending on the percentage values, and then we can move over. That's the blue-green strategy. Similarly, for canary, we can define the weights. Canary simply means that when I want to go to the newer version, I want to see how the newer one performs, correct? If it performs better, I want to continue. In blue-green, you just switch from blue to green; but in canary, you move from one version to the other over time. So here we can define the weightage, how you want to configure that. As you can see here, we're saying, okay, send 75% of the traffic to one version and 25% to the other; this is what canary does. And again, in canary we have two services. In the basic canary, the split is done with respect to the number of pods you have: for example, if I want to move 25% of traffic to the newer version, I'll get only one pod of the newer revision, and because three pods are still on the old one, 75% of the traffic will go there and 25% will come here, correct? Then I can keep shifting, and finish by moving all the pods to the newer version.
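The weighted move just described is expressed as steps inside the canary strategy of the Rollout; a small sketch, with illustrative weights and pause durations:

```yaml
strategy:
  canary:
    steps:
      - setWeight: 25           # shift ~25% of traffic to the new version
      - pause: {}               # pause indefinitely, wait for manual promotion
      - setWeight: 75
      - pause: {duration: 10m}  # let it bake for 10 minutes
      # after the last step, 100% of traffic moves to the new version
```

Without a traffic router, these weights are approximated by the pod counts, exactly as described above; with an ingress-based router they become real traffic percentages.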
This is the basic canary. But if I implement it along with some kind of ingress-based routing, I can configure my ingress, be it NGINX, be it Traefik, be it Istio and so on; I can pick any one of these, and with it I can also control the percentage of my traffic not by the pod count, but by the percentage I want. For example, here I have the stable and the desired version; even though I have four pods here, I can say 95% of traffic goes here, 5% goes there, and so on. So I'm not bound by the number of pods I have: with ingress-based canary, I can control the traffic flow as I wish. Okay, I'll quickly move on. The next thing is progressive delivery, in which what we're trying to achieve is: I want to make sure that when I'm upgrading my app, I do some kind of analysis, that is, see how the new version is performing before I commit to it. For example, if I move to the new version, there is a template; if I run the template and figure out that my success rate is beyond the threshold, then I move my app to the new version. This is called progressive delivery: you go to the new version based on some kind of logic, which may be metrics from a provider like Prometheus, or maybe a simple job, whatever you want; you just check the condition and move on. Okay, the last thing I want to talk about is workflows. Workflows are simple: they are the steps you want to perform. For example, if you take an ML job: you get the raw data, then you clean it up, then you run training on it, and so on. That's how workflows go, and you can define each of these workflow steps as a container.
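Coming back to the progressive delivery piece for a second: the success-rate check described above is typically an Argo Rollouts AnalysisTemplate. A sketch assuming Prometheus as the metrics provider; the address, query, and threshold are all illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  metrics:
    - name: success-rate
      interval: 1m
      successCondition: result[0] >= 0.95  # proceed only above the threshold
      failureLimit: 3                      # abort the rollout after 3 failed checks
      provider:
        prometheus:
          address: http://prometheus.monitoring:9090
          query: |
            sum(rate(http_requests_total{status=~"2.."}[1m]))
            / sum(rate(http_requests_total[1m]))
```

The Rollout's canary steps then reference this template, so traffic only keeps shifting while the condition holds.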
For example, here is a workflow in which I start, I run the hello step, and then I run two more steps in parallel, correct? In another workflow, I want to do A, B and C in parallel and then go to D. So I can build these kinds of dependencies with Argo Workflows. Yeah, I think that's it. I think we should be on time; Karan can correct me if I'm a bit late or early. Yes, we are right on time here. Great. Any questions? Thanks, Nipendra, for walking us through this amazing journey of Argo and Jenkins and how these two tools can be used in tandem. So, a few questions here. Yeah, can you explain what a Jenkinsfile is? A Jenkinsfile is a file with which you decide how the CI should happen, right? As I said, okay, let me go back here. If you look at this: whenever I make a code change, I want to perform some steps. I want to run test cases against my source code, to check whether things are done correctly or not. Then I want to build an image, in this case. Once the image has been built, I want to push it to Docker Hub. There are two ways of doing this. One is you write those steps one by one in your Jenkins job. When you are building a Jenkins job, the one I showed was a pipeline job, but if I pick a freestyle project, for example, I just give it a name, freestyle project, and in this project you need to define all these steps one by one. So you add a build step, say execute shell, and here maybe you write: okay, clone the source code. Then the other step can be: okay, run my Docker image build. So either you write these steps one by one, or you put them in a set of instructions; that's called a Jenkinsfile. Jenkinsfiles live along with the source code, so you don't need to write those steps manually in your Jenkins job.
You define the steps you want to have. Like in this case, we are saying: I have a build stage in which I want to build an image from my Dockerfile and push it to Docker Hub, correct? And deploy as a second stage. So rather than somebody modifying these steps one by one, you define them upfront so that you don't have to deal with the Jenkins UI every time. You define it in your source code repo, it travels with the source code, and in whichever Jenkins environment you are, it will come up and start. Like you saw: I got a fresh Jenkins, and because I had a Jenkinsfile, it could perform all of that, step by step. All right, very cool. Thanks, thanks a lot Nipendra for explaining this, and I think we are at time, as I look at it, for this session. Thanks a lot for walking us through this great journey of Argo CD and Jenkins, and I'm really happy to see the Jenkins UI after a very long time. Yeah, if my memory serves, it was like 2012 when I last worked with Jenkins; I was in a different team, and people would say, hey, my Jenkins build is running very slow. Back in 2012. So yeah, good to see Jenkins after a long time. Thanks Nipendra, thanks again, thanks a lot for joining in and helping us with this great session.