My name is Gopi Rebola, from Opsonex. Today it's going to be a lightning talk about migrating from Jenkins to GitOps: what the options are, and how people in the marketplace are approaching it.

First, why move away from Jenkins? It's primarily driven by Kubernetes adoption; DevSecOps requirements also make Jenkins harder to use, but Kubernetes adoption is the main driver. There are two parts to it: one is application delivery itself, and the second is managing Jenkins. As organizations move to Kubernetes, they want to run Jenkins in Kubernetes as well. Because of the Jenkins model of a controller and workers, there are issues you need to deal with. If a worker goes down, it's not Kubernetes native, so you can end up with dangling workers that you have to go clean up, and when problems happen, troubleshooting is a little hard. Typically it all works fine, but because the controller needs to scale vertically, as you grow larger you start running into more issues. The other part, of course, is the pipeline itself: with Jenkins you've built up a bunch of plugins, and when you move to Kubernetes and want to do GitOps, you can get into more trouble than you want. You can always make it work, but it's more trouble than the return on investment justifies.

So then, what would you consider? What we have seen is a huge amount of interest in Argo, which has a very large community. We've been working with it for about a year and a half to two years now. It's Kubernetes native in the sense that it's built entirely with CRDs and runs inside Kubernetes. Particularly for operations, the GitOps focus really helps when you're running in production environments: having Git map to your running applications makes things very easy.
Progressive rollouts are one of the selling points here. Argo has pretty good support for progressive rollouts, whether canary deployments, experiments, or blue-green, and with that you can have automated verification and automated rollbacks. The structure also makes onboarding simpler: if you're a central team providing this to a large number of dev teams, it's easier to onboard them. For the developers themselves, it gives a fairly good interface to identify what's running in the target environment, what the problems are, and how to recover from them. It's a good thing to move to if you're looking at GitOps on Kubernetes.

If you're looking at Argo, what are all the things you can do? It's really made up of four different projects. Functionally, the first one is Argo CD, which is application deployment from Git to the target environments. Then you have Argo Rollouts; it's a separate project that you can use independently of Argo CD or together with it. Argo Rollouts is for progressive delivery: it's again a manifest that you apply, and it does the progressive rollout for your applications. Then Argo Workflows is an independent project again. This is more like pipelines, but it is again Kubernetes native: you have YAML workflow definitions, you apply them, and you can run your pipelines within that, with custom stages that you define as jobs. For security, individual teams can run in their own namespaces and manage their own secrets, and you can use that independently of Argo CD or Rollouts.

To go a little deeper into Argo Rollouts: the reason we start with Rollouts is that it's an independent project. You can use it for Kubernetes deployments, for progressive rollouts, without using Argo CD or Argo Workflows.
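To make the Argo CD piece concrete, here is a minimal sketch of an Application manifest, the CRD that maps a Git path to a target cluster and namespace. The repo URL, paths, and names below are placeholders for illustration, not anything from the talk.

```yaml
# Sketch of an Argo CD Application. All names and URLs are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd            # Argo CD's own namespace
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-manifests.git
    targetRevision: main
    path: apps/demo            # directory holding the Kubernetes manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:                 # sync on every Git change: the GitOps loop
      prune: true              # delete resources that were removed from Git
      selfHeal: true           # revert manual drift in the cluster back to Git
```

The `automated` sync policy with `selfHeal` is what gives you the drift detection mentioned later: the cluster is continuously reconciled back to what Git says.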
One of the patterns we have seen in the industry is that when someone is already using Jenkins, not necessarily ready to switch entirely to Argo CD, but wants progressive deployments to Kubernetes, they can go with Argo Rollouts. The advantages here are built-in blue-green and canary deployments. It also has a construct called experiments, and what that allows you to do is automatic verification. As you deploy your system with the manifest, it has an integration with Istio built in, and there are now plugins that let you work with any type of load balancer. That makes it easier to control the percentage of traffic and, with that, do automatic verification. There are some built-in monitoring-system integrations; we also extended a few of them and support additional automatic verification on top of that.

So how does that work? Say you're currently using Jenkins and using kubectl to deploy to a target environment. You have a Rollout manifest, you simply apply it to that environment, and you get Argo Rollouts. So with Jenkins you can get automatic progressive rollouts with verification. It's not necessarily GitOps, right? The only GitOps aspect here is that a change in Git triggers a Jenkins job, and that job applies the manifest. In a partial way it's GitOps, but it doesn't have the drift detection and other advantages you get from full GitOps. This is one of the patterns we originally saw being used by organizations that want progressive rollouts. About 50% of the people who use Argo CD use Rollouts. Not everybody uses them, because it feels a little advanced, and you also don't want too many manual stages in there, since that slows down deployment and makes it dependent on humans. So whoever moves to Rollouts typically goes with automated analysis in some form, connected to their monitoring system. And the next pattern is Argo CD.
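A minimal sketch of what such a Rollout manifest can look like, assuming a canary strategy with an analysis step; the image, traffic weights, and analysis template name are illustrative assumptions, not from the talk.

```yaml
# Sketch of an Argo Rollouts canary with automated analysis.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demo
  template:                      # same pod template shape as a Deployment
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: registry.example.com/demo:1.2.3   # hypothetical image
  strategy:
    canary:
      steps:
      - setWeight: 20            # shift 20% of traffic to the new version
      - pause: {duration: 2m}
      - analysis:                # automated verification against metrics
          templates:
          - templateName: success-rate   # assumed AnalysisTemplate, defined separately
      - setWeight: 60
      - pause: {duration: 2m}    # promote fully only if analysis passes
```

Applying a manifest like this with kubectl from a Jenkins stage is exactly the pattern described above: progressive rollout and automated rollback without Argo CD in the loop.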
Now, how do you use Argo CD with Jenkins? This is short of fully migrating off Jenkins, right? The common pattern we see is that Jenkins is still used for CI and the workflow pipelines, and the output of that is an artifact that gets pushed to your OCI registry; Argo CD takes over from that point. You have the deployment manifests for your application in the Git repository. And there is an additional controller called Image Updater. What it does is watch your container registry, and when there is an image change it goes and updates Git, writing the new tag into a hidden dot-file that sits alongside your manifest files. Argo CD understands that mechanism, so when it sees a change in Git, even just an image change, it identifies it, deploys to Kubernetes, and syncs the application from Git. This way Argo CD is purely doing the deployments, and Jenkins is doing your workflow, CI, and the rest. This is one of the common patterns we see. The same pattern works with other systems too, for example GitHub Actions in place of Jenkins. You get the full advantages of Argo CD for your deployments while keeping Jenkins for CI, and you don't have to change most of the plugins you've built in Jenkins. But it doesn't fully take you away from the issues with Jenkins.

The last pattern, of course, is to also migrate whatever workflows or pipelines you run in Jenkins into Argo Workflows. If you're familiar with Argo Workflows, it's a manifest file, again a CRD that you apply to Kubernetes. You have stages similar to Jenkins, you have steps, but there are some differences between how you think about a Jenkins pipeline and how you think about Argo Workflows.
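As a sketch of that Image Updater pattern: the controller is configured through annotations on the Argo CD Application. The registry path, alias, and update strategy below are assumptions for illustration.

```yaml
# Sketch: Argo CD Application annotated for Image Updater.
# Registry, repo, and alias names are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
  annotations:
    # Watch this image in the registry (the alias "demo" is arbitrary)
    argocd-image-updater.argoproj.io/image-list: demo=registry.example.com/team/demo
    # Pick up the highest semantic-version tag that appears
    argocd-image-updater.argoproj.io/demo.update-strategy: semver
    # Commit the new tag back to Git (the hidden dot-file override)
    argocd-image-updater.argoproj.io/write-back-method: git
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-manifests.git
    targetRevision: main
    path: apps/demo
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
```

With the `git` write-back method, the tag change lands as a commit, so Argo CD's normal Git sync picks it up and the deployment history stays in the repository.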
In Argo Workflows, each stage is like a Kubernetes job that runs in Kubernetes. The way to think of them: by default they run in parallel, you have to set them up to run sequentially, and you can construct a DAG to run with.

There are other scalability considerations. Jenkins scales workers horizontally but the controller vertically; here everything uses Kubernetes resources, so it's mostly horizontal scaling, but you lose the advantage of a single service serving everyone. For example, take a secrets-management job: if there are 100 applications or 100 development groups running in parallel, each with its own namespace, you now have 100 instances running to fetch that data instead of one instance doing the same job. So there are scalability issues, but they are different from Jenkins's, and Argo Workflows is being used at scale now at a few organizations. There are techniques to work around those issues; essentially, what you need to remember is that the issues you see with Argo Workflows are different from the ones you see with Jenkins.

There are a lot of advantages to running in Kubernetes. You give a lot of control to the end users. You can have templated structures, with the required stages in those pipelines, that you publish for development groups to use; they can use the same structures or change them and deploy. Because it's running in Kubernetes, you can also set up policies in Kubernetes as admission control. For example, if you require a vulnerability check to be done, and the count of critical or high vulnerabilities to be zero before someone can proceed, you can set those policies up fairly easily. This lets you enforce policies at the target deployment and leave the developer CI systems alone, without having to enforce things there.
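A minimal sketch of an Argo Workflows DAG, assuming a build step followed by a test and a vulnerability-scan step that run in parallel; the images and commands are placeholders, not from the talk.

```yaml
# Sketch of an Argo Workflows DAG pipeline. Task names and
# images are hypothetical placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: ci-pipeline-     # each submission gets a unique name
spec:
  entrypoint: main
  templates:
  - name: main
    dag:
      tasks:
      - name: build
        template: step
        arguments: {parameters: [{name: msg, value: building}]}
      - name: test               # test and scan both depend only on build,
        template: step           # so they run in parallel
        dependencies: [build]
        arguments: {parameters: [{name: msg, value: testing}]}
      - name: scan               # e.g. the vulnerability check stage
        template: step
        dependencies: [build]
        arguments: {parameters: [{name: msg, value: scanning}]}
  - name: step                   # each task runs as its own pod
    inputs:
      parameters:
      - name: msg
    container:
      image: alpine:3.19
      command: [sh, -c]
      args: ["echo {{inputs.parameters.msg}}"]
```

Each DAG task becomes its own pod, which is the horizontal-scaling behavior, and also the reason a shared service like secrets fetching gets duplicated per run, as noted above.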
So developer dev-test can be really fast, but at deployment time you can run the policies and check against them.

That's a fairly quick overview. In summary, there are different patterns. You can use Rollouts and Argo CD without completely moving away from Jenkins, and get the advantages of progressive delivery and GitOps deployments. But if you want to take full advantage of Kubernetes, then moving away from Jenkins pipelines into Argo Workflows makes a lot of sense. Do remember that moving to Argo CD is fairly simple, and developers are usually very comfortable with its interface: they can see the status in the target, get logs, and so on very easily. Workflows gets a little more complex, so that's one thing you have to take care of. Typically, what we have seen work best is to take one or two applications, run that process all the way through to production, and then, once all the kinks are worked out, apply it to the rest of the applications. The good thing here is that you can do partial migrations: move some applications from Jenkins to Argo Workflows and Argo CD while others are still running in the existing system, and then move them all over once you've figured out the process end to end. What we've seen is significant gains in productivity, particularly with GitOps for Kubernetes deployment. It's usually a pretty good return on investment for developers and operations. So that's the lightning talk for today. Any questions?

Just asking if the basic benefit, for operations, would be saving the VMs and Jenkins deployments and things like that?

Yes, there are two parts to it. One is operating Jenkins itself. The other is that Jenkins pipelines do allow structured steps that you can check, but typically there's also a bunch of Groovy code that gets developed by developers.
That makes it very difficult to apply SecOps policies to them. So if you're moving toward saying "I need the SecOps policies as well," then moving away from Jenkins pipelines helps.

Another question: what's the experience like for developers? A lot of our developers love Jenkins because of the Groovy thing; they can code up whatever they want. With Argo it feels like everything is standardized: this is the way the pipeline is, you can't change anything. What was your experience with that sort of situation?

No, it's not exactly like that. You can have custom jobs as part of the pipeline, so developers can typically still do anything, but what it does is put boundaries in place. There are steps, and in each step you have containers that run the code. So for SecOps policies you can specify that a specific version of a container has to run for that policy. But for a developer step, for example where they want to run tests against their application, they need to be able to do what they need to, and they will be able to do that. Because you have the boundaries of jobs, the central team can control what they need to, while still giving the dev groups the flexibility to do what they need in terms of programming.

Hi, thank you for the presentation. Just piggybacking on his question: it sounds like Argo has a microservices requirement. How would it treat a monolithic application?

Monolithic applications: here we are mostly talking about Kubernetes. If you have a monolithic application that's been converted from a VM into a container, you would still be able to run it. It's just that it may be stateful, it may have other requirements, so you need to think about those kinds of things.

So nothing outside of Kubernetes, probably?
Kubernetes API controllers can be extended to do other things as well, so theoretically you could do that, but Argo, in its core design, is oriented toward Kubernetes. There is a Terraform kind of integration you can apply to bring up infrastructure outside, et cetera, but it's mostly designed for Kubernetes.

Thank you, everyone. Thanks for your patience. Thank you.