All right, everybody, welcome. We're excited to be here, so we'll get started. Hi, everyone. We're excited to talk about migrating CI/CD from Jenkins to Argo today. My name is Kalen. I'm co-founder and CEO at Pipekit. At Pipekit, we help platform teams scale up their CI and data workflows using Argo. We provide enterprise support for Argo Workflows and a control plane that enables self-service workflows for platform teams. We're also contributors on the Argo Workflows project, so don't be shy: feel free to reach out if you see us on Slack and GitHub. And I'm thrilled to be here today with Bertrand.

Thanks, Kalen. Hi, everyone. My name is Bertrand. I'm a staff software engineer at Intuit. I have about 20 years of experience, working mostly on backend and cloud-native distributed systems. I joined Intuit about a year ago as part of the CI/CD platform team, where my mission was to evaluate alternatives to Jenkins, such as GitHub Actions, Argo Workflows, and others, for our next-generation platform. The goals for this talk are, first, to understand the challenges of running Jenkins on top of Kubernetes at scale. Then we'll see how you can use Argo Workflows alongside Argo CD to run your CI/CD pipelines. We'll go over the concepts of Jenkins and Argo Workflows, see how they map to each other, and walk through a quick example to show the differences between them. And finally, if we've convinced you, we'll go over the things to consider if you decide to migrate from Jenkins to Argo Workflows.

Just a few words of introduction about Intuit. Intuit is a financial software company. You might have heard of, or even used, some of our products, such as TurboTax, Credit Karma, and QuickBooks. The goal for Intuit is to empower our customers to make the best decisions for their finances using our AI-driven platform. Now, let's take a closer look at CI/CD at Intuit.
As I mentioned earlier, we are currently running Jenkins on top of Kubernetes at scale. For Intuit, that means about 6,000 developers running 100,000 jobs daily. To support this, we run a Kubernetes cluster with about 150 nodes, hosting about 200 Jenkins controllers, with somewhere between 700 and 1,500 build agents at any given point in time to run those builds.

We've been very successful with Jenkins so far, but it has its challenges. One of the most common complaints we get from our customers is that it can be cumbersome and hard to figure out what's going on when a build fails. The UI is not very easy to use, and it can be slow, so there's definitely room for improvement on the user experience side. Looking at more operational considerations, such as high availability and disaster recovery: we use the open source version of Jenkins, which doesn't come with those features built in, so we had to implement our own. Unfortunately, for the big Jenkins servers, it can take up to an hour to fail a Jenkins over to another region, which definitely doesn't meet our SLAs. Still on the operational side, there's no unified control plane with Jenkins. We run about 200 Jenkins servers, and even though we've automated as much as possible, every Jenkins upgrade or plugin upgrade is still a tedious task to roll out across the 200 servers we're responsible for. And finally, on cost and efficiency: Jenkins is not a cloud-native product. When it runs on top of Kubernetes, the execution model is to have one pod per build, and this pod will have multiple containers. But the pod and its containers run for the whole duration of your build.
So even if your build is doing nothing, typically waiting for user input to go to the next stage, those containers stay up and running and waste your cluster resources. Now, let me hand it over to Kalen for an overview of Pipekit.

Thanks. Yeah, we faced a lot of similar challenges at Pipekit. As a startup, we were mostly focused on having a lean and adaptable CI approach. Since we're a control plane for Argo Workflows, our value proposition for our customers is a centralized place where they can manage their workflows and any tools or integrations they plug into those workflows. We manage multi-cluster workflows and even integrate with a number of SSO providers for RBAC. As we shipped more and more integrations and features, our CI quickly expanded, and that led to some challenges. It was hard for us to iterate on and remix pipelines. We wanted to autoscale a lot more to minimize our costs as a startup. From a maintenance standpoint, we wanted a CI approach that was more set-it-and-forget-it and lowered the ongoing work of tuning our Jenkins pipelines. And finally, since we were going to be deploying with Argo CD, we wanted a tool that would easily integrate with Argo CD, Rollouts, and Events.

To summarize our challenges with Jenkins: we're similar to Intuit. Our builds were running really long. Our CI and test pipelines were taking quite a while for each PR and slowing down our team. We wanted to get PRs reviewed faster, so how could we do that? Another challenge was that, because all those containers run in one pod, it really limited the size of some of our pipelines and, we felt, wasted quite a bit of cloud resources. And we didn't feel we could fully leverage spot nodes to keep driving down costs.
And finally, from a plugin standpoint, getting started was easy with plugins, but over time the maintenance cost started to rise. Whenever we ran into an issue with a pipeline, trying to figure out which plugin caused it, or whether a plugin update had a security vulnerability, all those complexities added up. As a team, we were already using Argo Workflows for data processing and other infrastructure automation. So we thought: what could we accomplish using Argo Workflows as our CI engine?

That brought us to looking at what Argo offers and what we need. For us, the big benefits stem first from running each step in its own pod by default. That unlocked downstream benefits, like dynamically provisioning resources for each step in the pipeline. This was a big win: being able to get more granular with each pipeline step, provision the right resources, and autoscale down once it's done. If a build is waiting on someone for approval, we can spin everything down until the approval arrives and then spin up a pod to complete the pipeline. The other great benefit of Argo was parallelism by default. You just define dependencies wherever they exist throughout the pipeline, and Argo automatically runs any steps without dependencies in parallel. That helped us speed up our pipelines without much effort. In Jenkins, we had to be more prescriptive about where to use parallelism, and if you ever change your mind about that, it's tech debt you have to refactor, whereas with Argo we just declare dependencies wherever we see them and Argo runs things as it sees fit. On the maintenance side, it was much lighter weight to deploy as just another Kubernetes resource on our cluster, so that was awesome. And without so many plugin dependencies, it was a lot easier to maintain.
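To make that parallelism-by-default behavior concrete, here's a minimal sketch of an Argo Workflows DAG (the task and parameter names are hypothetical, not from our actual pipelines). Tasks only declare `depends`; anything with no unmet dependency runs concurrently, so `build` and `lint` start in parallel without any explicit parallel block:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: parallel-ci-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          - name: build                # no depends: starts immediately
            template: echo
            arguments: {parameters: [{name: msg, value: build}]}
          - name: lint                 # no depends: runs in parallel with build
            template: echo
            arguments: {parameters: [{name: msg, value: lint}]}
          - name: test                 # waits only for build, not lint
            depends: build
            template: echo
            arguments: {parameters: [{name: msg, value: test}]}
    - name: echo
      inputs:
        parameters:
          - name: msg
      container:
        image: alpine:3.19
        command: [echo, "{{inputs.parameters.msg}}"]
```

Compare this with Jenkins, where the same shape requires an explicit `parallel` section; in Argo, changing what runs in parallel is just a matter of editing `depends` lines.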
And then of course, being in the Argo ecosystem was a benefit, as we wanted a seamless transition into deployment with Argo CD and Rollouts. Finally, not everybody on our team was familiar with Groovy and writing Jenkins pipelines, so it was a benefit to have a tool we could drive with YAML, or, for Python developers, with the Hera SDK, to spin up CI.

That brought us to the pros and cons as we approached our migration from Jenkins to Argo. Giving Jenkins its due, it's a tenured tool and the community is really strong. There are a lot of great resources out there, and getting questions answered is usually as quick as Googling around, so that's a plus. Argo does have a strong community now, shout out to everybody here, but there's not as much documentation online, so we really encourage folks to get into the community, ask questions, and engage to figure those things out. Plugins made it easy to get started; however, for us they quickly added up to a lot of tech debt and maintenance overhead. So although plugins were a pro at first, they became a con, and we didn't want to spend time maintaining CI dependencies when we should be shipping features for our customers. From a UI and UX standpoint, Jenkins is great and built for a CI experience, so we were used to those primitives. Argo Workflows is more generic, which is why it can also be used for the data processing and other infrastructure automation we were doing. That's something to consider if you're going to migrate: there's a bit of a UX difference. But all in all, we really felt the autoscaling and parallelism benefits made the move to Argo worth it.
And even though we had to think a little harder about how to pass objects between steps in our pipelines, that extra up-front effort, whether you choose volumes or artifacts, ends up paying off in better scale and efficiency in the pipeline.

Before we dive into an example showing how a Jenkins pipeline maps to an Argo Workflows pipeline, I'll call out a few concepts. The Jenkinsfile maps to the Argo Workflows definition, in either YAML or Python using one of the SDKs, like the Hera SDK. A step maps to a task or step in Argo, and a stage in Jenkins, for us, mapped to a template in Argo. Templates come in flavors: the most popular is the DAG template, but there's also a steps template that declares a linear sequence of steps, and even a script template where we can pass a quick Python test script to run as part of a step in the pipeline. The shared library in Jenkins maps really well to what's called a WorkflowTemplate in Argo Workflows, and that's the concept we used to parameterize and remix our pipelines better than we could with Jenkins. With Jenkins plugins, there's not really a one-to-one mapping. There are Argo Workflows plugins to be aware of, but they're not built like Jenkins plugins, so user beware there. There are exit handlers that we use to integrate with third-party tools, and a lot of the Jenkins plugins we used were really just shared Jenkins functions that someone else maintained, so we could replicate those with WorkflowTemplates in Argo and maintain them ourselves. And with that, we'll hop into an actual pipeline example where we'll go through a basic CI/CD pipeline with Jenkins and Argo and show you how things look. All right.
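As a rough illustration of the shared-library-to-WorkflowTemplate mapping just mentioned, a shared-library function like a container build becomes a WorkflowTemplate applied to the cluster that any pipeline can reference by name. This is only a sketch with hypothetical names, not the exact manifests from our repo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: container-build          # hypothetical shared "library" template
spec:
  templates:
    - name: build
      inputs:
        parameters:
          - name: image-tag      # caller passes the tag in, like a function argument
      container:
        image: quay.io/podman/stable
        command: [podman, build, -t, "{{inputs.parameters.image-tag}}", "."]
```

Like a Jenkins shared-library function, the template takes parameters, but unlike a plugin, it's just another Kubernetes resource you version and maintain yourself.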
Yeah, so here we want to demonstrate what a standard CI/CD pipeline can look like with Jenkins and Argo Workflows. As you can see, we have a fairly straightforward one that starts with building a container. Then you publish that container to your container registry. Next, we fan out into three steps in parallel: publishing test coverage, running security scans on the container, and running static code analysis. After that completes, you usually want to deploy. At Intuit, we like to keep track of our deployments, so we first create a JIRA ticket for the deployment, then deploy the new container using Argo CD, and, if that's successful, close the JIRA ticket. Now let's look at what this looks like concretely in Jenkins.

All right, I assume most of you, or at least some of you, are familiar with Jenkins. In this pipeline, the first line references the shared library we'll use throughout the pipeline. That's where functions such as podman build or podman mount are defined. The first section instructs Jenkins where to run the agent; in this case, it's a Kubernetes agent, and this is typically where you define your pod specification. That's the pod I mentioned earlier, the one with multiple containers running for the whole duration of your build. Then you define your stages, and stages can be nested. One thing to notice is that there's no need to specify a git clone or git checkout; it's already part of your Jenkins pipeline and has been set up for you. The first stage builds the container, invoking those podman build and podman mount functions.
The next one extracts the test coverage. Something that's nice with Jenkins is that all the files in your workspace are available to any step of your pipeline, so those files can be reused later. Then the next stage publishes your coverage, again using podman. In this section, we run the container checks. As you can see, with Jenkins, if you want to run something in parallel, it has to be explicit, so here we have a parallel section where we reuse the test coverage extracted in that first step; the files are still in the workspace, so we can use them here. Then we run some security scans and, again, some static code analysis. Finally, we have the deployment stage, where we first create a JIRA ticket, then deploy using Argo CD: we call that GitOps deploy function, which uses Argo CD under the hood, against our QA environment with the new image name. Once that's successful, we close the JIRA ticket. And that's it for the Jenkins pipeline. I'll hand it over to Kalen now.

Yeah, we'll take a look at the Argo workflow now. We have our Argo workflow definition here, and it breaks out into a few sections: the arguments we'll pass through the workflow; the volume claims, which are how we set up the objects being passed through the workflow; our templates, where the DAG lives and the pipeline is defined; and at the end, optionally, some workflow metrics we want to emit to Prometheus. We'll look at the arguments first. These are just all the parameters we want to pass through the workflow: the git branch, our container tag, JIRA number, et cetera. Then in this section we have the volume claim, and this relates to how you need to be a bit more intentional about setting up object passing in your workflow.
We basically define the storage on the cluster where we'll keep objects as we pass them between steps, and we set the access mode to ReadWriteMany so we can do parallel reads and writes throughout the workflow and achieve a higher level of parallelism while the pipeline runs. Getting into the actual pipeline and how it maps to Jenkins: this is the templates section. What we're doing here is setting up a DAG template with several tasks. We do have to define a git checkout and our git processes here, getting our SHAs, et cetera, before we get into the container build. And we're using what's called a templateRef to reference WorkflowTemplates: if we've applied those WorkflowTemplates to our cluster, workflows will automatically reference them. So we have a directory here with our Argo WorkflowTemplates, with everything like our git checkout defined, and we just reference them in this workflow manifest. That makes it much easier to iterate on a new pipeline by referring to the WorkflowTemplates and passing in different parameters depending on what that pipeline needs. Getting into the container build, we see it also uses a WorkflowTemplate, and it depends on the git checkout and git info steps. That's how we declare the shape and order of the pipeline; anything without a depends just runs in parallel automatically. We run our unit tests, do our container scan, and then get into code analysis, which is where we pass in the PR parameters to run the analysis. All of that still runs after the git checkout and git info steps.
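Putting those two pieces together, here's a sketch of the workflow-level volume claim and a DAG task that pulls in a shared template via `templateRef`. The names are illustrative, not the exact manifests from the repo:

```yaml
spec:
  volumeClaimTemplates:                # shared workspace for object passing
    - metadata:
        name: workdir
      spec:
        accessModes: [ReadWriteMany]   # allows parallel reads/writes across steps
        resources:
          requests:
            storage: 1Gi
  templates:
    - name: main
      dag:
        tasks:
          - name: git-checkout
            templateRef:
              name: git-checkout       # WorkflowTemplate already applied to the cluster
              template: checkout
          - name: container-build
            depends: git-checkout      # ordering comes from depends, nothing else
            templateRef:
              name: container-build
              template: build
```

Because the claim is defined at the workflow level, every step's pod can mount the same `workdir` volume, which is how the workspace-style file sharing from Jenkins is recreated.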
Then we get to creating the JIRA ticket, where we again have an update-JIRA WorkflowTemplate in which we define how we want to update JIRA, pass in parameters for opening, updating, closing, et cetera, and use that template throughout the pipeline whenever it's needed. This step here is our deploy-application template, and we pass in the arguments for Argo CD to use to run the deploy. That's where we get a nice, seamless integration from Argo Workflows to Argo CD for deployment. Then we wrap up by updating JIRA again at the end. And as I mentioned, we optionally have a simple, native way to emit Prometheus metrics from our Argo workflow. We've done that here by adding a duration metric for how long the pipeline runs and counters for successful and failed runs. So that wraps up the quick walkthrough. If you want to check it out, we have the examples in a GitHub repo, so make sure to take a look. In that repo, we share the WorkflowTemplates and their structure so you can see how Argo Workflows works for this use case, and there's a working example you can run locally or even on your own cluster.

After going through that, I'm sure everybody's ready to migrate all their Jenkins pipelines as soon as possible. Yeah, it might seem a bit daunting, but have no fear: there's another way to migrate than taking a full weekend with no sleep to make it happen. We recommend a piecemeal approach. At Pipekit, the way we approached it was first by just triggering Argo jobs from Jenkins, without having to completely migrate off Jenkins all at once. That let us get a feel for how it would work and how stable it would be, and gave us the buy-in and confidence to move over larger pipelines.
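One way to do that trigger-from-Jenkins step is to have a Jenkins stage shell out to the Argo CLI, submit a workflow from an existing WorkflowTemplate, and block until it finishes. This is only a sketch; the template name, namespace, and parameter are placeholders, and it assumes the `argo` CLI is available on the agent:

```groovy
// Jenkinsfile fragment: hand one job off to Argo Workflows while the
// rest of the pipeline stays in Jenkins (hypothetical names throughout)
stage('Run in Argo') {
    steps {
        sh '''
            argo submit --from workflowtemplate/container-build \
                -n argo \
                -p image-tag=${GIT_COMMIT} \
                --wait
        '''
        // --wait makes the Jenkins step fail if the Argo workflow fails,
        // so the rest of the Jenkins pipeline still gates on the result.
    }
}
```

Because the Jenkins step's exit code follows the workflow's result, downstream Jenkins stages behave exactly as before, which is what makes this a low-risk first migration step.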
When we did start moving things over, we started with some of the simpler tasks in pipelines, which made it easier to figure out how to migrate everything else, like our complex Jenkins pipelines. Another tip for migrating is to adopt WorkflowTemplates and parameterization early, and to think of each step as something you can reuse down the road. That will really accelerate your migration as you get more familiar with Argo Workflows. And lastly, don't forget to tap into the Argo community. Feel free to ask questions there; a lot of other users have now migrated from Jenkins to Argo, and those of us who have are happy to help out.

As for next steps, like I mentioned, this repo is a great resource to see how you can map out your migration and also play with a working example, so we really encourage you to check it out. We also have a virtual cluster where you can set up a pipeline of your own with pre-built examples, without having to worry about configuring a local cluster or anything; just spin up the virtual cluster we provide and run it on your own. Check that out if you're interested.

Yeah, and Intuit loves open source and is a major contributor to Argo CD and Argo Rollouts. If you want to check those projects out, feel free to use these QR codes, and please come visit our booth. It's right outside this room, actually, and will be there throughout KubeCon. You can get some cool swag and meet the Argo CD maintainers. If you have a PR to get merged, you can meet them in real life and talk with them. We'll be happy to see you there. That's a steep promise, yeah.

So we'll take Q&A now. Feel free to check out the resources we have on GitHub. We're happy to engage and help out if you're interested in migrating from Jenkins to Argo. Thanks. Thanks, everyone.