Okay, welcome to the third session of the developer track at Scale 14X. It's sponsored by Percona, and I have to, you know, read their little marketing blurb: with more than 3,000 customers worldwide, Percona is the only company that delivers enterprise-class solutions for both MySQL and MongoDB across traditional and cloud-based platforms. So, you know, I appreciate them sponsoring this. And I'm gonna go ahead and turn it over to R. Tyler Croy. He's the Jenkins infrastructure lead. So I tried to do something different for this presentation in that it's all self-contained in a Docker image. This is how you can run my presentation, if you're on the Scale 14X Wi-Fi, and if you wanna see me get beat up by a bunch of people in yellow shirts, everyone download this at once. It's about 110 megabytes, and they will probably hurt me pretty badly if everybody does this at the same time. But it'd be funny to watch, I mean. Anyways, I'm assuming that most people here have heard of Jenkins in some form or fashion. Jenkins has been around for about a decade. A lot of people use it for continuous integration, or continuous delivery, or automation of any form or fashion. And the really big strength of Jenkins is its very broad plugin community. There are about 1,000 plugins, and to date I don't think I've found a use case I had for Jenkins that there wasn't some plugin to make it nicer. And because not a lot of people know this, I wanted to highlight it: there's an organization called the SPI, which stands for Software in the Public Interest. That's who we're affiliated with, and they are the same sort of organization that holds the Debian trademarks, and I believe Postgres, and a couple of other projects. We're a fully formed, mature open source project, and I'm really proud of the work that we've done to get there. And I wanted to highlight some of the sponsors of the project, and these are not sponsors in the traditional sense.
These are a lot of companies that, for one reason or another, have chosen to participate in the Jenkins community. CloudBees is a big one. CloudBees is my employer now, as of a couple months ago. They do Jenkins Enterprise, but they also fund a lot of development on Jenkins open source, which we're really thankful for. The second really big one is the Oregon State University Open Source Lab, and you'll find Justin and Lance running around at the conference today. The Open Source Lab is one of the primary reasons that the Jenkins project has been so successful; when we parted ways, so to speak, with Oracle a few years ago, the Open Source Lab really helped us out a lot. And then PagerDuty, Datadog, Atlassian, Rackspace, Red Hat — all of these companies have helped us out in one form or another, without expectations of payback or anything. I like to reference them because I think it's a nice thing to do. They support us in various ways; Datadog has really helped us out with some of our infrastructure, and PagerDuty as well. These are some companies that support open source just because, and I think that's really cool. So I work for CloudBees, but I've been in the Jenkins community for the past seven or eight years now. I originally started because I was using what was called Hudson at the time, and the project was on java.net, and I couldn't find answers to the questions that I had, so I went to IRC — which is how I solve all my problems: I go complain on IRC until someone points out how dumb I am and I fix something. So I started participating with the Jenkins project then, and as we grew, I started to participate more and more. Not being a Java developer, I started to participate in the infrastructure side of things, and as of last year sometime I became a Jenkins board member, and I'll be one until someone votes me out, I guess.
But the talk that I wanted to give is largely around infrastructure, and for the scope of what I want to talk about, this is what I mean by infrastructure. We all know machines: an AWS EC2 instance, or a physical machine that goes into a rack somewhere. But we're also talking about configuration management — managing files, services, secrets, passwords, credentials and things like that — but also the packaging tooling to package up our applications and get those out, the deployment tooling that drives those out into a production environment, and then the monitoring and alerting that tells us whether that's doing what we thought it's doing. That's infrastructure, as far as the next 30, 40 minutes or however long I talk; that's what I mean by infrastructure. In the Jenkins project, we have a lot going on. If you think about project infrastructures, like Debian's — who thinks about all of the services that run the Debian project? Nobody. No one ever considers the actual infrastructure behind all of these great big open source projects. And so when I got involved in the infrastructure for the Jenkins project, we had a lot of stuff going on. We have a very vibrant plugin developer community, so we have to build infrastructure for those developers. We have to have Jira and Confluence, so those developers can work effectively not only on Jenkins core, but also on the plugins. And someone's got to be responsible for that. We also have release infrastructure. Kohsuke, who I think left, is in here taking pictures. That's the founder of the project, by the way. If you want to come by the booth later, you can meet the guy who created Jenkins, which is a pretty cool thing about this conference. But he really believes in releasing often. So for the last decade, a version of Jenkins has been released once a week.
And for better or worse, the size of Jenkins — the actual file that we distribute — has gotten bigger over the years as well. A distribution of the Jenkins application is 100 megabytes right now. So every week, we are releasing a 100-megabyte archive to tens of thousands of people who are pulling it down, testing it, or evaluating it for one reason or another. And that's just Jenkins itself. Then there are all of these thousand plugins around it, which all have to be distributed. There's infrastructure that goes behind that, and someone has to be responsible for it. So, in the beginning — I wanted to give you a brief history of how we came to be where we are with our project infrastructure. We were originally on a site called java.net, which was like SourceForge, but even worse, 10 years ago. And we were quickly outgrowing it. The only other successful project on java.net that I can remember was GlassFish. Hudson at the time was becoming more and more popular, and the Bugzilla that was provided through java.net wasn't working, and distribution wasn't working. So we started to provision our own services to run these things. And Kohsuke, who worked at Sun Microsystems at the time — his modus operandi for doing this was he would go find machines in data centers around Santa Clara, where he worked, and he would basically commandeer them. It was a sort of fly-by-night operation. We would find machines in one closet and that became Jira. We would find machines in another and that became a build machine. And everything was set up manually. So the way that I got involved with Jenkins project infrastructure is Kohsuke gave me root access to a bunch of machines which had no history and no means of being repeatably provisioned.
And if you screwed something up on them, you screwed up the project, because you could only log into that one instance of a machine and futz with things to try to get it to work. And that was awful — genuinely, objectively awful. So in 2011, I learned Puppet where I was working at the time, and like anyone in their early 20s who's just learned something new, I said, great, I'm gonna do this for everything from now on, forever. So I was really anxious to apply Puppet to all the things that I was responsible for. And we deployed masterless Puppet, and we deployed masterless Puppet because Puppet with a puppet master was terrifying to me. As soon as I got to the point of provisioning an OpenSSL certificate, I threw up my hands and said, screw this, I don't wanna do it. I don't wanna play anymore, this isn't fun. And we deployed Nagios, and we didn't really have a development environment for Puppet at the time. And I don't think we were unique in this case. I went to a meetup in 2011 in San Francisco and we had a testing-infrastructure round table. Imagine 30 people sitting at a big conference room table going, how do you do it? And I'm like, I don't know, how are you doing it? That's how people were testing infrastructure in 2011, because there just weren't tools available, and we were no different at that time. So in 2014, we started working with Puppet Enterprise, because I asked nicely and Puppet gave us Puppet Enterprise. If you work for an open source project, don't be afraid to just ask people for free stuff. If they say no, it's okay. But if they say yes, then you get cool free stuff. We also started using Datadog and PagerDuty at that time. And we started to build a more mature development stack as well. By this time, rspec-puppet existed, which was really cool. Serverspec existed, and I'll explain more about what these are, but we had the tools. For me, coming from a software engineering background,
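To make the "masterless" idea concrete: instead of agents authenticating to a central puppet master over SSL, each machine pulls the manifests itself and runs `puppet apply` locally. A minimal sketch of that pattern — the module contents, repo path, and cron schedule here are hypothetical illustrations, not the Jenkins project's actual manifests:

```puppet
# site.pp — applied locally on each node; no puppet master,
# no agent certificates to provision
node default {
  # Keep the ntp service installed and running
  package { 'ntp':
    ensure => installed,
  }
  service { 'ntp':
    ensure  => running,
    enable  => true,
    require => Package['ntp'],
  }

  # Re-apply the manifests every 30 minutes after pulling the
  # latest copy of the repository
  cron { 'masterless-puppet':
    command => 'cd /opt/infra && git pull -q && puppet apply manifests/site.pp',
    minute  => '*/30',
  }
}
```

The trade-off is roughly what the talk describes: you skip the certificate authority and master entirely, at the cost of having to get the manifests (and any secrets they reference) onto every node yourself.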
I all of a sudden had the tools to describe my infrastructure the same way I would describe and validate any other piece of software. And so this is where we are right now. But like I said, infrastructure is hard. And open source infrastructure is even harder, if that makes sense. If you're in an operations role, someone is paying you to endure pain. When you're in an open source operations role, no one's paying you to endure pain. You're just doing it because you enjoy getting paged at night. And that's what makes open source infrastructure hard. It's also why Jenkins infrastructure is difficult: no one wants to volunteer for some of those challenging things. And because we're cheap and we just take whatever people will give us, we have assets across four different data centers. Where I've worked previously, we had assets in one data center, and that was difficult. But the Jenkins project, with this Spartan infrastructure and very few people watching over it, has a multi-site infrastructure, which makes things difficult. Now this brings me to continuous delivery, and it's a logical leap — I'm asking you to leap with me because it'll make sense soon. The big reason that people don't practice continuous delivery, in my experience talking with people, is that it hurts. Every time I touch production, something goes wrong. Every time I try to do a deployment, I've got to go through this checklist of 20 different things to make sure that someone somewhere has made sure that this is in the right spot, this is configured correctly, and we can do a deployment. And the nice thing about continuous delivery is it gets better when you do more of those deployments. The more you focus on it, the more you take those things that suck and focus on automating and fixing them, the better it gets.
And so instead of doing a big bang release once a month — or, if you're really unfortunate and work at a big giant corporation, once a quarter or once every six months — a release that just obliterates an entire week of your time, you do these little incremental releases, and you take these little incremental pieces of risk. And that's really the big difference between continuous delivery and another concept people talk about, continuous deployment: with continuous delivery, our goal is to get the changes we're making into a state that's ready to go live. That doesn't mean we have to deploy every single commit, but every single commit should be in a state of ready or not ready. And then if we decide to deploy, that's up to us, but everything should be vetted enough that someone can pull the trigger if we need to pull the trigger. As a business, you wanna do that so you can get new features, changes, and fixes to your customers effectively and reliably. In the case of open source infrastructure, you wanna do that because no one's got time to deal with broken deployments when you've got an all-volunteer team. Yeah, I think continuous delivery is neat. But like I said, infrastructure is hard. There are a lot of different things here, and they are very different things. It's not like everything is a piece of software; some things are just physical machines, which have different requirements. But to get to that continuous delivery world, we needed two ingredients in the infrastructure ecosystem: testability of our infrastructure, and reproducibility. I wanna focus on testability first. We're getting to Jenkins — don't worry, we'll get there. But if Jenkins can't run tests, and you don't have any tests to verify that what you're doing with your infrastructure is correct, all Jenkins is gonna do is deploy code for you. You're not gonna really get a lot of benefit from it.
So if you're not already familiar with what I would say are the two most important strata of testing in the software world, we've got unit testing and acceptance testing. I'll focus on unit testing of infrastructure first, and we've got a couple of tools, which I mentioned before: rspec-puppet, which we'll go through, and then serverspec for acceptance testing. And unit testing in Puppet sort of requires you to know how Puppet works. If I could just get a show of hands — who's used Puppet before? Most people, all right. Or maybe half, we'll go half.
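To give a flavor of what an rspec-puppet unit test looks like before going further: it compiles a Puppet class into a catalog in memory and asserts on the resources Puppet *would* manage, without touching a real machine. This is a generic sketch under the assumption of a hypothetical `ntp` class, not one of the Jenkins project's actual modules:

```ruby
# spec/classes/ntp_spec.rb — run with `rake spec` inside a
# module that has rspec-puppet set up
require 'spec_helper'

describe 'ntp' do
  # The catalog should compile cleanly, dependencies included
  it { is_expected.to compile.with_all_deps }

  # Assert on resources in the compiled catalog — no real
  # package is installed, no real service is started
  it { is_expected.to contain_package('ntp').with_ensure('installed') }

  it do
    is_expected.to contain_service('ntp')
      .with_ensure('running')
      .with_enable(true)
  end
end
```

serverspec is the complement at the acceptance level: instead of asserting on a compiled catalog, it runs against a real (or freshly provisioned) machine and asserts that the package actually got installed and the service is actually running.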