 Good afternoon. I'm Kevin Fox. We're going to be covering building delivery pipelines with Jenkins Pipeline as code. I wanted to thank you all for coming out for this presentation. This is the second time that I've done pretty much the same presentation in the Ohio Union. Every time I've been in the hardest room in the place to find and I've always been very glad that there were people who made the trek to find that room and come to the session. A little bit of information about myself. I'm the Enterprise Architecture Practice Lead for ICC. If you haven't heard of ICC, we're the largest privately held IT consulting company in the Midwest. We were in 2016 voted one of the best places to work in Columbus. Contact information is there, so if you end up having any questions following the session, feel free to reach out to me and I'll be glad to respond to you. Speaking of questions, I'm not going to hold time at the end for questions, so if we cover something you have a question on, just ask as we go and we'll try to address everything as we're moving along. I have a question for you, though. How many of you have actually worked with Jenkins already? Somewhat. Most of you are at least familiar with Jenkins and what it looks like and what it does and all that kind of cool stuff, I assume. Before we jump into talking about Jenkins, I do have a little piece of truth in advertising to deal with. I know this is a Python conference. However, there's really nothing in this presentation that's about Python at all. It is more generically about software development and so easily could be applied to Python, but as my Python knowledge is about this big, I really didn't feel comfortable trying to work that into the presentation and say something really stupid that showed my lack of knowledge. So I'm going to leave it to you to kind of visualize if you want to apply this to Python use cases. 
I'll kind of leave that to you, or if you have questions after the conference, I can help out with a little bit of the research that I did; I'll be glad to provide you any feedback that I've got. All right, so what is Jenkins? Most of you, or a number of you, said that you've actually seen or used it, but I do want to hit on a particular aspect of Jenkins and what it is. It used to be, for all intents and purposes, a build server. But what it has become more recently is more of an automation server. What difference does that make? Well, if we think about the way that we used to use build servers, it was generally along the lines of something like this, where either because somebody committed some code or because, hey, we had a schedule keying off once a day, once a week, whatever, we ran the build, we maybe did a few automated tests in there, and at the end we spit out some kind of a notification to let people know what the result was. And that was about the extent of what we expected Jenkins or any other build server to do. But things have changed. Most of you, I'm assuming, are probably living in something more of a DevOps type world now than you used to. And so we really expect that any time we commit code, there's at least the potential that code can make itself all the way out to production. And we expect our delivery or deployment pipelines to address all kinds of issues along the way. We do see that we have here the idea that we have our build and unit tests still going on. But by the same token, we now expect to be able to coordinate with our provisioner or deployer, whatever it may be, to deploy code out to various environments in a somewhat automated fashion. We expect to be able to run functional tests in an automated fashion. We expect to be able to do push-button deployments and testing in other environments, even for those things where we don't go fully automated.
So there's a much bigger kind of an ecosystem in which we're dealing than Jenkins used to have to deal with as a build server. And as far as Jenkins is concerned, it's expecting to fill that role of being the orchestrator that manages the entire process for us. So that being the case, we expect that Jenkins is going to have to do things a little differently than what it used to do. So what exactly makes the difference? Well, in the Jenkins 1 world, we did things a certain way. Now that Jenkins 2 is out, a few things have changed. I do want to highlight some areas where things have and haven't changed. In particular, let's say from an extensibility perspective. Well, Jenkins 1 always had a very robust plugin architecture, meaning you could extend the functionality of Jenkins to do pretty much anything you wanted to do. And if you go out and look, you'll find a boatload of plugins out there that worked on Jenkins 1 and still work on Jenkins 2. That was a very effective approach, and Jenkins 2 carries that over. From a scalability perspective, Jenkins 1 already had the ability to run builds on multiple nodes. So you potentially have a single master and multiple other nodes that are part of the process. You could scale your builds as far as you wanted to scale them. That was good. It's carried over into Jenkins 2. But if I wanted to be able to define a workflow that things would go through, what pushed code through that pipeline that we were just looking at, well, Jenkins 1 had the ability to have build definitions, and what the build definition could be would vary depending on which plugins you had there. You could do things like pre- and post-steps or post-actions, and it was never really clear what the difference between a post-step and a post-action was. You could clone workspaces so that you could share them between multiple builds that got chained together. There's a lot you could do, but it was very kludgy.
And there was never a single way to solve a particular problem when it came to defining your workflows. So as a result, this is the one part that did not carry over from Jenkins 1 to Jenkins 2, and it was replaced with a whole concept of pipeline as code. Now, what do we mean by pipeline as code? Well, from a pipeline perspective, it's that deployment pipeline we were just looking at. Jenkins is going to be the automation server, the orchestrator in that pipeline. We expect it to be able to participate in that. And code, well, just exactly what we would expect. We need to be able to define what happens in that pipeline in terms of code that gets written. So let's pull that apart a little bit and understand the implications of that. From a pipeline perspective, the key thing that Jenkins 2 has done is to define a standard way to look at the workflows that we define for those deployment pipelines. We have a whole pipeline abstraction where we can deal with things like the execution environment. Where does the build run? We want to have some control over that. We want to break a workflow up into a series of stages, so it's not just one big glob of stuff. We want to be able to define the steps in those stages. We want to have some level of flow control over all of that so that we can make sure that we deal with the conditions that exist and all that kind of good stuff. But beyond just the pipeline abstraction itself, we have things like a source control abstraction. As we saw, one of the key things that triggered a deployment pipeline is the idea that code gets committed into source control. There are all kinds of different source control systems out there, and Jenkins needed to come up with some SCM abstraction to deal with them; we'll see that in action as we go through some of our examples. And then there's visualization, and you see a very simple visualization there of being able to show what's going on in the pipeline. Not all options fit with that view of the world, but for right now, that's what you have.
Now, having selected GitHub, the first thing it's going to ask me is, okay, what GitHub organization are you dealing with? And I tell it. The next thing it's going to want to know is, do I want to create a single pipeline? Or would I like to maybe scan the entire organization looking for every repository that could be used or could house a pipeline definition? For the sake of this example, I'm going with a single new pipeline. And once I've done that, I tell it what the repository is. So I've got a simple hello world application out there sitting in a hello repository in the PyOhio Jenkins organization. I picked that; that's what it's going to use. Yes, go ahead. In general, the expectation, sorry, the question was, if you have multiple repositories each with their own Jenkinsfile, would Jenkins consider those a single pipeline or would those be separate pipelines? The expectation is primarily that those are separate pipelines. Now, having said that, you know, going back and using the standard Jenkins UI, you could probably get the other to work, but that's not really what the expectation is. And as we'll see in a moment here, there are some things that, if we do it the way Jenkins wants to do it, it will do some heavy lifting for us under the covers. And so you'll probably lose that benefit if you try to kind of work around their standard way of doing things. And we may look at some things that would help to alleviate that as we get a little further into the presentation. So what you may find is that's not really necessary once we look at some of the things Jenkins is willing to provide you when you let it have its way. Any other questions before we continue on this? Now, one of the implications of that question I haven't touched on yet, which is, okay, this is all great, but I really didn't tell Jenkins anything yet about what actually has to happen, right? So the expectation is somebody somewhere has got to tell it what to do, and that is my pipeline script.
That pipeline script is going to exist, just to prove to you that we actually did create something. Okay, there we see that we've got our pipeline for PyOhio Jenkins Hello. But going back to the idea of the script, okay, I've somewhere got to have that script, and the most likely place to have it from Jenkins' point of view is out in source control. And so here's my PyOhio Jenkins Hello repository, and you'll notice that I have a file down there named Jenkinsfile. And that is the standard file name that Jenkins is expecting to find. You can go change that, but again, if you do, you're probably not going to be able to use Blue Ocean; you've got to kind of go around behind the scenes to configure that. So for the most part, stick with the idea of using a file named Jenkinsfile in the root of your repository. Now, what do we really expect to have in that pipeline script? Well, this is a very simple, basic Hello World kind of thing, but out of it we have to see some key stuff. Like, for instance, that pipeline block, that is the outermost block of the whole thing, defines this as declarative pipeline. Remember we mentioned there was both scripted and declarative? Jenkins knows this is declarative because I put the pipeline block in there. If I were writing scripted pipeline instead, I'd have a lot more control over what the overall structure of my script is. One of the first things I find in there is where I define where exactly this build is going to run. Now, me being lazy, I basically say, well, run it anywhere you want to, which is what agent any means, and in a lot of cases that may be fine. You can do things like saying, hey, I want to run this on a node with a specific label; all kinds of options there. We'll even see a more complex option a little later in one of our examples where you've got a lot more control over the environment in which the script runs. The next thing I have is a block that defines essentially the work that I expect to happen within my script. It's the stages block.
And as you might expect, a block named Stages will have a series of blocks called Stage. These are essentially the big chunks of work or comprehensive units of work, if you will, that I expect to have in my pipeline. Think of them as the top-level steps. Within that, within each stage, I have a block called Steps, and of course in there I have the individual steps that I expect to carry out. This being a very brain-dead, simple kind of a script, all it's going to do is say, hello world. Yes, question. So the question was, can you define your own stages or is there like a predefined set that Jenkins expects? The answer is you can define whatever you want, right? Exactly, yes. They will run in this order. Now there are some things that you can do to control ordering and so on, and we'll look at some of that a little later on or if not so much ordering, at least whether or not a stage executes. If I am in scripted pipeline instead of declarative, I have a lot more control. I can do things in parallel. There's all kinds of cool stuff I can do. And not to say you can't do that in declarative, it's just a little more limited. But yes, my stages can be whatever I want them to be. But if you think about what I tend to do in a build, I may have a part of it that does the actual build, a part that, say, does unit tests. Maybe I'm doing some kind of code analysis. Going on, I may do that. I may produce documentation. I may run functional tests, whatever it is. Each of those is likely to be its own stage. So that when I look at the result of the build, I'm able to see the success or failure of each stage and tell what's going on. It comes back to normal, good coding principles of what things I combined together to be able to have a well-written program here. Okay, so I have this script, and once it runs, I'm going to see a result in the Blue Ocean UI showing me what's going on. So for instance, up here, I see this is the result for a specific run of my pipeline. 
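Pulling that structure together, the brain-dead hello-world script being described would look something like this; a minimal sketch of standard declarative pipeline syntax:

```groovy
// Jenkinsfile -- minimal declarative pipeline sketch
pipeline {
    agent any                    // run on any available node
    stages {
        stage('Hello') {         // one big chunk of work
            steps {
                echo 'hello world'   // the individual step
            }
        }
    }
}
```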
One of the things that we'll deal with a little later if we have time, but that ends up being a really cool feature that Jenkins provides you, is the fact that I'm actually, this pipeline applies to a specific branch in my source control, and it's telling me which branch this is. We'll see later why that matters. I also have a visual representation of the stages I went through. Now, if I think about my script, I only had one stage. The fact that it's green is good. If it were red, it would mean it had failed. If it were yellow, it would mean it was unstable. All those normal kinds of things. If I had a more complex workflow, I would see all the different stages lined out there and showing me what's going on. And then finally, I have down here, I think I have down here, let's see, there we go, all the different steps within the stage that I happened to have selected at the moment, and I can even see the log messages related to those specific steps right here. So I click on a step, it'll expand down, show me all the log messages. I don't have to go hunting through the entire log to find out, okay, what happened on that step in that stage? It's all right there. Now, that doesn't mean that that shows me everything that's in the log. There are actually some things that you may want to go elsewhere in Jenkins to see what's going on, but this gives you a good portion of it. And in fact, if I look under the surface, I go look at the actual console output, I'll see some cool things that are happening that didn't really get shown to me too much there in the Blue Ocean UI. So first of all, I see things like, well, I actually went to source control first to get the pipeline script and pull it out of GitHub, because when I defined the pipeline, I told it this was in GitHub, it was this organization, this repository, it knows its name, Jenkins file, it can go grab that on its own. Now notice it does that before it pulls down any of the rest of the code. 
I'm not getting my entire code base, which if I've got a large application, that's a good thing, right? I'm getting the script first. Once I do that, a build workspace will be allocated on a node to run the entire build, the entire pipeline, right? Now, what node that was on was controlled by my agent directive that I put at the top of my script. Once it's allocated the build workspace, it's going to go ahead and pull out the application code out of source control, and then I'm going to execute my step, and you can see Hello World and all that kind of cool stuff. And lo and behold, hey, my first Jenkins pipeline build was a success. Woo-hoo. All right, so, let's take a little closer look at some of the interesting things that I can do in a pipeline. Simple things initially. This shows my absolute unfamiliarity with Python because I pulled in a Java example to show you. Sorry, that's limitation in my background. Here's an example of where I created a stage named it Document, and this again goes back to the fact I can name them whatever I want to, put them in whatever order. There are my steps. One of the steps I have is to set the working directory, where I expect to find the stuff I'm going to build, for instance. Once I've done that, I actually have a step where I can say to run this with a particular Maven configuration, and I can put in a few parameters to define what that configuration ought to look like, and then I go ahead and I run a shell step to go ahead and execute Maven and do whatever goal or phase that I want to do in my Maven build. So this quick example of how we could build, say, a documentation stage into our overall pipeline. Very simple kind of thing. Remember how I mentioned that we had, well, we didn't necessarily have control over the ordering of the stages, except in terms of how we put them into the script, but we do have some control over, for instance, whether they run or not. 
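The Document stage just described might be sketched like this. The withMaven step comes from the Pipeline Maven Integration plugin, and the directory, installation name, and Maven goal here are illustrative assumptions, not taken from the slides:

```groovy
// A hypothetical "Document" stage -- assumes the Pipeline Maven
// Integration plugin is installed; names are illustrative.
stage('Document') {
    steps {
        dir('app') {                          // set the working directory
            withMaven(maven: 'maven-3.5') {   // use a configured Maven installation
                sh 'mvn javadoc:javadoc'      // run whatever goal/phase you want
            }
        }
    }
}
```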
So here's a stage called Analyze where I don't really want to do this unless the build has been stable up to this point. So I can put in a when block inside my stage and have an expression that's going to check the current build status and say, okay, I'm going to execute this stage when the build is stable. Jenkins is smart enough to figure out, well, if the build isn't stable at this point, I skip all the steps in that stage; I skip the stage entirely. So rather than my having to code in some kind of an if statement around my stage, I put the gate condition as to whether or not the stage executes within the definition of my stage itself. I have a similar kind of thing that I can do at the end of the build, and this is outside of the stages section; this is a separate section on its own toward the end of the pipeline, where I can define things that should occur after the build completes. So I don't have to put in a bunch of if logic all over the place to figure out, okay, when is the build going to complete, because there are a variety of different ways it could come to completion: stable, unstable, or failed, right? This is going to make sure that regardless of how I exit the build, certain things are going to happen. And in this case, I have a block that says what to do in case this is unstable. I could have a separate block for failed, a separate block for succeeded. I could do different things in each of those cases if I chose to. Very simple way, and it's nice for Jenkins to do all the heavy lifting for me to say, okay, well, I know that the build's done, let me run these steps. Sorry, yep. I already explained all that. Never mind. All right, now this is the one that I mentioned earlier when I talked about how we have, you know, control over where the build runs. And my initial example was I said agent any, right? So any available node, go ahead and run it there.
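As a sketch, the when gate and the post section just described might look like this; the step bodies are placeholders:

```groovy
// Gating a stage on build status -- only run Analyze while stable
stage('Analyze') {
    when {
        expression { currentBuild.currentResult == 'SUCCESS' }
    }
    steps {
        echo 'running analysis'   // placeholder for the real analysis steps
    }
}

// ...and, outside the stages section, a post block that runs
// regardless of how the build came to completion
post {
    unstable {
        echo 'build is unstable'  // e.g. send a notification
    }
    failure {
        echo 'build failed'       // a separate block for failure
    }
}
```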
And I talked about how we could actually do things like say, well, only run it on things that have a certain label. That label defines, hey, these particular nodes have macOS. If I want to do an iOS build of some kind, I want to run on the macOS nodes, not try to run it on a Windows node and have it blow up out from underneath me. So that's one of the things I can do with an agent. But this is even more interesting. This is one where I can take a particular Docker container definition and dynamically create a node based on that container definition to run my build. So I basically say, hey, use that image with those args, set up a new Docker instance for me, run the build. So the question was, in essence, where does it run? On the Jenkins node? Notice now at the bottom where it says no example. That's because I haven't really tested this out yet, so I don't really know the answer to your question. I haven't really played around with this functionality. If you send me your contact information, I will be glad to research that and get you an answer. Sorry about that. But yeah, I think there is somewhere in the Jenkins overall configuration, I think, where you define where essentially the Docker server is that you're dealing with. So I think that's the answer to it, but I'm not real comfortable with that. So yeah, we talked about dynamically provisioning. So those are some of the cool things you can do, but in the end it all boils down to those steps that we saw in the stages. If I'm going to write a meaningful pipeline, I've got to have some meaningful steps that I can execute. So what kinds of steps can I take? Well, the good news is that I'm not limited to things that Jenkins itself has defined. Steps can be and are defined in plugins. And there have been a bunch of plugins updated to support pipeline.
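The agent variations just mentioned can be sketched like this; the label and Docker image are illustrative choices:

```groovy
// Restrict the build to nodes carrying a particular label
pipeline {
    agent { label 'macos' }
    // ... stages ...
}

// Or dynamically provision a node from a Docker container definition
pipeline {
    agent {
        docker {
            image 'maven:3-jdk-8'            // image to spin up
            args  '-v $HOME/.m2:/root/.m2'   // extra docker run args
        }
    }
    // ... stages ...
}
```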
And just to give you an example, and I'm not going to expect you to read through all of these, nor am I going to try to describe them all, but just to give you the flavor of what we're dealing with: a whole bunch, and they do all kinds of different things. So there's a lot of cool stuff that you can do in your steps. But what do you do when your existing steps aren't enough? Is there anything you want to do that there's no plugin for, or you want to do it a slightly different way? Well, there are a variety of things you can do, but probably the most interesting thing to look at is for you to actually write your own script to do the thing that you want to do. And you'll notice that in our pipeline here we actually have a step called script, whose body then is a Groovy script that's going to get executed. Fairly simple stuff here: it just echoes a message three times. No big deal. While this gives you a lot of power, we need to keep in mind that, being good coders, we want to make sure we follow the DRY principle. I'm assuming that people at a Python conference know what DRY means, but just in case you don't: don't repeat yourself. In other words, if you think about this, and going back to one of our earlier questions, well, you know, can I build multiple repositories all through one Jenkinsfile? Well, the answer is really that I have Jenkinsfiles in each repository, which means I potentially have lots of Jenkinsfiles to deal with, and if I inline my code, I've got lots of different places that I'm potentially maintaining that code, and even if it's really great code and does really cool things, that's a really bad idea. Well, the people that produce Jenkins were smart enough to know that; they gave us shared script libraries. And shared script libraries are pretty easy to use.
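The inline script step being described might look like this; the message itself is illustrative:

```groovy
// An inline Groovy script as a step -- echoes a message three times
stage('Greet') {
    steps {
        script {
            3.times {
                echo 'Hello from an inline script'
            }
        }
    }
}
```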
The first thing to realize is that your shared script library, just like everything else that Jenkins deals with as code in some way, whether it's your application code or the Jenkinsfile itself, needs to be out in source control somewhere. So here we've actually created a scripts repository in the same GitHub organization. It wouldn't have to be there; it could be anywhere else I wanted to put it. But a couple things to notice about this. First of all, things like, okay, I've got a branch named 1.0. Why would I create a branch named 1.0? Well, like any other code, it's quite likely that I may need to use different versions of this library. The way that Jenkins is going to deal with those versions is through the branches I define in source control. The other thing to be aware of is that there is a standard structure that Jenkins is expecting to find in this repository. Now, it's a fairly simple kind of thing. First of all, I've got a folder called resources that can be used for non-Groovy files. Let's say I have a data file that one of the scripts relies on; that's where I'd put it. If I define Groovy classes, so something that follows a Java structure kind of thing, an actual class, it would go in the src folder. However, if I wanted to have some kind of a global script, I'd put it in the vars folder. For the most part, that's what I've found to be the most useful. Not to say that you don't occasionally create Groovy classes that you might use, but a lot of the work that you would expect to do in a script library is probably going to be in some kind of a global script in vars. Just creating this isn't quite enough. Jenkins has to know about this. It has to know where to find your scripts. We have this configuration that we would do, and there are a couple of different levels at which I could put this. I can do this globally, or I can do it at a folder level.
We haven't really talked about folders, but you've sort of seen one implicitly. That is, when we created our pipeline and we said I want it for the PyOhio Jenkins organization, and I picked a repository, all that kind of thing, Jenkins actually went ahead and created a folder for me called PyOhio Jenkins, so that all of the builds related to the repositories in that GitHub organization will appear in that folder. I can go and do this kind of a configuration at that level, so that I could maybe have one set of shared script libraries for one group of projects and a different set of libraries for another. A lot of flexibility there. Now I have to give my library a name, and we'll see why that's important here in a minute. I can also, it's up to me, define a default version, and that goes back to what we just talked about in terms of that branch within source control. If I don't anywhere in my code say which version of the library to use, this is the one that's going to get used. And then kind of the most important thing is I've got to tell it where in source control to find stuff. So I'm going to say, okay, GitHub, and what organization, and in the end what repository am I actually going to find my script code in. All right, so now that I've got all this set up, I want to use my library. Well, let's look at my Jenkinsfile where I'm going to use the library. First thing to note is I have this annotation up here where I reference the shared library by name. So when we said we had to give a name to our library, this is where it gets used, in the @Library annotation. Now, I don't necessarily have to do the next thing, but it's probably not bad practice to do so, and that's to actually call out which script or scripts I'm going to use through import statements. That helps to clarify in my Jenkinsfile exactly which scripts I'm depending on and so on. And then down here I'm actually calling my script as a step.
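A Jenkinsfile consuming the library might be sketched like this; the library name and version here are illustrative (the talk also adds import statements for the scripts it uses, which are omitted from this sketch):

```groovy
// Jenkinsfile using a shared library configured under the
// (hypothetical) name 'pyohio-scripts'
@Library('pyohio-scripts@1.0') _   // @1.0 selects the branch/version

pipeline {
    agent any
    stages {
        stage('Greet') {
            steps {
                threechairs()      // the global script, invoked as a step
            }
        }
    }
}
```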
So I'm within my stage and my steps section, and then I have a step that's calling a script. So what does that script look like? Well, remember how we said that we had the vars folder out in source control where we were going to put our global scripts. So I have a file called threechairs.groovy in vars, and my script was named threechairs. Okay, so that makes sense. Now, probably the most important thing to notice here is that I have in here a method that I've named call. And if you want to be able to use the script as a step, you have to name the method that. Now keep in mind it can have lots of different signatures, and we'll see that here in a second, but it's got to be named call. And then I can have Groovy code to do pretty much whatever the heck I want to do. The question was, are there any restrictions, and the examples given were calling a URL or shelling out to the shell. The answer is you can do all of those things that you mentioned. There probably are some restrictions, and they tend to relate to trying to do some things that touch the pipeline. If you tried to put, say, declarative script in here, it's not going to work. If you were to try to put a stage in here, which you can do in scripted pipeline, but if you tried to do it here, it's not going to work. So there are some things that aren't going to work, and there may be some other restrictions that I'm not aware of, but in general you can do all kinds of cool stuff. All right, so that was a simple one. Let's look at something a little more advanced. Again we see our @Library and our import, but down here we see we're calling our repeat message script, which we defined in our import, but we're calling it with parameters. Okay, so given that our call method was just call with no parameters, how do we do that?
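Under that convention, the global script might be sketched as follows; the body is illustrative, since the slides aren't reproduced here:

```groovy
// vars/threechairs.groovy -- a global script in the shared library.
// The method must be named call() to be usable as a pipeline step.
def call() {
    // plain Groovy: give three cheers
    3.times {
        echo 'Hip hip hooray!'
    }
}
```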
Well, in this case we've got our repeat message groovy file where we expect it to be, and we have our call method, but as I said, the signature can be whatever we want it to be. So we go out there and we define whatever parameters we expect to take in, and then we use them wherever we want to use them: I use times to control my loop, I use message as the string that I'm going to echo. So a lot of flexibility there, but it gets better. Suppose I want to do something that looks like this. Notice here is my script, called with a parameter to repeat, but the thing that gets repeated is actually a code block. How do I do that? Since I know I'm calling out to a script, right? I've got a code block I want to execute; how in the world do I pass that? Well, the good news is that Groovy provides me a type called Closure that represents a code block. So I define a code block parameter of type Closure on my call method, and then I just say body, and it executes whatever that code block is. A lot of flexibility. Cool stuff that's going on here. I would like to show you some of the things that you can do if you have really large Jenkins installations, where you have a lot of projects going on, dealing with pipelines at scale, and some of that uses scripted pipeline instead of declarative pipeline. I worked that into the other presentation I did, but I had half an hour more to deal with things, so sorry, I can't show you all of that today.
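Sketches of the parameterized and closure-taking variants; the file names and parameter names beyond what the talk mentions are illustrative:

```groovy
// vars/repeatMessage.groovy -- call() with parameters
def call(int times, String message) {
    times.times {
        echo message            // message is the string we echo
    }
}

// vars/repeatBody.groovy -- call() taking a code block as a Groovy Closure
def call(int times, Closure body) {
    times.times {
        body()                  // execute whatever block the caller passed in
    }
}
```

In a Jenkinsfile these would be invoked as `repeatMessage(3, 'hello')` and, using Groovy's trailing-closure syntax, `repeatBody(2) { echo 'inside the block' }`.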
I will, as I mentioned, provide you the URL to where that presentation is, so if you're interested you can go look at that. But what I can do in the time that we have remaining is look very quickly at something that's called multibranch support, and this is another one of those aspects in which, if you let Jenkins do what it wants to do, you get some really cool things that it handles for you. So let's go back to my example, my hello world thing. Yeah, it's great that I had my master branch, but in real life, you know, if people are using Git or GitHub, they're going to be creating branches all over the place, and I probably don't want to go in every time the developers create a new branch and have to change Jenkins to build that branch for them. That would just be painful, right? So I've got my new branch called PyOhio sitting out there. What do I have to do to get Jenkins to build that? And the short answer is nothing. Now notice that in my pipeline, PyOhio Jenkins slash hello, I've got a branches view, and notice also that it automatically picked up and built my PyOhio branch. Now you might ask, well, how did it do that? What it does is it goes out and it scans source control looking for new branches. Or is it using webhooks? It can use webhooks to be notified of a commit, but I'm pretty sure it needs to poll for the scanning that it's doing to look for new branches; that's from having viewed the logs that are involved. Now, to some degree that may be a limitation of the environment that we've set up, because of how we're set up there's no easy way to do webhooks into our Jenkins, so we don't have that configured, so I don't want to get too adamant about that. But what I've seen is it goes out and it polls, and with GitHub that can actually be an issue because you have quotas for API calls. Exactly. And so you may actually get into situations where, in essence, your scanning gets paused until your quota refreshes. So yeah, that can be a
bit of an issue. There's another question? Yeah, so the question was: what happens if the Jenkinsfile changes on the PyOhio branch? The answer is that it's really just like any other code change. Jenkins will recognize that as a commit to that branch, which will trigger a build on that branch. As we saw in the console output, the first thing it does is check the Jenkinsfile out of source control; it picks up whatever the current configuration of the Jenkinsfile is for that branch and runs it. So you actually could have, and I'm not saying this is a good idea, different contents in your Jenkinsfile in each branch. Now obviously you expect that to converge, right? You're not really going to go on like that over a long period of time; just as you would merge any other code change in, you would expect to merge your Jenkinsfile change in. But yeah, you have a lot of flexibility there in terms of what happens.
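To make the mechanics concrete, here is a minimal sketch of the kind of declarative Jenkinsfile a multibranch job picks up on each branch. It is illustrative, not the demo's actual file: the stage names, the echo steps, and the release branch naming convention are assumptions; `when { branch ... }` is the declarative directive for making a stage conditional on the branch being built.

```groovy
// Hypothetical Jenkinsfile committed at the root of the repository.
// A multibranch pipeline job discovers this file on every branch it
// finds and runs whatever version is committed to that branch.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // BRANCH_NAME is set by the multibranch job per branch.
                echo "Building branch ${env.BRANCH_NAME}"
            }
        }
        stage('PyOhio only') {
            // The declarative 'when' directive limits a stage to certain
            // branches, which is one way to vary behavior per branch
            // without maintaining divergent Jenkinsfiles.
            when { branch 'PyOhio' }
            steps {
                echo 'This stage runs only on the PyOhio branch'
            }
        }
        stage('Release deploy') {
            // Branch matching can also use a regular expression, similar
            // in spirit to the branch filters in the classic UI.
            when { branch pattern: 'release-\\d+', comparator: 'REGEXP' }
            steps {
                echo 'Deploying a release branch'
            }
        }
    }
}
```

Because this file lives on each branch, different branches can carry different versions of it until they merge, which is exactly the flexibility described above.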
The good news is this was all for free. If you remember, back when we defined the pipeline, we actually had the option to scan the GitHub organization for any repository that has a Jenkinsfile. Combine those two features, the ability to scan for any repository and the ability to automatically recognize new branches, and you have some pretty hands-off capabilities. As your code base grows, there's a lot that Jenkins will do for you, so you're minimizing the amount of administration you have to do. Now keep in mind this is all in Blue Ocean, and Blue Ocean takes the rosy view of the world. If, for instance, there were certain branches that I didn't want to build for whatever reason, I could go into the standard Jenkins UI and put in regular expressions to limit which branches get built, and all kinds of cool stuff like that. But from a Blue Ocean perspective, it's expecting to build everything it can find. I had the five-minute notification about two minutes ago, so I am going to ask if there are any other questions. Right, so the question was about comparing the Jenkins 1.0 way of doing things with this approach, and what the main benefits are. Yes, to some degree it's that Jenkins will go find things for you, but in the long run it's the ability to exist within that pipeline metaphor: you want something that lives in source control, not externally, to define your build for you, because that becomes every bit as much a part of your configuration as your code does, or your infrastructure as code, or whatever else. Very often we used to have the problem that if, say, we were going to try to take a build and put it on somebody else's environment, I had to go figure out how to export the Jenkins configuration, take that over, and import it. This is now out in source control; it's part of my application, and everything that happens is
right there. That simplifies a lot for us and helps us stick to our standards a lot better. I think we have time for one more question, real quick, whoever had it. So the question was how Jenkins pipeline as code compares to some other tools that have a similar concept. Sorry, I haven't had experience with those, so I really don't have a good understanding of how they compare. We're down to the last second here, so if you do have more questions, feel free to catch up with me. My contact information was out there earlier; you may well have forgotten that. The thing not to forget is that URL up there, the PyOhio Jenkins GitHub organization, which has all of the examples we went through. I will take this presentation and put it out there as well. There will be a resources repository out there by end of day tomorrow, with the presentation as well as those other links, and the link to the presentation I did back in May, where there is some information about using Jenkins pipeline as code at scale. Well, thanks very much. I appreciate you sitting through this, this late on a Sunday. Hopefully this was helpful to you. Feel free to reach out to me if you have any further questions.