So my watch says 8 a.m. Pacific Standard Time, so it's time to get started with our Jenkins online meetup. This meetup was really pushed for by our events officer, Alyssa Tong, so if you enjoy the content, whether live or after the fact on YouTube, please make sure you thank her for organizing.

Today we have a few sessions that were presented in the Jenkins project booth at Jenkins World earlier this year, from contributors across the spectrum of the Jenkins project. We have Mark Waite, whose work you have most certainly appreciated as the maintainer of the Git plugin. We have Keith Zantow, who works on the Blue Ocean project; Andrew Bayer, a longtime community member, former board member, and grumpy old man, who works on Declarative Pipeline; myself, Tyler Croy, working on some of the Jenkins project infrastructure and doing a lot of work with Pipeline; Jesse, the maintainer of the Pipeline plugins and a lot of other great tools that are part of what I'd call the modern Jenkins ecosystem; and Liam Newman, who works with me at CloudBees as a technical evangelist, also working a lot on Pipeline. Finally, we'll close out with Alexandru Somai, one of the Google Summer of Code 2016 students we hosted this year, who created and maintains the External Workspace Manager plugin. In general we've got a lot of good content, and I'm sure we'll post the slides after the fact to the meetup page. Let's go ahead and get started with Mark, if you've got your screen presented.

Yeah, thanks, Tyler. So what you should see is "Jenkins hints for large Git repositories." This is really a story about something you wish you didn't have to do. Large Git repositories are pretty commonly a sign of Git misuse: Git really wasn't initially designed to track large binaries, which get stored in the repository and just sit there forever. Admittedly, there are some cases where a large Git repo isn't a sign of misuse; the Linux kernel, for instance, is really large, over a gigabyte. But in the general case, if you've got a large Git repo, your first questions should be "why?" and "what can I do to prevent that?" That said, Git is able to handle large repos; it just has some cases where it's a little slower because they're large. The reality, though, is that misuse or not, it doesn't matter: we have to get our jobs done. The example in my case is at work, where I have a 20-gigabyte Git repository. Yes, I wish I didn't have it, but it's business critical and we have to deal with it. The history and the reasons for that repository are long and varied and, in fact, irrelevant; it's just a case I have to deal with, and other users are like me, having to deal with large Git repositories.

There are some techniques you can use to make your life a little easier with these monster repositories. The first guideline is to use command-line Git. Command-line Git is more careful about managing its memory than the Java implementation, JGit. JGit has certainly improved over time and is viable now for large repositories, but command-line Git just does a better job of it. The other thing you can do is be mindful of your data transfer and disk space choices. In terms of reducing data transfer, you can use a shallow clone, and you can avoid fetching tags.
You can also use narrower, more specific refspecs. To spare disk space you can use a reference repository, and a shallow clone will also save some disk space. In working directories, you can use sparse checkout as a way to reduce size, or you can use Pipeline's stash and unstash. Each of those techniques is well supported inside the Git plugin and can give you some help. Let's look at each of them individually.

A reference repository is a bare Git repository that can be used as a destination for pointers from other repositories. All you do is open up the advanced clone behaviors section of the Git plugin configuration on your job, and on the line that says "path of the reference repository" you enter the absolute path to that reference repository. It works on Windows and on Linux. It sets up pointers so that instead of every clone taking its own dedicated copy of the entire history, all of the history points back to a single copy on your disk. If new changes arrive, they'll be brought into the working repository. It works really well, and has worked well for years, so a reference repository is a good way to save some space.

Shallow clone allows you to limit the amount of history you bring into the current repository. That limitation is done by asking the remote server for only a certain number of commits, the number you specify. Again, that's an advanced clone behavior: you check the shallow clone checkbox and then tell it the depth. Choose small numbers there; if you start getting into double digits for the clone depth, its utility drops significantly because it starts copying more and more data. If you need the full history, just get the full history and use a reference repository; if you don't need full history, a shallow clone is a good way to reduce some of your disk space use. Now, if you're running an old operating system, for instance Red Hat 7, CentOS 6, or SUSE, where they've got older Git implementations, you may have to update your command-line Git version to support push from a shallow clone. Most of the Git versions out there now already support pull; push from a shallow clone was only added in Git 1.9.

Another way to save space: don't fetch tags. Tags create references, and when those references are needed, the referenced data will be copied. So if you check the checkbox that says "do not fetch tags," you can reduce the amount of data you transfer from your Git server to your working directory.
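As a rough sketch of how those clone options might be combined in a Pipeline checkout step: the repository URL and reference path below are placeholders, and the exact extension names can vary a bit between Git plugin versions.

    // Hypothetical example combining a reference repository, a shallow clone, and "don't fetch tags".
    checkout([
        $class: 'GitSCM',
        branches: [[name: '*/master']],
        userRemoteConfigs: [[url: 'https://example.com/big-repo.git']],
        extensions: [[
            $class: 'CloneOption',
            reference: '/var/cache/git/big-repo.git', // bare reference repository on the agent
            shallow: true,                            // shallow clone...
            depth: 1,                                 // ...keeping only a small amount of history
            noTags: true                              // do not fetch tags
        ]]
    ])

The same options map onto the checkboxes in a freestyle job's advanced clone behaviors section.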
Another way to reduce the data you transfer is to narrow the refspecs. This is a little more complicated; you'll need to read the online help that's available next to the refspec field, which gives you some suggestions. The typical technique is, if you're working with a single branch, to modify your refspec so that it only asks for the references for that single branch. That lets the Git server send you only the references for things that are in the history of that branch. Now, in order to keep compatibility with previous behaviors, you have to choose the checkbox "honor refspec on initial clone." That checkbox is intentionally unchecked by default, because there are some use cases in the Git plugin's general use model where honoring the refspec on the initial clone will break the use case. So you've got to make the explicit choice to say, "I'm going to narrow what I want to retrieve," and then check the "honor refspec on initial clone" checkbox.

There are other ways to reduce the size of your working directory. You can use sparse checkout; this is for those cases where you have a large repository but the operation you're going to perform in your Jenkins job only needs a subset of the working tree. In my case, for instance, if I only need the src directory and none of the other directories, I can do a sparse checkout of the path src and ignore everybody else's directories in my repository. If you're stuck with a repository that has a bunch of big binaries hiding in a bin directory, you can exclude that directory, and with sparse checkouts you can even exclude and include on a per-file basis.

Another technique for small chunks of data is Pipeline's stash and unstash capability. If you're working with a large repository and you need a tiny piece of it on many different agents, you could check out the large repository once, stash the little subtree you need in your pipeline job, and then unstash it elsewhere. That saves you the time of doing a full clone onto every one of your agents. I should warn you that this does require some data transfer to go through the master node, so you're trading performance by bringing data through the master; be sure you're not overwhelming your master with what you're copying.

Now, the other topic we had, in addition to large Git repos, was that as of Git plugin 3.0 we added submodule authentication. For a number of years we'd had requests: please give us the ability, when working with Git submodules, to actually authenticate access to the submodule repositories. As of about September of this year, submodule authentication was added in Git plugin 3.0 and Git client plugin 2.0. It does require that you use the same protocol, HTTP or SSH, for both the repository and all its submodules, and it then uses the exact same credentials for the repository and for its submodules. The idea was to keep the user interface simple, allow users to do what they need to do with submodules, and still give us what we need for a simple, maintainable plugin.
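To recap the narrowed refspec and sparse checkout options in Pipeline checkout form, here is a rough sketch; the URL, branch name, refspec, and paths are placeholders, and exact extension names may vary by Git plugin version.

    // Hypothetical example of a narrowed refspec plus a sparse checkout of just src/.
    checkout([
        $class: 'GitSCM',
        branches: [[name: '*/master']],
        userRemoteConfigs: [[
            url: 'https://example.com/big-repo.git',
            // only ask the server for the single branch we care about
            refspec: '+refs/heads/master:refs/remotes/origin/master'
        ]],
        extensions: [
            [$class: 'CloneOption', honorRefspec: true],   // honor the refspec on the initial clone
            [$class: 'SparseCheckoutPaths',
             sparseCheckoutPaths: [[path: 'src']]]         // only populate src/ in the working tree
        ]
    ])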
That's really all I had to talk about today. Are there questions from the Hangout? Since there are no questions, great; then I propose, Alyssa, let's go on to the next group, the next presenter.

Great, thanks, Mark; great stuff to know, for sure. What I'm going to talk about today is Blue Ocean, and I think a lot of people are familiar with it at this point, but just to recap a bit what we're doing: my name is Keith Zantow, I'm on the Blue Ocean team, and I've been involved with this project for some time, helping define what Blue Ocean is and also implementing it. So, a quick recap, and then I'm going to show a demo as well.

Why Blue Ocean? The Jenkins UI traditionally has not always been the nicest to use, and Blue Ocean is not really just about making the existing Jenkins UI prettier, or just trying to reuse what's there. It's really about rethinking how developers can use Jenkins to do their jobs more effectively, especially as it pertains to CI and CD: how we can expose information to make things much easier to use, and additionally how we can make Jenkins align with modern workflows. That means things like using Git, using branches, and a modern workflow that uses pull requests, for example. All of this is really enabled by the feature set that Pipeline brings, and because of that, Blue Ocean is here to bring Pipeline in as a first-class citizen in Jenkins and, more importantly, to make Jenkins a pleasure to use. Whether it's Pipeline or anything else, the main thing is to make Jenkins as easy as possible for people to get their jobs done.

Some of the things Blue Ocean has, and some of these may have existed in Jenkins in one form or another, are the foundations for how Blue Ocean aims to make things better. One of them is personalization: basically bringing to the forefront of Jenkins the things that are pertinent to you. What's in Blue Ocean today is a way of favoriting items of interest, which gives you a very easy way to build your own dashboard for the jobs, and in particular the branches and pull requests, that you might be interested in seeing. It also gives you very quick access to the build results.
This was one of the things I was kind of confused about: why it took longer than one click to get to a failure. That's one of the things that Blue Ocean, especially with the personalization, makes very easy to do.

The other thing is the Pipeline feature set: it supports a bunch of different things, including parallel branches and stages, and the traditional Jenkins UI made it difficult to visualize what the pipeline actually looked like, especially in terms of parallel branches. So that's one of the most important things, both during execution and after a run is completed: showing how the execution happened across parallel branches, as well as showing specifically which stages failed. That makes it very easy to navigate logs, and if you've used Jenkins quite a bit, you know that's one of the most important things to be able to do in order to figure out what's going on. With Pipeline, logs sometimes come from different places, and this is just a way of making them easy to get to.

Additionally, we've been working on a pipeline editor. This is going to be coupled very closely with the declarative pipeline that I believe Andrew is going to talk about later, but the idea here is to lower the barrier of entry for people using Pipeline. Even if you don't necessarily want to go and write scripts, you can still use the features, the resiliency, and the other things that come with Pipeline, and the idea is to support adding pipelines much as Jenkins supports freestyle builds and everything else that came before.

Really quickly, there are some ways you can get more information on this. You can try out the beta of Blue Ocean; it's in the Update Center now. There are people using it and reporting issues, and the way to do that is just the normal Jenkins issue tracker, which has a Blue Ocean component. We're also using it ourselves to build Blue Ocean, eating our own dog food, so to speak, so you can certainly go and check it out there. And there are a couple of repositories you might be interested in if you're a developer: the Blue Ocean plugin and the pipeline editor.

So, what does Blue Ocean look like? Here is the basic Blue Ocean UI. You'll see I have a kind of bare-bones Jenkins instance here with a few different things: a freestyle project, a project that has a bunch of pull requests that we use to show how failures and other things work, and another multibranch demo which uses Docker to build things. As I navigate this UI, it corresponds to the traditional UI in pretty much the same way; there's a folder structure. But what Blue Ocean does a little differently is bring forward the things that are pertinent to modern development, so branches and pull requests are made into first-class citizens. You'll see that with this GitHub-managed multibranch project, I can go and look at the pull requests and their statuses directly.
Or I can go look at the branches as well, and you'll see there's a series of things that pass and fail; that's intentional on this particular project.

Another thing Blue Ocean does is make things as live as possible. If somebody pushes a pull request, a new branch, or a change, that's going to get built automatically, and Blue Ocean brings that right into the UI in a live manner. So what I'm going to do is make a quick change to the master branch on this project. Give me a second to pull up my Eclipse. I've got some different failing tests here; I'm going to ignore this one, save, commit, and push. I know you're all laughing at me for using Eclipse, but that's just what I do. This is set up to poll on a one-minute interval; it's not set up with pull request or push notifications just because it's a local server, so we'll give that a minute to do its thing.

Meanwhile, I'm just going to navigate around. You'll see that the way this is set up, with Blue Ocean you get a pipeline visualization up at the top, and you can click down into each of the stages. When things are in parallel, they'll appear up and down, and I'll show you that in just a second. You can also click to go to each of the logs immediately. Now, this particular one didn't actually fail in the middle of a step, so let's see. All right, you'll see that just navigating around, I actually got things building. Again, I didn't really fix all my tests the way I should have, but if you look here we've got four failures and three skipped, and I think I ignored one, so it looks about the same. That's the gist of how things automatically update on your screen; there are also some notifications that you'll see pop up at various times.

Now, the other thing is, if you're interested in branches, let's say I'm working on this master branch, I can just go ahead and favorite it, and back on your dashboard it'll put a card up at the top which lets you very easily run a build, rerun a build, go directly to the set of changes, and go directly to the latest run. In this case that's just what I committed, and that's what we see there. The thing about this is that it makes it very easy to build a dashboard for yourself. If I wanted to favorite some different things here, maybe a couple of different branches I'm interested in, then again, it's a very easy way to get whatever particular branches you care about right up there. If they're building, you'll see things running right there, and you can go right to the run. This particular build runs a parallel series of things followed by more steps, and you'll see I've got one failure here on the OS X build, so that's something I can go and figure out how to fix. But the thing is that you can also go directly to the failure.
If you click through from the card, you go directly to the failing log, and it automatically opens up the failure node, which is incredibly useful: you can navigate straight there and not have to dig down through things to see what happened. So that's a quick overview of the Blue Ocean UI.

There's one other thing we're working on that I'd like to give a quick demo of, which is the pipeline editor. This is not really fully functional yet, but you can see the way it works. Basically, with the pipeline editor you can define parallel nodes, and much like the visualization shows, it runs things in parallel before proceeding to the next series of steps. You can go and add various steps, say allocating a workspace or adding a shell script step, and build up the pipeline with the various steps that are available to pipeline scripts themselves. It just puts a nice way of visually representing those things on top, and an easy editor you can use to edit them. Like I said, this is a work in progress, but it's going to be out there in a fairly short time frame. That's all I have to demo today. Are there any questions?

Keith, there are a couple of questions. One question from Daniel: are there any plans to make re-running a failed pipeline step available in Jenkins OSS? I really miss this feature from the Build Pipeline plugin.

So, the thing about Blue Ocean is that it only supports whatever is available in Pipeline today. As far as I know, there's not a way to restart a pipeline step in OSS Jenkins at the moment; Jesse might be able to talk about that as well. But there's not really going to be anything Blue Ocean does on its own to add functionality for that. Other questions?

Yeah, another question: if I want to have a master and slave nodes as Docker containers, where are the actual build tools located?

That's a good question. Is that in regards to the pipeline editor, or to general Jenkins? I have no idea if it's about general Jenkins. Yeah, that's a separate question entirely, I think. Right; the build tools are not in Blue Ocean at the moment, so they're just in the normal Jenkins configuration, the system configuration. Other questions?

We'll take another one, a question from Tyler: is the pipeline editor something end users should be testing yet, or is it still too experimental?

It is not really ready to be tested on, but you're more than welcome to check out the project and build it yourself. It's something that's actively being worked on right now, so stay tuned.

So, Keith, I lied: one more question. Is it possible to switch back and forth between Blue Ocean and the normal UI on a per-user basis?

Right now Blue Ocean is just a particular URL that's available in Jenkins, so any user can use either one at any time.
There's not really anything making it take over the normal UI. We've got a /blue URL, so in the normal Jenkins, if you install the Blue Ocean plugin, there's a button to open Blue Ocean, and it just opens the Blue Ocean UI. They're available independently.

Okay, thanks, Keith. Sure thing, thanks a lot.

I guess I'm next; thanks, Keith. So I'm here to talk a bit about declarative pipelines, which are actually tightly involved with the editor: the editor generates declarative pipelines. Declarative is out now in beta; the Pipeline Model Definition plugin 0.7.1 works with core 2.7.1 or later. The full 1.0 release is planned for February, but it does work; it's just that we don't guarantee we won't change syntax between now and then. Anyway, let's take a look. The intent here is to have a structured way to define pipelines, with simpler default behavior, the upcoming editor support, and more early validation.

So if we take a look at an example here... Is that big enough, or do you need me to blow up the fonts? Yeah, increase the font, please. Yep. Here we've got a real-world-ish example where we're defining the tools we need installed, where the job will run, then our stages (in this case we only have one stage), then the steps that run inside that stage, and finally what happens after the build runs. I'll get into most of this later, but let me just cover the stages right now. You can have as many stages as you want, all within the stages block. Each stage has a steps section, which is arbitrary pipeline script. There are some restrictions on the syntax you can use, so that it conforms to a syntax the editor can comprehend and generate, but there's also an escape hatch you can use to do whatever pipeline script you want: a script block.

Normally the Git checkout would happen automatically, from the same repo the Jenkinsfile came from; here we're building something from a different repo, so I'm explicitly doing a checkout. I'm verifying that the environment contains the tools I installed, and I'm running mvn clean verify. Then, when my build is done, even if it failed before it got to that point, like due to a compilation error or something else, I'm always grabbing the JUnit reports and archiving the artifacts. So this is a small but practical example.
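As a rough reconstruction of the kind of Jenkinsfile being described: the repository URL, tool names, and report paths are placeholders, and the 0.7 beta syntax on the slide differed slightly from the 1.0-style form sketched here.

    // Hypothetical reconstruction of the example: tools, agent, one stage, and a post section.
    pipeline {
        agent any
        tools {
            maven 'maven-3.3.9'   // names as configured in Global Tool Configuration
            jdk 'jdk8'
        }
        stages {
            stage('build') {
                steps {
                    // Building from a different repo than the Jenkinsfile, so check out explicitly.
                    git url: 'https://example.com/some-other-repo.git'
                    sh 'mvn -v'              // verify the installed tools are on the PATH
                    sh 'mvn clean verify'
                }
            }
        }
        post {
            always {
                // Runs even if the build failed earlier, e.g. on a compilation error.
                junit '**/target/surefire-reports/*.xml'
                archiveArtifacts artifacts: '**/target/*.jar', allowEmptyArchive: true
            }
        }
    }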
Now let's look at agent in more detail. Agent is one of the required sections: you always need stages and you always need agent. Agent any is the equivalent of "run on any label," and it means that all the subsequent stages, unless overridden, will run on the same agent in the same workspace. If we went with agent none, which I'll talk about a little later, this example would fail, because you can't run shell steps when you don't have an agent.

Next: if you've got a situation where you're going to be running on different nodes, or for whatever reason you want to control the agent you're using on a per-stage basis, that's easy enough to do. You can specify agent none at the top, in which case anything that doesn't have an explicit agent section will not run on an agent, and then here everything in this one stage will run on an agent with the "some-label" label on it.

Then there are other options for agent besides any, none, and label. Right now we also have Docker, which will pull this image, use these optional Docker args, and run the rest of the build within that container. You can also optionally specify the label on which to run, because it still has to run on an actual agent on the back end; by default it will behave like agent any, and you can also configure a default at both the global and per-folder level. In addition, we've got the ability to do the same kind of thing with Dockerfiles. When you use dockerfile true, it looks for a Dockerfile in the root of your source repository, builds that Dockerfile, and then runs the rest of the build inside a container from that image. If you specify something other than true, like Dockerfile.alternate here, it will use that alternate file instead, so that you can have multiple Dockerfiles for different builds, and so on.
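A rough sketch of those agent options, written in the later block-style syntax; the image, args, labels, and filename are placeholders, and the 0.7 beta demoed here used a shorter shorthand (agent docker:'...', agent dockerfile:true) rather than this block form.

    // Hypothetical per-stage agents: one stage in a pulled image, one built from a Dockerfile.
    pipeline {
        agent none                                    // no default agent; each stage picks its own
        stages {
            stage('in-image') {
                agent {
                    docker {
                        image 'maven:3-jdk-8'
                        args '-v /tmp/cache:/tmp/cache'   // optional docker run arguments
                        label 'docker-host'               // which agents may host the container
                    }
                }
                steps { sh 'mvn -v' }
            }
            stage('from-dockerfile') {
                agent {
                    dockerfile {
                        filename 'Dockerfile.alternate'   // instead of the default Dockerfile
                    }
                }
                steps { sh 'make test' }
            }
        }
    }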
I mentioned post earlier, so let's take a look at that in more detail. Post works both at the top level, in which case it's stuff that's evaluated at the end of the entire build, and in an individual stage, alongside the steps block, in which case it's evaluated at the end of that stage. When we get to post, it evaluates the build status against the conditions you've specified. Here I've got always, meaning always run; success, meaning only run for a successful build; and failure, meaning, strangely enough, only run for a failed one. There's also unstable, and changed, which runs if the build status has changed. Those are what's currently implemented, but this is an extension point, so more can be added. If the build condition is satisfied, it runs the block, so you can do mail notifications here, or Slack notifications, or cleanup if you've got failures, or deploy, or whatever it is you need to do. And again, that can run both after the full build and after an individual stage.

Now let's look at the ability to specify parameters, build triggers, and job properties. It kind of does what it says on the tin. It's up at the top level; you can specify the parameters with their default value, description, and name. The method names here come from the symbols on the parameter definitions, and some of those have changed in more recent cores. As I mentioned before, we've got validation, so if you put in an invalid parameter type you'll get an error saying, "hey, I don't know what that parameter type is; here are the possible parameter types." I'll show the validation from the CLI later, but it also runs at the beginning of the build, before anything actually executes. For triggers, it's a similar thing: right now cron and upstream are the only ones that work on versions of Jenkins earlier than 2.22, and after that pollSCM also works. And job properties; these two are probably the ones you're going to run into most often: buildDiscarder, which controls rotating builds to keep only so many or for only so long, and disabling concurrent builds. Again, this all gets validated, as I'll get into a bit later.

Something noteworthy: yes, you can use parallel within your stages; however, if you have a parallel in your steps block, you can't have anything outside the parallel in that steps block. So a stage can either be a list of steps or a parallel block. We've got some ideas that may come in the future that could add other interesting possibilities in terms of parallelizing across stages, but we're not there yet. The syntax for parallel is basically the same as anywhere else you've seen it. And oh god, that indentation is terrible.

Let's look at environment, which again is something you can specify at the top level and at the per-stage level, in which case it overrides anything specified earlier. Here I'm defining an environment variable FOO at the top level and overriding it in one of the stages, so in the first stage FOO ends up with the top-level value, but in the second stage it ends up with the stage-level value, because we overwrote it there. There's some other magic with environment and credentials that I'm not going to bother getting into now, but it's nice.

Another interesting use case you may run into is that you don't want to run a stage unless a condition is met, and right now our solution for that is when, which takes a block of pipeline script that should return a boolean, either true or false. If it returns true, the contents of that stage will run; if it returns false, that stage's contents will not be executed. We're still working on the exact syntax for when; it's already out there in the wild, but we're getting feedback and getting a better sense of what might need to be tweaked there.
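A small sketch of the environment override and a conditional stage; the variable values and the condition are placeholders, and this is written in the later expression-style form of when rather than the raw script block the 0.7 beta used.

    // Hypothetical per-stage environment override plus a conditional stage.
    pipeline {
        agent any
        environment {
            FOO = 'bar'                               // top-level value
        }
        stages {
            stage('first') {
                steps { sh 'echo $FOO' }              // prints the top-level value, 'bar'
            }
            stage('second') {
                environment {
                    FOO = 'baz'                       // overrides the top-level value for this stage only
                }
                when {
                    expression { env.BRANCH_NAME == 'master' }   // the stage is skipped unless this is true
                }
                steps { sh 'echo $FOO' }              // prints 'baz' when the stage runs
            }
        }
    }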
Andrew, there's a question that came in for you: are post-build cases run in a fixed order, or in the order they are defined?

They are run in a fixed order. Always is evaluated first, then changed, then the actual build statuses. Since the status can only be one of success, failure, or unstable, and I forget what the order is among those three, it doesn't really matter, because only one of them can be true.

You can go on; that's the only question. Actually, hold on, another one: how is a when-skipped stage displayed in the UI?

That's something that has yet to land in Blue Ocean, but it should land pretty soon. It was waiting on the 0.7 declarative release, but Blue Ocean will be visualizing "skipped due to failure" and "skipped due to when," because we actually still technically execute all the stages so that we have a consistent graph, and it will visualize those in a distinctive manner. I don't know exactly what that will look like because I haven't looked at that UI, but it's not in Blue Ocean quite yet; it will be soon.

I realize I should also mention that you can explicitly supply the label you want to run on: here we're again overriding the top-level agent and saying agent label, run on some arbitrary label. And tools, which I mentioned before: this does not work with Docker, because the Jenkins tool installations don't work inside the Docker stuff, so it will only work when you're running on a regular agent. Here I can specify the name of one of the tool types (again, if you put in an invalid one, you'll get a validation error listing the available ones), and then I put the version string we configured when we set up Maven in the Jenkins Global Tool Configuration, and it will now automatically install Maven and put it on the path.

Now let me switch what I'm sharing and show you what the validation looks like. This actually just landed recently. You'll get similar output if your build fails due to a validation error, but you can do this ahead of time using the CLI. So here, let's take a look at the first demo and make sure it actually is valid. Cool, it's valid; that's good, because if it wasn't I'd be really disturbed.

Andrew, could you please increase the font there? How's that size? That's as big as I think I can get it. I'm not able to see anything. Oh, yeah, I think that's better. Okay, give me a second here, I had to shrink my window. Okay, there we go. Is that all right, can y'all see this? Yeah, it's much better.

All right, so now let's take a look at what happens when there's a failure. In this first case, we're trying to run the timeout step, but we didn't supply a number for the time; we supplied a string, which is pretty obviously wrong. So when we ran the validation we got an error that gives the line where the error was, the column, and why it was wrong: that we were expecting an int for that parameter but instead got gibberish. Next we've got a case where the timeout step requires parameters, but we don't have any specified; let's see what error we get there. Aha, it notes that it's missing a required parameter. Next, as I mentioned, you're required to have the stages block and the agent; you need those two, and all the other sections are optional. So what happens if you don't have the agent specified? It tells us what we did wrong. And the last one: we also require that you actually have stages inside your stages section, so let's see what error that spits out. There it is: no stages specified.
There are obviously a lot more errors that can be reported that I'm not showing here because, well, time. But we work hard to make sure we've got useful, meaningful validation, and that it catches as many errors as possible before the build gets going, rather than having to wait and discover, "oh, I had a syntax error or the wrong parameter type three levels down in the build; I ran most of the build before I found I should have had arguments where I didn't." So that's something I'm really happy with, and I'm hoping it will be helpful for you.

All right, any other questions? Doesn't look like it. Thank you all very much. Like I said, this is available now: Pipeline Model Definition is the plugin name in the Update Center, and it only works on core 2.7.1 or later. It's bundled with Blue Ocean, though that is currently an older version. I'm looking forward to your feedback, and looking forward to seeing how you all end up using it when we go 1.0 in early February. Thanks.

All right, thanks. I'm going to do a couple of presentations now, because I volunteered at Jenkins World to do two demos, so I'm going to do two demos back to back. The first thing I wanted to talk about is using Docker within Pipeline. When I first started working with Jenkins Pipeline, I didn't quite believe much of the Docker hype, but as I've gotten more and more familiar with Docker and Jenkins Pipeline, it's become an invaluable tool in just about every Jenkins instance I use.

So first let's talk about Docker. I'm not going to go into the details about Docker; there are infinity presentations on YouTube about what Docker is, why, how to use it, and so on. But let's look at some useful Docker images in the context of Jenkins. One of the useful things about Docker is that you can use it to package up dependencies so they're consistent and immutable. For most projects I might have, I'm going to need the JDK, so OpenJDK 7 or 8; I might need Maven, or Golang, Ruby, Python, and so on. Pulling these images gives me an official, well-maintained release of, let's say, Python that follows the upstream project and is always consistent. If I pull golang 1.7 on 100 different machines, they're all going to be running the exact same code, which is very useful.

So let's use this in Pipeline, and the most contrived example I can think of is building a simple Java application. For pretty much the history of the Jenkins project, this has been the easiest demo to talk about, because Jenkins does a pretty good job of it. If you were starting out with Pipeline, let's say implementing a simple Jenkinsfile to just build this contrived Java application, we'd have a couple of steps we need to call, all of which are documented on jenkins.io. We'd have node, which gives us an executor and a workspace. checkout scm grabs the appropriate version of the source tree; if you have a pipeline that's using poll SCM or SCM triggers, this makes sure the pipeline run has the right revision of your source tree. And then we're just going to run Maven, which generates JUnit reports and actually does all the work. You'll note that we're not implementing a lot of logic in the Jenkinsfile; we're just calling out to Maven and then working with the results. Then, of course, aggregating our test reports with the junit step gives us a nice reporting view and tells us which tests failed, as opposed to just a big red ball that says everything's broken.
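A minimal sketch of that contrived Jenkinsfile; the Maven goals and report path are placeholders.

    // Hypothetical scripted Jenkinsfile: checkout, build with Maven, aggregate test reports.
    node {
        checkout scm                               // grab the right revision of the source tree
        sh 'mvn clean install'                     // Maven does the real work and writes JUnit XML
        junit '**/target/surefire-reports/*.xml'   // aggregate the reports into Jenkins' test view
    }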
To implement this very simple, contrived pipeline, you need a couple of system requirements. You need a JDK on the node; of course, if you're running Jenkins, you're using at least a JVM, though it might not be the JVM version you need. But you also need Maven to exist on the path for the Jenkins agent that's executing on that node, and depending on what the project (in this case a Maven project) describes, you might need other requirements for the build and the test execution. So before you can even run this in Jenkins Pipeline, you need to satisfy a couple of system requirements first.

There are some tools that exist in Jenkins already that help make this a little easier and a little more automated, and one is the tool installers, which Andrew alluded to a bit earlier. Tool installers have been around in Jenkins for ages and ages, and you can use them from within Pipeline with the tool step. Here I'm assuming I've got a "jdk8" tool and a "maven3" tool, and I'm just pulling those into my environment. When I specify this tool step, Jenkins automatically sets up JDK 8 and Maven 3 on the node I'm executing on. This gives me some good reuse; it reduces the requirements I have on a specific node or agent in my Jenkins environment. But it does require that tool installers are configured ahead of time by a Jenkins administrator, and they have to have the names "jdk8" and "maven3." A sort of soft requirement on top of that is that if I'm the developer authoring that Jenkinsfile, I have to know all of the names of the tools available to me, and that might not be something I can view by default in Jenkins if it's particularly locked down. And every time I create a new project or iterate on my tools, let's say I upgrade from JDK 8 to JDK 9 and want to build for that as well, those new tools require the Jenkins administrator to set them up in the environment before I can use them.

Can you increase the font size even bigger than this? Yeah, when it comes across YouTube Live it's not as big. It's almost as big as it gets. You're showing your whole screen, so it should be just showing this window. It's better; it's much better, Tyler. Okay; this just means I can't see anything else.
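A sketch of that tool-installer approach; the tool names "jdk8" and "maven3" are placeholders that would have to match whatever an administrator configured.

    // Hypothetical use of the tool step to provision a JDK and Maven on the node.
    node {
        def jdkHome = tool 'jdk8'       // names must match tools configured by an administrator
        def mvnHome = tool 'maven3'
        withEnv(["JAVA_HOME=${jdkHome}", "PATH+MAVEN=${mvnHome}/bin"]) {
            checkout scm
            sh 'mvn clean install'
        }
        junit '**/target/surefire-reports/*.xml'
    }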
So the requirements for this approach are a little bit better, because we don't have to set up the node or a new agent in our environment as before, but we still have to know some things ahead of time, and we have to annoy the Jenkins administrator whenever we need new stuff. So we can refactor this a little bit more with Docker. This example Jenkinsfile uses the Docker Pipeline plugin (let me make sure I got the name correct), and the Docker Pipeline plugin adds a new global variable into my pipeline scope. Before, where I had node and checkout, that was all great; now I can also use docker.image and specify an image name and a tag. There are some Docker technical details worth looking up separately, but we'll grab a tag of the Maven image that has JDK 8 and is running Maven 3, and this gives us an image object. On that I can call .inside and pass it a block, and what the Docker Pipeline plugin does is run my steps in the context of an ephemeral Docker container. So when this runs, it stands up a container, and in that container it runs mvn clean install; at the end of the block, once it terminates, that container is stopped, or rather destroyed. And after that I can run my usual JUnit stuff.

Now, there's some really interesting Docker stuff that goes on behind the scenes. One of the questions I see a lot in IRC and elsewhere is: what file system is being used, or how do I get files from inside the container to outside the container? By default, when docker image inside runs, it actually maps the current workspace to the working directory in the container. The reason I have junit outside of that block is that the build generates reports in target/surefire-reports and so on, but because we're mapping the workspace directory into the container, I can then access those same exact files outside the context of the Docker image; in essence, they are the same files. This is very useful if you have specific dependencies that might generate a binary that you then want to work on outside the context of the Docker image.

The requirements we have in our Jenkins environment for this are just that the node has a running Docker daemon, and that the Jenkins agent, or the user the Jenkins agent is running as, can access and use that Docker daemon. You can also do interesting things like connecting Jenkins to a Docker Swarm cluster, all sorts of fun things, but at the fundamental level this just requires a Docker daemon. Then, as a developer creating a Jenkinsfile, whatever container I need to build my project, to build my Java application, I can go grab it from Docker Hub, or I can build my own container that contains my build requirements.
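Pulling that together, a sketch of the build stage using docker.image().inside; the image tag is a placeholder.

    // Hypothetical build inside an ephemeral Maven container.
    node {
        checkout scm
        docker.image('maven:3-jdk-8').inside {
            // The workspace is mapped into the container, so files written here
            // are still present after the container is destroyed.
            sh 'mvn clean install'
        }
        // The reports live in the mapped workspace, so they are readable outside the container.
        junit '**/target/surefire-reports/*.xml'
    }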
I'm a very very big fan of functionality because I'm no longer having Different build requirements for different workloads in my Jenkins environment And that's good But that's just building and one of the nice things that Jenkins pipeline is it makes it a lot easier to describe and model your build test and deploy pipeline So let's talk about testing and and using docker There's some useful docker images that that come into play with testing for most of the projects that I've worked on They use, you know, some data store or some, you know, something external from a web application to store data And when I need to run my acceptance tests, whether that's selenium or, you know, some functional tests with rails or something like that It's preferable to have an actual live version of redis or pro running in order to to run my tests So using that in pipeline like with our same contrived simple now The like dumbest possible way to do this I need redis to run my test would be to manually run the redis server and Don't ever do this, please This is a bad way to do it But I'm showing it to sort of demonstrate a point There's been a lot of different ways to use like supervisor or process control daemons from within Jenkins to stand up services for acceptance testing and then tear them down and things like that But in this case, I'm just expecting in my maven test target that a redis server is running and then running my test And this has a lot of uh system requirements as you might expect. It's the most painful It requires that redis is installed on the node And one of the the sort of real world implications of this is as you grow a Jenkins infrastructure If you have two teams that need two different versions of a data store and in my case At my previous job We needed some machines to have one version of my SQL and other machines to have another version of my SQL installed and so you have to sort of Change your Jenkins environment to handle these application version dependencies and then you've got to manage the upgrades of them and labeling and all of these things and It can be very difficult to maintain it and to grow with So this is where docker in pipeline comes in and it becomes really really useful Building on the example before we're going to reuse the maven container And we're still using the docker pipeline plugin But this time we're using the dot with run method And when I use with run That gives me a block and inside of this block Essentially redis is running in the background And then when we get to the end of this with run block that redis container will be terminated And so i'm i'm doing a little bit of a Clever docker usage here. 
So instead of me as a Jenkins administrator dictating what versions of applications are available for running acceptance tests, or for doing more complex testing, the developer can just say: whatever service I need to run in the background, I'm going to run it in a container with withRun, do whatever I need, and destroy it at the end. And if you imagine extrapolating a little further: if you build your own containers, you can also build them to start with the appropriate test data or the appropriate schemas already defined, and really optimize the speed at which you can do your acceptance testing. I think that's pretty good.

Now, the last segment of the continuous delivery pipeline we're implementing with Docker is the delivery part. Delivery gets to be very organization- or person-specific, so let's just say, for the sake of argument, that you're able to use Docker for delivery. There are some useful Docker images that come in here: these would be images that you create, your application, your back-end application, your front-end application, whatever, and we can start to use Jenkins Pipeline to build and deploy these applications.

So let's go back to our contrived Java application. The original version of our release might look something like this, and the real magic happens in a trigger-production-deploy.sh script. This assumes that we build some artifacts and then have some script that ships those artifacts off to wherever they need to go. Most deployment scripts I've ever seen follow this pattern: something builds a thing, and then we have some Bash scripting or Ruby scripting, Capistrano, Fabric, or whatever, to actually execute the deployment. There are a couple of problems with this. In the destination environment you're going into, you have to know your dependencies ahead of time: to run my JVM-based application I need the JVM on the target machine or machines; I might need Ruby to be installed.
I might need Python to be installed, and so on. There are lots of tools that help you manage this, Puppet, Chef, and so on, but you end up maintaining these infrastructures in parallel: you have your application, which needs Ruby 2.3 and the JVM and these other things in order to run, and then you might have, in a separate repository, the Puppet or Chef code to make sure your production machines have all of that in place. And because we're using plain shell scripting, the actual orchestration, the actual doing of things for that deployment, happens outside the scope of Jenkins, which reduces the utility of Jenkins as the hub of the continuous delivery pipeline, the dashboard everybody looks at to see where code is in the delivery pipeline.

So we can refactor this a little bit, and we'll use the Docker Pipeline plugin again, because that's the theme of this presentation. This assumes there's a Dockerfile in our source tree; using the Docker Pipeline plugin we're building an image, giving it a tag of whatever our build ID is, and then pushing it to Docker Hub. This is fairly simple. The benefit of this approach is that when I build this container image, I can then use it just like I was using images in the previous stages, the test stage and so on. And whatever container I build is immutable: if I run my tests against it and then ship that container to production, whether that's using Kubernetes or Swarm or Elastic Container Service, that same container is going to run the exact same way, because it's an immutable artifact we can work with.

So if I refactor this a little more: I build my container and push a version of it, so after this image.push() line, Docker Hub or my configured registry has that version. I can then use the same tooling I had before to run tests against that built container. So instead of saying "I built an artifact, let's YOLO-deploy it to production," I can run the container and test that it actually does what I think it should do, whether that's using Serverspec or InSpec or some other test suite, to make sure the application behaves the way it's supposed to behave in production. Then I push that container under a new tag, in this case the conventional "latest" tag, which is very common on Docker Hub, and trigger my production deploy script. And instead of what I might have had before, where the production deploy does a whole bunch of nonsense orchestrating Puppet or driving a lot of machinery, all this needs to do is say: go talk to Docker Swarm and deploy the latest tag of my application, because I've tested it and it looks good, so everything in production should be using it. And that sort of wraps up the continuous delivery pipeline using some of the Docker Pipeline functionality in Jenkins.
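A sketch of that delivery flow; the image name, registry, and deploy script are placeholders.

    // Hypothetical delivery stage: build an immutable image, test it, promote it, deploy it.
    node {
        checkout scm
        def app = docker.build("myorg/myapp:${env.BUILD_ID}")   // assumes a Dockerfile in the source tree
        app.push()                       // the registry now has the build-specific tag
        // ...run acceptance tests (Serverspec, InSpec, etc.) against the image just built...
        app.push('latest')               // promote the tested image under the conventional 'latest' tag
        sh './trigger-production-deploy.sh latest'   // e.g. tell Docker Swarm to roll out 'latest'
    }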
Are there any questions before I move on to the next presentation, Alyssa? Is there anything else I should answer? No, I think you're good; you can move forward. Now it's time for presentation two of two. Actually, Tyler, one question just came in: will Jenkins clean up images and containers on the node?

It will not, and that's kind of the pain. Alvin Hong and I were actually talking about this in IRC yesterday. There are ways: when you do a docker image inside or withRun, you can pass flags that get passed to the docker run command, like --rm, or --no-cache if you're building an image. One of the things I've started to do is make my Jenkinsfiles a little more well-behaved so they clean up after themselves, but you can't guarantee that every pipeline in your infrastructure is going to be well-behaved. So having some utility jobs that run every day or every week and go through agents cleaning up cached or unnecessary images can be useful. If you're using an ephemeral agent, let's say you're using the Azure VM Agents plugin or the EC2 plugin or whatever to provision a machine on demand, then you just don't have that problem, because the VM gets torn down after some period of time anyway. But right now you're responsible for cleaning up the containers and images that your pipelines might leak onto running agents.

Tyler, another question: if I want to have the Jenkins master and slave nodes as tasks in a single Docker Swarm service, where are the actual build tools located?

If you're deploying out to something like Docker Swarm: if you look at the Docker plugin wiki page, for running a Jenkins agent as a container, a containerized agent, there are some runtime requirements you need to satisfy; you actually have to run the agent code in that container before you can do any real work. The Docker plugin wiki page describes some of that, and it has a Dockerfile you can copy and build off of. Any other build tools you would either have to build into the container you're creating (say you're creating a custom Jenkins agent container, you'd bake things into it) or pull in with the tool installers, which have some caveats of course, within your pipelines. The downside of running containerized agents (for the Jenkins project infrastructure, for example, we don't run containerized agents, we run containerized masters) is that it can be difficult, or a little bit hackish, to get custom containers to run the same way I was demonstrating here for different dependencies. There are ways you can hack around that, and it kind of involves mapping the Docker socket into your containers so you can provision these containers outside of your containerized agent. But honestly, I think you're better off having different classes of agents in your infrastructure, some running on VMs and some running in containers. If you have a label that's "maven" in your environment, and that's just running containerized agents that are the Maven container, those can run Maven builds really quickly. But if you need to run custom containers, or enable developers to use custom containers, personally I think it's better to just have an agent with a Docker daemon running on it. That's it.
Tyler, no more questions.

All right. So one of the other things that I find people asking about, a pretty common discussion topic in the Jenkins IRC channel and on the Jenkins users mailing list, is going from quote-unquote "freestyle pipelines" to real Jenkins pipelines. In this little presentation I wanted to highlight some patterns that I think work well for moving from the old way of defining a build, test, deploy pipeline in Jenkins to using actual Jenkins Pipeline.

The problem, and anybody who's been using Jenkins for complex problems is probably already aware of this, is that in Jenkins 1.x, if you wanted to create a pipeline for your project, you were really creating a series of jobs. You might have a build job, a test job, maybe another test job, and then a deployment job, and the way you cobbled these together was by using the trigger-build post-build action, saying "build this job after this other one," so you'd have projects that daisy-chained onto each other. The problem with this is that you don't have any centralized view or overview of what's going on. There are some plugins and radiator-type dashboards that help make sense of these daisy-chained jobs, but you still lose a lot of the benefit of a cohesive view of the entire pipeline. You might have console output in the build project, the test project, and the deploy project, all in different places, and when a developer or somebody else on the project wants to see why or where something failed, they've got to spend a lot of time digging around in the Jenkins UI to figure out where something went wrong.

The solution, obviously, is to use Jenkins Pipeline, right? Jenkins Pipeline provides a very useful step called build, which I think is not often appreciated by people who are new to Pipeline. build gives you the ability to trigger other projects in a Jenkins instance, and they don't necessarily need to be freestyle jobs; they can also be other pipelines. So build lets you describe an overall orchestration pipeline. Where before I might have had these daisy-chained projects, let's say that's where I start, time zero. If I want to migrate this build, test, deploy thing over to Jenkins Pipeline, the first thing I do is create an orchestration pipeline. I'm not going to actually change any of the freestyle jobs that exist, but I'm going to describe a Jenkinsfile that runs the build project, then the test project, and then the deploy project, with the parameters they need, to drive that existing freestyle pipeline. All of a sudden, with one commit to the source tree, I have a single source of truth for the orchestration of this freestyle pipeline that I haven't had up until now. If you're using tools like Job DSL or Jenkins Job Builder you might already have a versioned definition of the jobs themselves, but with the Jenkinsfile we can start to refactor, do some really interesting things, and move away from a series of daisy-chained jobs. So we commit that Jenkinsfile, and let's refactor again.
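A minimal sketch of what such an orchestration Jenkinsfile might look like; the job names and the parameter are placeholders.

    // Hypothetical orchestration pipeline that just drives the existing freestyle jobs.
    build job: 'build-project'
    build job: 'test-project',
          parameters: [string(name: 'ARTIFACT_BUILD', value: env.BUILD_ID)]   // whatever the job expects
    build job: 'deploy-project'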
One of the things that I try to do with almost all of my pipelines is describe stages — this syntax is actually wrong, by the way, there are supposed to be parentheses here — but stages in Jenkins Pipeline aren't actually going to change the runtime behavior of this, so it's the same exact behavior as before. By putting stages around these different parts, though, I can start to have that dashboard: I can commit this and then use a visualization like the Pipeline Stage View plugin, or I can use Blue Ocean, and even though I still have my freestyle job pipeline thing underneath, I can use newer tools like Blue Ocean to start to visualize and understand what's going on in that pipeline.

Over time, I would hope, if you're going from freestyle jobs to Pipeline, you would want to start moving the contents of those freestyle jobs into your pipeline. By defining stages in the way we defined them in the previous couple of slides, what we can then do is take the build stage — at time zero it was just a "build project" freestyle job — take what was happening before in that build job, and just put that script into Jenkins Pipeline here; I'm building on some of the Docker Pipeline stuff I talked about earlier. The nice thing about refactoring this stuff into the pipeline is that we can start to use tools like stash that we don't really have an analog for in freestyle quote-unquote pipelines. A common pattern for freestyle jobs is to use the Copy Artifact plugin or something along those lines to shuffle artifacts between the different freestyle jobs that compose the pipeline. But Pipeline comes with the stash and unstash steps built in, which allow us to save files from one part of our Jenkins Pipeline for reuse later, and there are a couple of other plugins you can use that will let you move information and data around your pipeline in a way that you just don't have with a freestyle, quote-unquote, pipeline. One example that I don't think I actually have in this deck: if I'm defining a variable here, or if I have a computed file, I can read that into a variable and reuse it later. If I wanted to accomplish something similar with freestyle jobs, I would have to make things highly parameterized between my freestyle jobs, which is a pattern I'm sure most well-traveled Jenkins administrators are familiar with — you gradually make these freestyle jobs almost Turing-complete with, not a hundred parameters, but a number of parameters that curry data from one part of the pipeline to the other. But because we're in Jenkins Pipeline, we have the whole context from beginning to end, and we can just share variables and data between one stage and another.

So in my build stage I might stash the actual built artifacts. I'm also archiving them out of habit — I might want to grab them later outside of the context of the running pipeline. Then my test stage just needs to unstash that, and these can run on different agents in your Jenkins environment: my build might be running on one of my VMs, and the test stage doesn't necessarily need to run on that same VM — I can stash the app code that was built and grab it back down when I need to run my tests. And then I can also refactor out the deploy stage to unstash that same code — let's pretend this is running on a different node — and then build my container and deploy my container.
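Here is a rough sketch of where that refactoring ends up: real stages, the build stage doing its work natively, and stash/unstash carrying the artifacts between stages that may run on different agents. The labels, paths, and shell scripts are placeholders, not the demo's exact code:

```groovy
stage('Build') {
    node('linux') {
        checkout scm
        sh './build.sh'                          // whatever the old build job used to do
        stash name: 'app', includes: 'build/**'  // save the built artifacts for later stages
        archiveArtifacts artifacts: 'build/**'   // keep a copy outside the running pipeline too
    }
}
stage('Test') {
    node('linux') {
        unstash 'app'                            // not necessarily the same agent as the build
        sh './run-tests.sh'
    }
}
stage('Deploy') {
    node('linux') {
        unstash 'app'
        sh './deploy.sh'                         // e.g. build and push the container here
    }
}
```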
Going through each of these different stages, I could still have my build stage implemented in Pipeline while my test stage, for at least revision one, is still calling the test project, and I can slowly, over time, refactor the contents of the build job, the test job, and the deploy job into these different stages in Jenkins Pipeline. And then I can commit all of that, and I've got a single source of truth for the full pipeline. The really great thing about that single source of truth is also that anybody else on the project who comes in, whether they're a new hire or moving from a different team — it's self-evident what the build, test, and deploy pipeline looks like for this project. In my own personal experience, I've invariably ended up as the one person on the team who knows how the Jenkins pipeline was set up: I just knew which jobs did what and how they triggered each other. By committing the full pipeline into our source tree, we now have a record of when and why we changed things, and anybody else coming into the project can get started and make changes as necessary in a way that they couldn't before. So that's a high-level overview of going from freestyle quote-unquote pipelines to Jenkins Pipeline. Alyssa, do you have any questions, or should I tee up the next speaker? We'll give it a minute in case some questions come in, but there's been a lot of discussion on IRC. If nothing else, I'm happy to be provocative — I'm sure Patrick is saying all of my suggestions are wrong, don't do that. All right, I think we can set up the next one; I think we have Jesse next. Yes — thank you for listening. Here is Jesse.

All right, so today I would like to talk about libraries. In a lot of the examples that we've been seeing today, the Jenkinsfile, or the pipeline script that you're using, is basically only a few lines of code. There's not really much to separate into different sections or to reuse between jobs or anything like that. But it can often happen that you have something that's pretty complicated that you might want to share across different jobs, and you don't want to have to be copying and pasting big chunks of code between jobs if you're defining complicated functions. Now, the example that we'll be making use of follows roughly the lines of the kinds of things Tyler was talking about, in terms of going through some stages where we're doing a build, then testing against that application, which has been deployed to a temporary staging server, and then finally deploying it to production. So in this example I have a bunch of places where I want to send something off to a server — in this particular demo it's actually running against a local server installation — and here I have this block that says withDeployment. So I'm going to take a temporary application id, and then I want to do some stuff while that's deployed, and here I have a variable that is going to refer to that temporary application, and as soon as the block exits, I want to tear the application down. I'm using this in the parallel section of this block, so I'm going to run two sets of tests in parallel.
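As a rough sketch of that shape (not Jesse's actual library), a closure-taking step like this can live in the shared library's vars/ directory, with deployApp and undeployApp standing in as hypothetical helpers:

```groovy
// vars/withDeployment.groovy in the shared library (a sketch, assuming hypothetical
// deployApp/undeployApp helpers defined elsewhere in the same library)
def call(Closure body) {
    def appId = "app-${UUID.randomUUID()}"   // temporary application id
    deployApp(appId)                          // stand up the temporary staging deployment
    try {
        body(appId)                           // run the caller's block against that deployment
    } finally {
        undeployApp(appId)                    // always tear the temporary server down again
    }
}
```

From the Jenkinsfile side, usage then looks roughly like `withDeployment { appId -> parallel(integration: { ... }, smoke: { ... }) }`, with the teardown guaranteed by the finally block however the tests end.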
So let's try running that. If you watch this start to run — so first it's doing some build parts, and then we'll just switch to the textual view so you can see the details of what it's doing. So it's starting to run a build, using a stash to save a temporary artifact, and then it starts running tests. The way it works is that it creates a temporary server id — so this is a temporary application, and if you actually go to this URL you can see that it has actually deployed our application to a temporary URL — runs some tests against it, and as soon as those tests are done, it's going to tear that server down. So in order to do this, we want to have a way to abstract all of this functionality about deploying stuff to servers, and we also have a couple of other places where we're going to deploy to a staging server and deploy to a production server.

The way that I've structured this is that I'm using a library in my pipeline script. I started off the pipeline script by declaring a library that I want to use: I have a name for it, and I can also import particular classes and things like that. (Some people are saying that the audio is cutting out — sorry about that.) This library is defined on this folder — it's all part of one project folder — and in the folder's configuration screen there is a section, along with everything else, where you can define pipeline libraries that are available inside it. So I give a name for the library — call it "servers" — which I can use to refer to it from the pipeline script here. I say what version of the library I want to use, because this is actually a separate Git repository. So if we go back here: this is the repository with my actual project, and this is the repository that has my library in it. Here I have a class that's defined in it, in the master branch, so you can see that there is a version we specify, and we get some validation inside Jenkins to check whether that's a real version or not — it said yes, it is a real commit. And then we give connection information: here I'm saying look it up with Git, it's supposed to be loaded from this location, I can specify credentials if I need to, and whatever kind of Git behaviors I want. So when you go to the build, after it loads the Jenkinsfile, it then goes and loads this library as well — you can see it's checking out this revision from master, the revision that we have up here — and it's going to be made available throughout the build.
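The declaration at the top of the Jenkinsfile can look roughly like this, assuming the library was configured under the name "servers" and pinned to the master branch; the imported class name is a placeholder for whatever lives in the library's src/ tree:

```groovy
@Library('servers@master')
import org.example.servers.Deployment   // hypothetical class from the library

node {
    // steps and classes from the library are now usable alongside the normal pipeline steps
}
```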
So in my library scripts I can define some functions that I want to have available in the course of the build: I can deploy an application, I can undeploy it, and I can define more complicated things in Groovy. So here I can take a Groovy closure, which is a block, and I'll define a temporary name for an application, deploy it, run the contents of the block, and then undeploy the application at the end. So when we ran through, we did a bunch of deployments and some tests, and you see at the end of the block, for each parallel branch, it's undeploying that application for us automatically.

I'm also using another library in this job. This time I'm configuring it from Manage Jenkins, Configure System — the global configuration. This is a global library, and these are automatically available to anything that's running in the system. Here I'll give this one a different name. This is also coming from a separate Git repository. So this time it's coming from a global library, and in this case, not only is it available to anything running on the system, but it's been configured by an administrator, which means that we trust that this library can do anything. So it's not subject to the usual security restrictions on the actual code in jobs, and we're taking advantage of that by making calls to Jenkins internal APIs. So here, there's a need in some cases to look at the results of the JUnit plugin, to see what test results it recorded previously, and we can get access to that information from the Jenkins APIs — but this isn't normally accessible to Jenkinsfiles, because this rawBuild is denied for security reasons. From this global library, though, we can look at it, and we can use any kind of Groovy code we need to call those Jenkins APIs. This is a getPassedTests method, so you can call those APIs in a simple form. So from my Jenkinsfile I have "record results", and then I just have this method available within my Jenkinsfile, and it looks like it could be a step or something that's just built into Groovy, but it's been defined for me by the Jenkins administrator.

You see that I've checked this box, "load implicitly". This means that I did not have to ask for this library: the administrator said this library is just going to be predefined for everyone running any pipeline job on the system. The administrator can also say whether jobs are only allowed to use the current master version, or whether they can pick a different version of the library if they want to — if they want to pin their job to a particular known-good version of the library, they're allowed to do that. So when you run this, we get some information back about the different test cases that passed or failed or were skipped within this build, and you can take further decisions based on those. Instead of having the whole build stop as soon as you have a test failure, you could check to see whether more than 10 percent of the tests failed and then do something different than it normally would, et cetera. So that was a fairly simple example of a global library that's doing some trusted code.

Another thing you can do is use the Grape system that's built into Groovy, so you can actually pull in any kind of code that's out there — it's using Maven, in a way. So this Grape demo — this is not included in the live demo, but it's something you can browse offline. Here, as part of a global library, I'm saying that we want to pick up the following artifact, and this is a Java library that you can get from the Maven Central repository.
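The shape of such a library file can be sketched roughly like this; the commons-math coordinates are used here only because that library happens to ship a primality check, and they may not match the demo's exact code:

```groovy
// e.g. vars/isPrime.groovy in a trusted global library: Grape fetches the jar onto the
// master the first time this runs and makes its classes available to the library.
@Grab('org.apache.commons:commons-math3:3.6.1')
import org.apache.commons.math3.primes.Primes

def call(int n) {
    return Primes.isPrime(n)
}
```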
So when you run that kind of code, it's actually going to download the jar file onto your Jenkins master, if it's not already there, and make everything inside it available for use by this global library. So if for some reason — it would be kind of strange, but if for some reason — you wanted to be able to check whether a number is prime inside your build, and you didn't want to have to write that from scratch, you can just use a standard utility method for it. And here we wrap up — so this global library, you can see it loading: "Loading library test-result at master". Even though I didn't explicitly ask for it in this particular job, it's going to automatically load this library in every pipeline job that's running on the system for me.

One other thing that is useful to look at — oh, here, let's go ahead and get to that — also just a quick note about the global libraries: if they're defining variables, they do show up after the first build under the Global Variable Reference for that job. So you can see that we have this test-results variable, or function, that becomes available, and you can define some documentation to go with it; this just comes out of a text file that you keep alongside it. One other interesting thing to notice: if you click the Replay link — which is a pipeline feature for trying a build with a modified script — you not only get to modify the Jenkinsfile, but you get to modify any untrusted libraries that you loaded as well. I still haven't figured out a safe way to let only the right people replay trusted libraries, so for now you can't do that. But say I wanted to put some sort of prefix onto my applications — rather than just a UUID, I want it to start with "tmp" — I can try making that change. So let's get this change in effect; it should only take a minute. Let it go through the build part, and then you see that it's putting this "tmp" prefix — oops, that scrolled up too much — it's putting this "tmp" prefix onto my applications, so now I have a different kind of URL for my temporary application. And if all of that works and I like what I see, and I want to make this a permanent part of the library, I just need to go to the diff link, and it shows you a change you can apply to your library so that you can make it permanent and try using it in other jobs as well.

So that's the end of the prepared presentation, but I know there were a bunch of questions on IRC, so let's see what the first question to take would be. Someone is asking whether everything still needs to be non-CPS or serializable. Yeah, so this is a sort of advanced topic in terms of libraries that are really defining any kind of pipeline functionality that deals with special values. Any pipeline libraries by default run the same way as any other pipeline script, so everything that they work with needs to be serializable. And if you pass in a reference to the script, then you can call pipeline steps from these functions by default, so it's just like a function you kept in your main Jenkinsfile. But if you're writing functions that might deal with unserializable objects — like this rawBuild in this case, or this AbstractTestResultAction, any of this stuff — then, just like in a pipeline script, you need to encapsulate that in a safe block with the @NonCPS annotation.
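A sketch of that pattern, assuming a trusted global library and using illustrative rather than the demo's exact API calls, keeps the unserializable objects fenced off behind @NonCPS and returns only plain values to the pipeline:

```groovy
// In the trusted global library: read the recorded JUnit results off the raw build.
@NonCPS
def failedTestCount(build) {
    def action = build.rawBuild.getAction(hudson.tasks.junit.TestResultAction)
    // Only an integer leaves this method, so the pipeline never has to serialize
    // the TestResultAction itself.
    return action ? action.failCount : 0
}
```

From a Jenkinsfile you would then call it with the currentBuild global variable, for example `if (failedTestCount(currentBuild) > 10) { ... }`.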
So the idea in that code is that this argument is serializable — it's the currentBuild global variable — and this result is just a map to a list of strings, so that's also safe to pass. And Martin also asked about things using Grab: it's the same situation, and in this case it's just taking an integer and returning a boolean, so there's no problem. But if you needed to work with something that had unserializable types in some libraries, then yes, you would need to wrap it in a @NonCPS block. So that's a good reason why you would actually define a global library that uses Grab: to provide a pipeline-safe version of some kind of functionality.

Any other questions that came in? I don't think so. Okay, sorry, there was — oh no, that was from earlier. Okay, I think that's all. Thank you.

Okay, thanks, Jesse. Sorry, I was just — everything went quiet. Oh, I am in the room, okay. So I'm going to be talking about adding notifications to pipelines. Here we have my basic pipeline that I've set up against a project called herman; let's take a look at that real quick. Let me use Replay to show you what I already did — this is what I actually ran just a minute ago in Pipeline. So here we have your basic properties, I wrote out the logs, I've added a stage called build, grabbed a node, checked out from SCM, and run a few shell scripts, archived the result, and published an HTML report. Ah yes, make the text bigger — I can do that. Oh geez, really? That's about as big as I can make it, so there you go. This is the shell script I use to do a little bit of checkout of changes, and this is the shell window where I can show you the code. So I'm going to move this along to the next step in my demo here, and this is what we're going to add to our pipeline to start sending notifications to Slack, to HipChat, and via email.

What I've done is created a method and added it here at the start, so I can keep all of my notifications in one spot. I'm using the slackSend step, the hipchatSend step, and emailext. The way that I figured these out — seeing as, when I was starting, I didn't know what these looked like — was of course to use the Snippet Generator here. Sorry, my Jenkins instance is a little slow. There we go. So, for example, I had it generate these, and that's what I used to create these blocks: I went through basically each of the strings and then reformatted them if I felt they needed to be on multiple lines. Before I started this, I went through and added the plugins for these particular notifications, of course, and each of them has its own configuration — I'm showing an example here; if you were doing this yourself you'd have your own SMTP server, your own HipChat, your own Slack — and these you get to via the Jenkins Configure System page: scroll down and set all the values as you need to. So let's go ahead and run this next step here, and what you'll see as this runs is that I've added a notification for when the job starts. By the way, live demos are a pain — I've done this twice now, and each time I've had to rewrite it because things break between when you do them one time and another.
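A rough sketch of the kind of helper being described looks like the following; the colours, the placeholder email address, and the implied default channel and room are assumptions, and the Snippet Generator produces the exact syntax for your own Slack, HipChat, and SMTP configuration:

```groovy
def notifyStarted() {
    def summary = "STARTED: ${env.JOB_NAME} #${env.BUILD_NUMBER} (${env.BUILD_URL})"
    slackSend   color: 'warning', message: summary
    hipchatSend color: 'YELLOW', notify: true, message: summary
    emailext    subject: summary, body: summary, to: 'team@example.com'   // placeholder address
}
```

It gets called once near the top of the pipeline, inside the node, so all three notifications fire as soon as the job starts.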
But here you are: you get your notifications in HipChat down here, they also show up over there in Slack, and I'm using a little local SMTP server called MailCatcher, which lets me get the mail as well. So that's our "job started" notification. Of course, I don't get notifications when the job ends yet, but that's what we'll do next. Let's go over here and move along to the next commit. Now I've done a copy and paste of what I had in there before, and here's the pipeline with the notifySuccessful method at the end. notifySuccessful is a copy and paste; I made a few changes once I did the copy and paste — changed the color of the output, changed some of the text — but otherwise it's just copy, paste, change. Pretty straightforward, still the same action, just at the end instead. Let's go ahead and run that. So now what we'll see, as this spins up, is that it pulls the repository — here are some notifications coming up in the corner of my screen — the build is pretty quick, so after a few seconds or so it'll run through the tests, and there's my successful notification. Both of them: one from Slack, one from HipChat, and then also over here in my email — let's see, "started" and "successful". And as part of this I've been able to go in and add links to these so that, for instance — there you go — I get the actual job: when I get notified, I actually get links I can click to go see the job. Any questions so far? Great.

Moving along to the next step here: once again, copy and paste is your friend. I went ahead and created a notifyFailed method — same idea, slight color changes, slight text changes — and this one is just slightly more complex in adding it to the actual pipeline, in that I need to do a try/catch block around what I already had in there. So here's my try: I notify started, notify successful if I get to the end, and if at any point I fail, I catch the exception, set the result to failed, and then call notifyFailed. If I run this, I will get no change to my behavior by default — let's go ahead and do that, though; it only takes a minute. Yep, there we go: more notifications, a link to the started job if I want it, and there are my notifications that it finished with success. Great — started and successful, everything's looking good. Now I'm going to go through and make this job fail. I'm going to use the Replay feature off the job and add a step right here that just exits 1, which will throw an exception and fail the job. Just the "started" — great — and there's the failed message. I can see the failure here, I get a failed message in Slack, I get — HipChat being a little slow there — there we go, they're refreshed: failed message here, failed email. Fantastic, and I can go in and analyze this as if it were a natural failure.

So, one thing about this: it has been a lot of fun, but having three separate methods with basically the same code in them is probably not where you want to end up. Pipeline is now just like any other code, which means you can refactor it, as I'll show you next.
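For reference, a rough sketch of where that refactoring lands is below; the status-to-colour mapping and the placeholder recipient are assumptions rather than the demo's exact values:

```groovy
def notifyBuild(String buildStatus = 'STARTED') {
    def summary      = "${buildStatus}: ${env.JOB_NAME} #${env.BUILD_NUMBER} (${env.BUILD_URL})"
    // Slack understands good/warning/danger; HipChat wants GREEN/YELLOW/RED.
    def slackColor   = (buildStatus == 'SUCCESS') ? 'good'  : (buildStatus == 'STARTED') ? 'warning' : 'danger'
    def hipchatColor = (buildStatus == 'SUCCESS') ? 'GREEN' : (buildStatus == 'STARTED') ? 'YELLOW'  : 'RED'
    slackSend   color: slackColor,   message: summary
    hipchatSend color: hipchatColor, notify: true, message: summary
    emailext    subject: summary, body: summary, to: 'team@example.com'   // placeholder address
}

node {
    try {
        notifyBuild('STARTED')
        stage('Build') {
            checkout scm
            sh './build-and-test.sh'   // placeholder for the real build steps
        }
    } catch (e) {
        currentBuild.result = 'FAILED'
        throw e                        // rethrow so Jenkins still marks the build as failed
    } finally {
        // currentBuild.result is still null at this point if nothing went wrong
        notifyBuild(currentBuild.result ?: 'SUCCESS')
    }
}
```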
So we went from that bunch of copy-and-paste code, and now I've created a notifyBuild method which goes in almost the same spots as the previous ones, only now I'm also using a finally block — no reason not to; it's standard Groovy. So we have try, notifyBuild started, and finally notifyBuild with the result passed in, and the notifyBuild method does a bunch of processing based on what the build status is, but it calls the same path for all three notifications. That makes sure I get the same sort of output in all three notifications no matter what the status of the job is. Let's take a look at that — it should behave no differently, just as you would expect with a pure refactoring. One thing to note here is that I'm running this without the Replay this time, so that "sh exit 1" that failed the previous job is not part of the checked-in code, and we'll see a successful build. There's my started and success. And just to be complete, we can always replay it again with my custom change — this of course runs quite a bit faster because it's going to fail really quickly: started notifications, and failure right on their heels. Now if I want to, I can go look at that job, look at the console output, and there I've got the failure. Any questions?

No questions yet. All right.

I want to be the provocateur here and point out that in declarative land you don't have to use try/catch — instead you put stuff in post.

Right, so this will definitely change when we go to Declarative: you won't have to use these kinds of control structures to get this nice behavior; you'll have post-build actions — the post section — that will do all of that. Though it is worth mentioning that, as of right now at least, emailext does not work in the post-build section. It works post-stage, because it's running in an agent context there, but in post-build there's no agent, since we want to be able to send you an error if there's an issue configuring the agent. Getting better, more flexible email notification tooling is on the roadmap.

Okay, I guess I should also note that when I was writing this, I wondered why I needed to do the notifications inside the node, and the reason is that some of the features of — I think all three of these notifiers — they all have the option to pull files from the workspace, and because of that they depend on being inside a node.

Yeah, I expect that to be a bit awkward for a while in the post-1.0 timeframe, but as we actually get this stuff used in the real world more, and have time to focus less on Declarative itself and more on making existing things work in Declarative, we do have plans to get better notification support for dealing with running outside of nodes. We're just not there yet.

The other point I'd add here is that this notifyBuild could be part of a shared library, like Jesse was talking about. So rather than having it sitting inside of your Jenkinsfile, you'd have it in a shared library that you can then call from any of your pipelines. Thank you very much.

Okay, so thanks, Liam. Now I'm going to demo the External Workspace Manager plugin. Hello everyone, my name is Alex. A few words about myself: I'm a Google Summer of Code student.
I was actually a Google Summer of Code student at the Jenkins project this summer. I have about three years' experience in software development, and I have a major in software engineering.

So, one of the problems that some Jenkins users are facing is that it is quite difficult to share and reuse the same workspace between multiple jobs — for example when running parallel testing across nodes. As we know, some builds may have large files, and copying them across nodes may prove to be slow. The solution that I'm proposing is the External Workspace Manager plugin. It started as a Google Summer of Code project, where I had Oleg and Martin as mentors. The main focus is on pipeline jobs, and it facilitates workspace sharing and reuse across multiple jobs that are running on different nodes. The concept is that in the build pipeline you can have multiple nodes — like one node for building, one for testing, one for staging — and all these nodes will be able to reuse the same workspace, which is located within a disk pool. But before actually using the plugin, you need to set up the infrastructure: from the Jenkins master, as well as from each node, there have to be mount points set up to the actual disks.

Here are some basic features that the plugin has. You can configure the mount points in the Jenkins global config and in the node configs, you are able to reuse the same workspace in one pipeline job or even in multiple pipeline jobs, and it works on both Unix and Windows systems.

So, I'm going to show you the first demo part. What I have here is the Jenkins global config, and we need to define the disk pool with a unique id, a display name, and a basic description. Then we have some optional parameters — I'll show you later what these are — and then we need to add at least one disk entry. For the disk entries we need to specify the disk id, the display name, and the master mount point — this is the mount point from the Jenkins master to the actual disk. Then we have the "physical path on disk" parameter; this is optional, it is used for workspace path computation, and the help file gives you more details about it. And then you can have some optional disk information. Okay, so that's the global config.

Now let's look at the node configs. I have here one node labeled "build" and "linux", and in its configuration I have to reference the disk pool from the Jenkins global config, and again I have to reference each disk entry and specify the node mount point — this is the mount point from the node to the actual disk. And I have the same configs for the second node, which is labeled "linux" and "test".

Okay, now let's see the first pipeline example. In this example, I'm reusing the same workspace in a single pipeline job, and I'm running on multiple nodes: on the first node, labeled "build" and "linux", I'm building the project but skipping the tests, and then on a second node, labeled "test" and "linux", I'm just running the tests. So first I'm calling the exwsAllocate step. This will allocate a disk from the disk pool, and on that disk it will allocate a workspace. Then the object returned by this step is passed as an input parameter to the exws step within each node. By doing this, I'm able to reuse the same workspace on multiple nodes in the same pipeline job.
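A sketch of that single-job example, assuming a disk pool configured with the id 'diskpool1' and node labels along the lines of the demo's, looks roughly like this:

```groovy
// Pick a disk from the pool and compute a workspace path on it.
def extWorkspace = exwsAllocate 'diskpool1'

node('build && linux') {
    exws(extWorkspace) {
        checkout scm
        sh 'mvn clean install -DskipTests'   // build, but skip the tests on this node
    }
}
node('test && linux') {
    exws(extWorkspace) {
        sh 'mvn test'                        // same workspace, different node
    }
}
```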
Let's save this, and I'll trigger the build now. Okay, it's building — let's look at the console output. There are some relevant messages, like the selected disk id from the disk pool, and here we get the path that is computed on the disk. Next, when it's running on the first node, we get the path from that node to the disk — the complete workspace path — and again, when it's running on the second node, we get the path to the same workspace, but from the second node's perspective. Okay, so the build has succeeded. Now, if we look here on the left, we have the External Workspaces view. If I click on it, we see some relevant information: the disk pool id, the disk id, and the workspace path on the disk. And if I click the workspace link, I can actually browse the contents of the workspace.

In the second example, I'm going to show you how to reuse the same workspace in different pipeline jobs that are running on different nodes. What I have here is the upstream build. In the upstream job there are no changes to the external workspace commands; simply, at the end of the job, I'm triggering the downstream job, and I'm providing a build-number parameter whose value is the current build number. Next, let's have a look at the downstream job. In the downstream job, firstly, I'm calling the selectRun step — this is from the Run Selector plugin — with the name of the upstream job and a selector. In this case I'm using the build number selector; the plugin has many other selectors, but what I want now is to select the build identified by the build number parameter passed from the upstream job. Then the RunWrapper object returned by the selectRun step is passed as the selected-run parameter to the exwsAllocate step. And by doing this, I'm able to reuse the same workspace in two different pipeline jobs running on different nodes.

So let's trigger the upstream job now. (Alex, you may want to enlarge the font, because it's coming across as quite blurry and small on YouTube.) Oh, okay — I'll do that for the next pipelines. Maybe like this? Yeah, I think that helps. All right. So, I have triggered the upstream build, which in turn triggered the downstream job, and if we look at the console output we can see that the build has succeeded. And again, on the left we have the External Workspaces view, and now what I want to show you is that we have the fingerprints link. If I click on it, we can see that this workspace was used in two different pipeline jobs, in different builds, and again I can browse the contents of the workspace.
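A sketch of the downstream job might look like the following; the upstream job name and the RUN_ID parameter are hypothetical, and the exact selector syntax is best confirmed with the Snippet Generator for your version of the Run Selector plugin:

```groovy
// Select the upstream run identified by the build number passed down as a parameter,
// then allocate the very same workspace that run used.
def run = selectRun job: 'upstream-job', selector: buildNumber(env.RUN_ID)
def extWorkspace = exwsAllocate selectedRun: run

node('test && linux') {
    exws(extWorkspace) {
        sh 'mvn test'   // runs against the workspace the upstream build produced
    }
}
```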
Okay, so that was the first demo part, but the plugin has some more advanced features, like workspace cleanup and the ability to provide a custom workspace path. By default, the workspace path is computed from a formula starting with the mount point, then the job name, then the build number, but this can be overridden either in the Jenkins global config or in the pipeline script. So now I'm going to show you a demo for this.

Regarding the workspace cleanup, I'll show you the pipeline example. There are no changes to the external workspace commands; at the end of the build, in a final step, I'm calling the cleanWs step with the cleanWhenFailure parameter set to false. So this will delete the workspace only if the build succeeds. Let's save this and trigger the build to see how it goes. It's building now; in the meantime we see the relevant messages again, and at the end of the build we get the message saying "deleting project workspace... done". So the workspace should be deleted, and if we try to browse it, we get the error message saying there is no workspace.

You can also provide your own custom workspace path. This can be done either in the Jenkins global config — you can override the workspace path template, and there is a help file for this with more details — but now I'm going to show you how to do it in the pipeline. In the pipeline, I'm saying that I want my path on the disk to start with the job name, then a PR number, which is a build parameter, and then the build number. This path is then passed as the value of the path parameter. So let's save this and trigger the build — let's say the PR number is 101 — and we see that the path on the disk has the template that we wanted, starting with the name of the job, the PR number, and the current build number.
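A sketch combining those two features, with a hypothetical PR_NUMBER build parameter and a placeholder disk pool id, might look like this:

```groovy
// Ask for a custom path on the disk: job name, then PR number, then build number.
def customPath = "${env.JOB_NAME}/${env.PR_NUMBER}/${env.BUILD_NUMBER}"
def extWorkspace = exwsAllocate diskPoolId: 'diskpool1', path: customPath

node('linux') {
    exws(extWorkspace) {
        try {
            checkout scm
            sh 'mvn clean install'
        } finally {
            // Keep the workspace around for inspection when the build fails, delete it otherwise.
            cleanWs cleanWhenFailure: false
        }
    }
}
```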
Okay, so that was the second demo, and there are some even more advanced features: you can restrict the disk pools based on who started the build, or the name of the build, and so on, and there are some flexible disk allocation strategies. I'm going to quickly show you these. For this disk pool, I'm saying that I want it to be restricted — just a second, something is not working quite right — okay, I want to restrict it so it can only be used if the build was started by my user. And later I'm going to need some basic disk information, so I'll say that disk number one has a read speed of 10, and disk number two has a read speed of 15 — a higher read speed. Good, let's save this. Now, if I try to run this build, for example — I'm not logged in currently, so most probably the build will fail, because I don't have the rights to allocate from that disk pool, as we can see in the logs. Now, if I log in, most probably I'll be able to run it. But what I want to show you first is how you can provide a disk allocation strategy. To the exwsAllocate step you can provide the strategy parameter, and it has options like fastest read speed and fastest write speed, and there's also most usable space. But for the fastest read speed and write speed strategies, you need to actually provide the read speeds and write speeds of the disks, as I've shown you in the Jenkins global config. So now, if I save this and trigger the build, in the console output we see the message saying: using disk allocation strategy, selecting the disk with the highest read speed. And it has selected disk number two, because it has a higher read speed.

Okay, so that was my last demo. The plugin is available in the Jenkins Update Center — I think it's now at version 1.1.1 — and you can find more details on the plugin's README page, which is the documentation page: there you have explanations for all the features that I've shown you, with examples and screenshots and so on. Any feedback is welcome, and you can provide it either in JIRA or on the Gitter chat. Some useful links: the link to the plugin repository and the link to the Gitter chat. So if you have any other questions after this demo, you can just join the plugin's Gitter chat and ask your questions over there, or on the Jenkins IRC channel. Okay, so do we have any questions?

We'll give it a minute, Alex, to see if questions come in. Yeah, sure. So if people don't have questions right now, at any time they can just join the plugin's Gitter chat that's on the slide, and they can ask me there whatever they want about the plugin.

Thanks, Alex. Good, so I'm going to go ahead and wrap up — Alex was our last presenter. I want to thank all of our presenters who took some time out of their day to present, for posterity, what they had given at Jenkins World. This streaming session is going to be recorded under the jenkinsci YouTube user. I also want to make sure that those watching the live stream who may not have come through the Jenkins Online Meetup on meetup.com know that we have this Jenkins Online Meetup group, and we'll be posting presentation PDFs and updates about this on the Jenkins blog. The Jenkins Online Meetup is also a great place to see some good content — we've had some really good presentations in the past around plugin development, the Summer of Code presentations, things like that — and we'll try to keep doing this every month or every other month, so you can join the Jenkins Online Meetup for that. If you want to meet people in person, we also have Jenkins Area Meetups around the world, so there is a high probability, especially if you're in a very dense region like Europe, that there is a local Jenkins Area Meetup which I'm sure would be glad to have you in attendance, or glad to have you present something interesting from your own usage of Jenkins. And then of course there's the Jenkins community blog — this is just on jenkins.io — where we'll be posting the video for this, and the slides, probably later this week or early next week. So following the blog, or following Twitter — @jenkinsci — is a good way to stay up to date with the project.

If you have more questions about any of the content here, questions about Pipeline or Blue Ocean in general, or general Jenkins usage questions, we have a couple of venues for that. We have the jenkinsci-users Google group — if you've got a question, I strongly recommend searching the archive, because your question may have already been answered. If you're interested in extending Jenkins core or developing plugins, the jenkinsci-dev list is a good list for you to participate in, and there are lists for other disciplines and in other languages that you can participate in as well. If you're interested in slightly more real-time discussion, the #jenkins channel on Freenode is a great place to discuss what's going on in the Jenkins community with a lot of the other people involved.
We also have #jenkins-infra for some of our infrastructure activity, and #jenkins-community for website work, documentation jams, and things like that. And, similar to what Alex said about the External Workspace Manager plugin, some plugins also have a specific Gitter chat or their own Jenkins IRC channel, so check out the plugins' READMEs to see if there's a more specific chat where you can ask your questions. But with that, I think that covers it for this Jenkins Online Meetup. Thank you everybody for speaking, thank you to those who have tuned in, and we'll see you next time.