pipeline happens in a stage. There's nothing running outside of stages. This works better with visualization, including the existing Stage View plugin and the Blue Ocean UI for Jenkins, which will be in beta for a little bit longer. It's easier to look at any declarative pipeline and understand what it's doing than it is with a scripted pipeline, just as looking at any kind of configuration file is easier than looking at an arbitrary bash script: because it follows certain rules, it reads more like configuration. The structure also enables round-tripping with the visual editor, which has a beta coming out on Monday (more detail on that later), and the whole thing has been built with and for Blue Ocean visualization. So we've got markings in Blue Ocean for stages that are skipped due to a failure in an earlier stage, and for stages that are skipped because you decided to conditionally not execute them. We've got a predictable execution model and better visualization of that execution model. Now, the error reporting, which is one of the things I have had the most fun working on. One of the biggest frustrations I've had with Pipeline is running an hour-long build and then discovering 55 minutes into it that I have a typo in a variable name or a parameter name and the whole thing barfs out. There's no way for me to know ahead of time, without executing it, whether I actually have the right syntax, have the right parameter types, etc.
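A minimal sketch of the shape being described, with everything running inside a stage and the whole definition inside one `pipeline` block. The stage names and shell commands here are just placeholders, not from the talk:

```groovy
// Hypothetical minimal declarative Jenkinsfile: every step lives
// inside a stage, so visualizations like Blue Ocean and Stage View
// can render each stage and its timing.
pipeline {
    agent any                    // run on any available agent
    stages {
        stage('Build') {
            steps {
                sh 'make'        // placeholder build command
            }
        }
        stage('Test') {
            steps {
                sh 'make check'  // placeholder test command
            }
        }
    }
}
```

Because nothing executes outside the `stages` block, a failure or skip in any stage has a well-defined place to show up in the visualization.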
Declarative does syntax and validation checking before actually executing the build. At the very, very beginning of the build, it goes through and makes sure the syntax looks right, makes sure that we've got the right types on all of our steps, makes sure that we're not doing something completely insane, and errors out at that time, with errors that are reported as compilation errors: pointing to exactly where the error was, telling you what the error was, and giving you suggestions for what you might have meant if you had a typo in a parameter name or something like that. Through this we're able to eliminate many of the potentially confusing stack traces that Pipeline can give you when things go wrong. Not all of them; there are still some that can sneak through. But I think a lot of the most commonly encountered and mystifying stack traces will now instead give you a clear error message about what's wrong in the definition of your pipeline, rather than, again, waiting until the end of the build and saying "oh yeah, something went wrong." We validate syntax, we make sure we've got required configuration, we make sure step parameter types match up, and more. An important thing to touch on is what declarative means for scripted pipeline. In practice, it doesn't mean anything: both still exist. Declarative pipeline is very much built on top of scripted pipeline. It's not a separate thing; it's a new syntax for Pipeline, and we're now calling traditional pipeline "scripted pipeline" so that you can distinguish between the two and understand their roles. Visualizations like Blue Ocean and Stage View don't see any difference between a run of a declarative pipeline and a run of a scripted pipeline, because both are still just generating stages and running steps within stages. It's not a different thing; it's the same execution engine, just a different way to use that execution engine. And scripted pipeline is still
used inside declarative pipeline: all of your step invocations are still scripted pipeline steps. There are still escape hatches where you can use the full set of scripted pipeline syntax without validation, and you can always copy the steps and contents from a declarative stage into a scripted pipeline and they'll work just fine; it's a subset of the scripted pipeline syntax. So declarative isn't meant to cover absolutely every use case that scripted pipeline does. It's meant to cover a lot of them: the ones that are fairly standard, fairly predictable, fairly consistent. When you need to go beyond that, when you've got more complex logic or you just can't quite fit it into declarative, that's when you move on to scripted. So why did we write Declarative Pipeline? First of all, there are the reasons why we think everyone should be using Pipeline. It's the future of Jenkins; it's also the present of Jenkins. Pipeline gives you durability, so that your builds can continue despite a master restart or a disconnect between the agent and the master. It gives you pipeline as code: the Jenkinsfile, the ability to have your Jenkins build definition checked into source control and versioned alongside the rest of your code. Pipeline has a much more modern back-end implementation in the Jenkins internals than traditional freestyle builds do, which gives us a lot more potential for improvements and optimizations going forward. And Pipeline is more powerful and more flexible than traditional Jenkins jobs. There was a great blog post that went up just last week, I think, on conditional build steps, and how you can replace those in Pipeline without jumping through quite the same weird hoops you need for conditionality in a traditional Jenkins freestyle job. So, the reasons for declarative: we want to provide benefits for both new users of Pipeline and existing users of Pipeline. We'll touch on the new-user reasons first, and I think these don't
just apply to new users, but they're particularly relevant there. Declarative has a lower barrier to entry than scripted pipeline. It's not just throwing you at a blank text editor and saying "here, write a Groovy script to run my build." It has a predictable, well-defined, and documented syntax that can tell you what you need to do, how to do it, etc. And as I mentioned, there's the upcoming UI editor, so you'll be able to go to Jenkins, write your job through the editor, and have it save down into a Jenkinsfile and round-trip, so that you can get started without even having to touch the Jenkinsfile by hand. We're not quite there yet with the editor, but we will be soon. Declarative is more human-readable. Again, it has more of a configuration feel than a script feel, and while I'm not saying configuration files are necessarily easy to read, they tend to be easier to read than scripts. Declarative does not require Groovy-specific expertise. Now, I don't think that scripted pipeline necessarily requires a lot of Groovy-specific expertise either, but it can feel that way, and when you get into more complicated things, it can actually be that way. So we wanted to make sure people didn't get intimidated by thinking they need to learn a new language, or write scripts in something they're not comfortable with, just to run their build. We think that declarative is a better experience for someone who's just getting started with Pipeline, and for someone who's not writing a ton of pipelines or for whom it's just a small part of their job, and we hope that people will find that to be the case. Now, while declarative is in large part directed at newer or more casual users, we do think it will be really useful for more advanced or existing users. Since declarative uses the same engine as scripted, your existing investments in shared libraries and the like still apply. There's not a case where you have to start over and learn a completely new
thing; you don't have to throw out all the work you've done just to switch over to using declarative. And by expanding the usage and usability, we're putting less burden on the Jenkins domain experts in a shop. We don't want it to be the case that one or two people are the only ones who know how to write Jenkins jobs, and everybody else has to go through them. It should hopefully lower the burden on the experts and empower everybody else more. Again, since we're moving to this more configuration-like syntax, we think that's going to make collaboration and code review easier. One thing: the error reporting I mentioned isn't just available at the beginning of the build. There's also a CLI command, with more to come, to run that linting or validation against a Jenkinsfile without even having to run a build, so you can get a faster feedback loop on whether you've actually written a valid Jenkinsfile. And we also think there's real value in separating the Jenkins infrastructure-related configuration (what are my agent names, what tools do I need installed) from the step execution, so that the configuration isn't interwoven with the build steps. You can more easily see "oh, this is the stuff I need to change when I'm copying it over to a new master," or, as an admin, I can change a label everywhere without worrying that somewhere deep in the steps of one build, maybe they do something different with it. We think that separation will lead to easier maintainability and easier scaling. So now let's do a walkthrough of the syntax. Hold on one sec while I make sure my presenter notes show up. You can't read this, so we're going to move on; it's just an example of a scripted pipeline and a declarative pipeline side by side. Don't worry, there's much better visualization later. I just wanted to keep this so I could make the point that it's not necessarily
the case that declarative is going to be shorter, in terms of text, for all of your pipelines. Here, the scripted pipeline is shorter than the declarative pipeline. Declarative pipeline is a little verbose at times, because we believe that the verbosity provides more information and makes it easier to understand and use. The problem is not necessarily that the pipeline could get too long; it's that the pipeline could get too confusing. So now we'll actually start looking at the syntax. I will find some way to get links to the examples I've got here, but for now: the first thing that's relevant is right there, the pipeline block. Everything in declarative goes within a block called pipeline, and if it's not in there, declarative doesn't care about it. We've got our own syntax within that block, which is not executed exactly the same way as scripted pipeline. The first thing we've got after that is declaring the agent that our job is going to run on. Here we're saying that we want to run in a Docker container. We give the image name, and declarative will automatically say "okay, they didn't say what label they want to run on, so they can run anywhere." You can specify the label, but you don't have to. It will fetch the container image, run the container, and run the rest of the build within that container. So it's a simpler way to specify the configuration for your agents, and it can be overridden per stage, not just at the top level. There are other options for agent besides just docker, obviously. There's label; there's dockerfile, in which case it builds the Dockerfile from your repository and runs inside the resulting container; and there are also some magic shortcuts for saying "don't run on any agent," for some weird edge cases, or "run on any agent." In general, I think you'll figure those out, and plus, we've got docs. Next, options. It contains a few different things at the back end, but we can think of them as options for your
pipeline that would apply across your entire pipeline. Here we're setting the buildDiscarder job property to make sure that when we run our sixth build, the first build gets deleted, and so on. And timeout here is the timeout step, wrapping the entire build: if the build takes more than 30 minutes, it will get killed and reported as having timed out. So when we need to do things that don't just apply to a part of the build but to everything, those show up in options. Parameters are traditional job parameters; we've pulled them into their own parameters section here so that it's a little clearer than the way it is in scripted pipeline. So string, booleanParam, various things; I don't think we need to go into a lot of detail there. The one thing worth noting is that in current versions of Jenkins, from 2.17 onward, there's a params variable in your scripted or declarative pipelines that will use the default value even if you haven't already specified the parameters. So you don't have to run the build once and then run it again to make sure it doesn't error out; you still just get the default value on that first run, but at least it doesn't error. That's something nice. The next key part, and really the bulk of declarative, is stages. You put all of your stages inside the stages block: at least one, and as many more as you want. Each stage takes a name, then it can take some configuration that we'll look at later, and then a block of steps to execute. Each of these chunks of steps is executed in its stage, and they'll show up in the visualization in Blue Ocean, so you can see how long that particular chunk took and what the results were from that particular stage. Organization is a good thing, and having a better sense of the individual parts of your build is a good thing, so we're enforcing that by requiring that everything be in a stage. I mentioned that try/catch isn't needed to make sure you send an email at the end of the build. That's another thing
that really annoyed me about scripted pipeline: if your build fails because a command fails, or anything really, then unless you've wrapped the section of code that could fail with a try/catch or Pipeline's own catchError step, the build will just stop when it gets to the error, and it will never actually clean up afterwards or send an email to let you know it broke. So we now have the post section. The post section is available both for the entire build and for individual stages. It checks whether the current build status matches a condition: always, meaning it runs regardless of what the build status is; then we've also got success, unstable, and failure; and there's also changed, in case the build status changed from the previous build. This is an extension point, so we can add more conditions going forward. So no matter what, when the build ends, we're going to gather the JUnit test results and report on them. If the build is successful, we'll get an email saying "hey, the build succeeded"; if it's unstable, we'll be notified there are test failures; on failure, that there are build failures. Now, this is a fairly simplistic example, but here we've attached the post section to one specific stage, so we're going to get these emails based on the results of that one stage, not necessarily the whole build. You may not want to send emails on the result of one stage, but you may want to, for example, archive unit test results even if the build failed, as with a FindBugs check that comes after the unit test run. Or you may need to do cleanup if you're running a more complicated integration test, where you want to make sure the machine gets back into a pristine state before you run the next build. Now we have another stage here that has the when condition. when is evaluated to determine whether we're going to execute the stage. The example I have here uses one of the built-in conditions (we're
adding more there as well): the built-in condition here is branch. If the branch we're currently on matches the pattern we've given it, in this case master, then we're going to publish our artifacts to S3. If we're on a pull request branch or a feature branch or something like that, well, we're not deploying it, we're not going live with it, so we're not going to publish to S3. So we've got that conditionality, in what I think is an easier way than we've had previously. There's also a condition that looks to see whether an environment variable is set to a specific value, and one that allows you to write a Pipeline expression that should (ideally) return a boolean, so you can do more complicated logic for your when check. You know, is it noon, is it afternoon, is it morning; for some reason you might actually care about that. And then here we've got a step invocation using the kind of ugly metastep syntax. I just wanted to show that it can be done, that you're not limited to the more aesthetically pleasing simple steps: the legacy, older steps that haven't been updated to have the better syntax used throughout the rest of Pipeline can still be used within declarative. Now we've got our final stage, which also only runs when we're on master, and in this case we're using something that comes from a shared library. To be clear, there is no actual step called withTower; somebody wrote it for an example. But if you've got your shared libraries available, you can use them within the steps in declarative just as you can in scripted. They work just fine, the validation can still be used to some extent, and you can nest steps when you've got block-scoped steps that say "okay, everything within this block runs with access to Tower, with the credentials for Tower." Now, the next example. Here we're just using a label; we're saying run on any agent that has the docker-agents label on it. We're taking
advantage of a nifty trick with the environment section, where you can set environment variables that will be available throughout the build. What we're doing here is using a special function that's available in declarative to take the ID of credentials that you've configured in Jenkins and automatically put them into an environment variable that you can use later on. This is a shortcut so you don't have to jump through as many hoops with the withCredentials step to get those environment variables populated; you have access to your credentials throughout the build. Another section, or directive, we've got here is the tools directive. If you've used Maven or JDKs or NPM or a number of other tools in Jenkins, you may have encountered that, if you've configured them on the master, they can auto-install onto the agents; you don't have to make sure they're already installed. What we have here is a nice, simple syntax for making sure those tools get installed onto the agent before we run: give me a Maven tool with this configured version, give me a JDK tool with this configured version. The important thing to note is that the tools directive and tool installers don't actually work in Docker containers. That's a limitation in Pipeline in general, and I am working on it. But if you're running on plain agents, this will work for you out of the box. Yeah, this one's just a simple stage, but I wanted to have an example that actually seemed kind of realistic, so I wanted to make sure I was actually running steps. Here we're running two steps; you can run as many steps as you want. My examples have a tendency to be just one step, so I wanted to show that, yes, you can do more than one step. I know that may have been obvious, but I was afraid it wouldn't be. Next stage: tests. So here we're showing per-stage
configuration, and specifically per-stage configuration of agent. Our first stage, which ran make clean and make package and generated our build artifacts, was running on just the agent label, and now we want to run tests, but we want to run them inside a Docker container. So we've specified the image we're going to build in, and we've specified reuseNode true. What that means is that this stage's agent will run on the same agent the previous stage ran on, and it will have access to the workspace, the checkout, and the artifacts we already built. We don't have to worry about copying them around; everything's already available. So we can then run our shell step to actually do the deploy, using that environment variable we defined before with the credentials. It's got access to the artifacts and has everything it needs to deploy, without having to worry about stashing things between stages. The reuseNode field doesn't mean anything at the top level, because there's no previous node to reuse, but it's a handy trick for docker and dockerfile, so that you don't have to check things out twice, build things twice, or copy your artifacts around. So those are the two examples. Let's talk a little bit about the validation. Like I said, this is my baby and my favorite part, because I've gotten so annoyed at error reporting in scripted pipelines, and at the lack of easy visibility into what I did wrong, because I do a lot wrong. The first thing, as I said, is that it always happens at the beginning of a pipeline build. When your declarative pipeline build starts, it compiles and validates and makes sure that your syntax is actually right, and not just in the sense that the Groovy script can compile, because getting a compilable Groovy script is not very hard; you can have very broken Groovy that still compiles. Once we're in that phase, we start doing the validation. We look to see, okay, if you've
got... did you have a stage? Because if you don't have a stage, what's the point? We're going to error out on you. Did you supply an agent type that actually exists and is available on this master? Because if you try to do agent banana, there's no agent implementation for banana; Jenkins doesn't know what that is, and it should tell you ahead of time, "well, did you mean one of these?" And it will. It'll do the same thing for parameters: if you give an agent or a step a parameter that doesn't exist, or a parameter of the wrong type, it's going to tell you, so you can know ahead of time, "oh right, that's what I need to go fix." The errors point to the problem areas with line and column numbers, with what I hope is a useful and internationalized (though not yet localized) error message, hopefully with suggestions to point you in the right direction. My apologies for this text size, but here I got an error saying invalid parameter "you"; did you mean "unit"? I used the timeout step, and I got timeout right (that's an existing step), 15 is the right type, but then "you" and minutes... wait, no, "unit". Okay. So it's telling me what I did wrong, where I did it wrong, and giving me an idea of how to make it right. Next, I'm calling the sleep step, but instead of giving it a number, I'm giving it a string, and that's not a valid parameter type. You can't really tell something to sleep for, quote, "10 minutes," unquote. So it's telling me it was expecting an int but got "10 minutes," and that I might want to change that. And here I've got an empty stage that doesn't have any steps in it. It's reporting that there's nothing to execute within that stage, and saying that doesn't fit the syntax, that's not allowed: you've got to actually have something to do in a stage. So it's giving me a useful error message pointing at what went wrong, with the line and column information, same as you get from a compilation error. I think
that's really useful. Now, you can do this linting without actually having to run the build. The way I recommend is using the Jenkins SSH CLI, in which case you need your Jenkins administrator to open an SSH port and you need to make sure you have creds, but we'll ignore that for the moment. You SSH into the master, call the declarative-linter command, and pipe in the Jenkinsfile, and it will give you the same results, the same messaging, that you'd get when running it in your build. It'll tell you if it's successful, it'll tell you if it failed, and if it failed, it'll tell you where and why. You can also do that via the REST API with curl. The curl command is a little ugly because I made sure we're actually using Jenkins crumbs, because you really should have crumb protection enabled on your master; it's a good security tip. And we've got plans for... I'm sorry, yes: the question was whether you could validate scripted pipelines using this as well. No. We would still like to eventually do validation and linting for scripted pipelines, but it's a much harder problem than doing it for declarative. That's part of why we wrote declarative: with this structure and predictability, we don't have to worry about things like "oh, what type is that, what is that random class?" We actually know what all the possibilities are and can better analyze what could go wrong. So for the foreseeable future, no, we will not have validation of scripted pipelines. We do have plans for a more flexible offline validator that doesn't require you to SSH into Jenkins. That's not around yet, but it is on our roadmap, as well as a GDSL for IntelliJ and other things to make the development and testing of your declarative pipelines easier. We'll see where that goes, and like I've said, this is just 1.0, so we'll see where we end up taking things further. I just want to
mainly focus on what we've already got for you. Who here has heard of Blue Ocean? If you haven't, you should check it out. It's pretty, especially by Jenkins standards; I mean, by Jenkins standards it's gorgeous, and I say this as somebody who loves the traditional Jenkins UI, because I've lived in it for nine years. I'm not going to talk that much about Blue Ocean here, because I am not a UI person, but I did want to mention a couple of things in Blue Ocean that are declarative-specific, or related to declarative. We've got some special smarts on both sides for optimized visualization. Operations inside declarative, like the SCM checkout, Docker image prep (building the image for the container), and post-build actions, are things that are not specified inside your actual stages block but that still take time in your build. They get marked with special, behind-the-scenes synthetic stages, so that Blue Ocean knows it doesn't have to put them in the main UI. As long as those don't fail, you're not really concerned with seeing them in your visualization; that's just the cost of doing business. You know there's going to be a little time for your checkout, a little time for your Docker image prep, and so on; your graph of stages doesn't need to include those. And as I also mentioned, we've got special marking of stages that have been skipped, due either to an unsatisfied when condition or to a failure in an earlier stage, so those will show up differently in Blue Ocean. You'll be able to see, "okay, yeah, this stage didn't run because there was a when condition that was not met." I think it's gray, but don't quote me. Every single build you run will show in Blue Ocean every stage that's in the execution model, even stages that didn't run that time, so that even if the build failed on the first stage, it'll
still mark that the second, third, and fourth stages existed; they just wouldn't have done anything, and they'll be displayed in a special way so you can see that they were skipped. Now, just a little teaser on the editor. Like I said, the editor goes into beta on Monday. It's still not quite done: it doesn't yet do the round-tripping, being able to read a Jenkinsfile from your git repo and then write the changed Jenkinsfile back to the git repo. That's in the works, and it will happen before the editor goes 1.0. But I wanted to show you a little bit of what the UI looks like. Pretty, again. It's a visual editor, so you can specify your stages, and you can specify parallel execution by clicking that plus there. You can see here we've got our test stage, which we're executing on Chrome and on Firefox and also on Internet Explorer, and then we've got our deploy stage. So that's the basic graph you end up getting. It looks almost identical to what you see for the run visualization in Blue Ocean, which is not a surprise; it's part of the Blue Ocean UI theme. Now, here we're specifying a shell step to run inside the Chrome parallel chunk. It's just standard: put in some shell and it will run. And the editor is able to do the validation in real time, so when we use an echo and we don't put in a parameter for it, it says "wait, no, that's not valid, you need a parameter, you can't run that without a parameter." You get that validation right away through the editor, without having to wait to run a linting against it or run it in Jenkins, which we think is going to be really handy, especially when you're getting started. But also, I still use the freestyle editor when I just need to do something quick: bam, bam, bam, okay, right, it works. And again, we've been designing declarative with the editor in mind since day zero. We've made sure that conversion between the syntax
that is internal to the editor and the syntax that actually runs in Jenkins is seamless. We've got innumerable tests making sure of that, and we've made sure the data model makes sense for the editor. We very, very much want the editor to be a good, usable tool for you with declarative pipelines, and for it to help, well, kill freestyle. Because with the editor, I personally feel we're getting to the point where freestyle doesn't offer anything that you can't do better in Pipeline: in declarative for most cases, in scripted for the more complicated cases. Now, I'm sure I'm wrong, I'm sure there are things I'm missing, but we'll get to those too. So yeah, Monday the beta comes out. I'm not sure how long it'll take to get to 1.0, but I expect it will be this spring, and we're looking forward to you giving it a try, giving feedback, and telling us what's horrible and what's wonderful. So, what is the 1.0 release? I just want to wrap up with that, since, you know, that's why I'm here. 1.0 came out on Wednesday of last week; it's in the update center. We will be very, very, very careful not to make any breaking backwards-compatibility changes. If we do, that's a bug and we will undo it, but I think our tests are comprehensive enough that that's not going to happen by accident, and I won't merge anything that does it either. It's important to note that declarative does require Jenkins 2.7.1 or later, the first LTS line of Jenkins 2; Blue Ocean has the same requirement. Who here is running Jenkins 2, and who here is running Jenkins 1.x? It's a good time to upgrade. I'm not entirely being facetious when I say that; a little, but not entirely. I think the Jenkins 2 line is beyond mature at this point, the improvements it has over the 1.x line for usability and UI are worth the upgrade, and if you want new stuff, you kind of have to go to 2. So there are blog posts coming up, and one already up on
Jenkins.io, introducing declarative. There will be more detailed blog posts on specific aspects, like the syntax checking or the Docker integration. This talk, obviously, is part of the 1.0 launch. We're doing a Jenkins online meetup on Wednesday, February 15th, talking in more detail about declarative, with deeper demo dives and some talk about the editor as well. And there is, by most open source standards and especially by Jenkins standards, an immense amount of documentation up on Jenkins.io (thank you, Tyler). When you go to the Pipeline documentation on Jenkins.io, it now starts by showing you declarative. You can switch over to see the scripted version, but our plan going forward is that the default way you start is with declarative, so we want to make sure it's documented, accessible, and understandable. If you find flaws in the documentation, things that could be improved, pull requests are very much welcome, as are bug reports if you don't actually want to fix the docs yourself. So, resources. Like I said, the main one is Jenkins.io/doc. That's the canonical place, the definitive place, the right place to go to find documentation on declarative and scripted pipelines, and pretty much anything else Tyler actually gets around to writing about Jenkins. I think Tyler and his team have done an amazing job with the Jenkins.io docs. It's something we're proud of, and I think it's really useful, so don't be afraid to give it a look. Amusingly, the examples pull request actually didn't land yet, so we'll work on that, but I think there are enough examples in the documentation that you'll be okay for now. And if you really are curious, you can find the source for the plugin on GitHub: pipeline-model-definition-plugin. Don't ask about the name. You never have to think about it if you're on Jenkins 2.7.1 or later: if you just update the Pipeline
aggregator, which pulls in all the other plugins, this gets pulled in with it, so you don't have to jump through hoops to install it. But in case you wanted to see the source, I always feel the need to link it. We also have reference cards. Has anybody already picked up a reference card from the Jenkins booth? All right, well, I've got some more up here, and they're also available online, and we're pretty happy with them. Yeah, printed material. And that's pretty much it, so we've got about four minutes for questions. Does anybody have any questions? Yes? The question was: do we have plans to support stages in parallel? Right now you can use parallel, but then you have to specify your nodes and so on within it, which is a little awkward. So the answer to your question is yes: I have a pending, post-1.0, work-in-progress pull request doing exactly that, giving you the ability to say "here's a bunch of stages to run in parallel." We're still working on exactly what the syntax will be and exactly how the execution works, but parallel stage execution will be in within the next six weeks or so, would be my guess. We consider that a requirement. There are some things Blue Ocean needs to do for better visualization of that, so I need to bug them, but we consider it a requirement. We wanted to have it for 1.0, but we wanted to make sure we focused on getting what we had really solid before adding that feature. It will be there soon, I promise. Yes? So the question was: can one stage declare which stage is going to run next and what steps are going to run in that stage? The stage execution order is the lexical order, the order specified in the Jenkinsfile. I have played around with being able to say "this stage can't run until these other stages are done," or "when this stage finishes, this other stage kicks off," but I'm not sure if I'm going to be able to find a good syntax for that that actually makes sense.
So for now it's just what's next in line, but we'll see. That's an area I'm definitely interested in, creating more of an execution dependency graph, but I need to figure out the right way to do it, and if I can't find the right way, I'm not going to do it. It's really important to us that declarative continue to make sense and provide both simplicity and power, so if we have to make a compromise between full power and full understandability and usability in declarative, we're going to go with usability, because you can always switch to a scripted pipeline when you need more power. Yes? Is there any way to enforce declarative? Right now, no. There are some ideas; I would expect there could be something from CloudBees that would require it, but right now there is not a way to enforce requiring declarative for everything. Yes? Yeah, the question was whether there's a way to get some of the power of full scripted pipeline without having to completely leave declarative pipeline. There's a special step available called script: it's just script, curly brace, steps. Anything inside that script block doesn't go through validation, so we're not checking it to make sure it fits the subset of the syntax we allow, we're not making sure the step parameters are valid, and we're allowing you to do if/else, for loops, and so on, things that we don't otherwise allow you to do
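The escape hatch just described can be sketched in a short Jenkinsfile. This is a minimal, hypothetical example (the stage name, target list, and make commands are invented for illustration), showing validated declarative steps alongside an unvalidated script block:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Ordinary declarative steps are validated before the build runs.
                sh 'make'
                // Anything inside script { } skips that validation and runs as
                // plain scripted pipeline: if/else, loops, local variables, etc.
                script {
                    def targets = ['linux', 'windows']
                    for (t in targets) {
                        if (t != 'windows') {
                            sh "make package-${t}"
                        }
                    }
                }
            }
        }
    }
}
```

The trade-off, per the discussion above, is that mistakes inside the script block only surface at execution time, rather than being caught by the up-front syntax and validation pass at the very beginning of the build.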