pipeline happens in a stage. There's nothing running outside of stages. This works better with visualization, including the existing Stage View plugin and the Blue Ocean UI for Jenkins, which is in beta for a little bit longer. It's easier to look at any declarative pipeline and understand what it's doing than it is with a scripted pipeline, just as looking at any kind of configuration file is easier than looking at an arbitrary bash script: because it's following certain rules, it's more like configuration. The structure also enables round-tripping with the visual editor, which has a beta coming out on Monday; more detail on that later. And the whole thing has been built with and for Blue Ocean visualization. So we've got markings in Blue Ocean for stages that are skipped due to a failure in an earlier stage, and for stages that are skipped because you've decided conditionally not to execute them. So we've got a predictable execution model and better visualization of that execution model. Now, the error reporting, which is one of the things I have had the most fun working on. One of the biggest frustrations I've had with pipeline is running an hour-long build and then discovering 55 minutes into it that I have a typo in a variable name or a parameter name and the whole thing barfs out. There's no way for me to know ahead of time, without executing it, whether I actually have the right syntax, have the right parameter types, etc. Declarative does syntax and validation checking before actually executing the build, at the very, very beginning of the build. It goes through, makes sure the syntax looks right, makes sure that we've got the right types on all of our steps, makes sure that we're not doing something completely insane.
And it errors out at that time, at the very, very beginning of the build, with errors that are reported as compilation errors pointing to exactly where the error was, telling you what the error was, and giving you suggestions for what you might have meant if you had a typo in a parameter name or something like that. And through this, we're able to eliminate many of the potentially confusing stack traces that pipeline can give you when things go wrong. Not all of them; there are still some that can sneak through. But I think a lot of the most commonly encountered and mystifying stack traces will now instead give you a clear error message about what's wrong in the definition of your pipeline, rather than, again, waiting until the end of the build and saying, oh yeah, something went wrong. And yeah, we validate syntax, we make sure we've got required configuration, we make sure step parameter types match up, and more. So an important thing to touch on with declarative pipeline is what this means for scripted pipeline. In practice, it doesn't mean anything. Both still exist. Declarative pipeline is very much built on top of scripted pipeline. It's not a separate thing; it's a new syntax for pipeline. And we're now calling traditional pipeline "scripted pipeline", so that you can distinguish between the two and understand their roles. The visualizations like Blue Ocean and Stage View don't see any difference between a run of a declarative pipeline versus a run of a scripted pipeline, because they're all still just generating stages and running steps within stages. So it's not a different thing. It's still the same execution engine; it's a different way to use that execution engine. And scripted pipeline is still used inside declarative pipeline: all of your step invocations, you're still using those. There are still escape hatches where you can use the full set of scripted pipeline syntax without validation.
And you can always copy your steps and contents from a declarative stage into a scripted pipeline and they'll work just fine; it's a subset of the scripted pipeline syntax. So declarative isn't meant to cover absolutely every use case that scripted pipeline does. It's meant to cover a lot of them, the ones that are fairly standard, fairly predictable, fairly consistent. And when you need to go beyond that, when you've got more complex logic or you just can't quite fit it in declarative, that's when you move on to scripted. So why did we write declarative pipeline? First of all, we've got the reasons why we think everyone should be using pipeline. It's the future of Jenkins; it's also the present of Jenkins now. Pipeline gives you durability, so that your builds can continue despite a master restart or a disconnect between the agent and the master. It gives you pipeline as code, the Jenkinsfile: the ability to have your Jenkins build definition checked into source control, versioned in source control alongside the rest of your code. Pipeline has a much more modern back-end implementation in the Jenkins internals than the traditional freestyle builds do, which gives us a lot more potential for improvements and optimizations going forward. And pipeline is more powerful and more flexible than traditional Jenkins jobs. There was a great blog post that just went up last week, I think, on conditional build steps and how you can replace those in pipeline without jumping through quite the same weird hoops you need to put conditionality in a traditional Jenkins freestyle job. So the way we see it, the reasons for declarative are that we want to have benefits for both new users of pipeline and existing users of pipeline. We're going to touch on the new-user reasons first; I think these don't just apply to new users, but they're particularly relevant there. Declarative has a lower barrier to entry than scripted pipeline.
It's not just throwing you at a blank text editor and saying, here, write a Groovy script to run my build. It has a predictable, well-defined, and documented syntax that can tell you what you need to do, how to do it, etc. And as I mentioned, there's the upcoming UI editor, so that you'll be able to actually go to Jenkins, write your job through the editor, have it save down into a Jenkinsfile and round-trip, etc., so that you can get started without having to even touch the Jenkinsfile by hand. We're not quite there yet with the editor, but we will be soon. Declarative is more human-readable. Again, I said that it's more of a configuration file than a script file, and while I'm not saying configuration files are necessarily easy to read, they tend to be easier to read than scripts. Declarative does not require Groovy-specific expertise. Now, I don't think that scripted pipeline necessarily requires a lot of Groovy-specific expertise either, but it can feel that way, and when you get into more complicated things, it can be that way. So we wanted to make sure people didn't get intimidated by thinking they need to learn a new language or write scripts in something they're not comfortable with just to run their build. So we think that declarative is a better experience for someone who's just getting started with pipeline, and for someone who's not writing a ton of them, for whom it's just part of their job; well, we hope that people will find that to be the case. Now, while declarative is in large part directed at the newer or more casual users, we do think it will be really useful for the more advanced or existing users. Since declarative is using the same engine as scripted, your existing investments in shared libraries and the like still apply. There's not a case where you have to start over and learn a completely new thing. You don't have to throw out all the work you've done just to switch over to using declarative.
And by expanding the usage and usability, we're putting less burden on the Jenkins domain experts in a shop. We don't want a situation where one or two people are the ones who know how to write Jenkins jobs, and everybody else has to go through them to get a Jenkins job written. So it should hopefully lower the burden on the experts and empower everybody else more. Again, since we're moving to this more configuration-like syntax, we think that's going to make collaboration and code review easier. One thing: the error reporting I mentioned isn't just available at the beginning of the build. There's also a CLI command, with more to come, to run that linting or validation against a Jenkinsfile without even having to run a build. So you can get a faster feedback loop on whether you've actually written a valid Jenkinsfile. And we also think there's real value in separating the Jenkins-infrastructure-related configuration, like what are my agent names, what tools do I need installed, from the step execution, so that you don't have the configuration interwoven with the build steps. You can more easily see, oh, this is the stuff that I need changed when I'm copying it over to a new master; or, as an admin, I can go change the label everywhere without having to worry that somewhere deep in the steps of this one build maybe they do something different with it. We think that separation is going to lead to easier maintainability and easier scaling. So now let's do a walkthrough of the syntax. Hold on one sec; I need to make sure my presenter notes show up. You can't read this, so we're going to move on. This is just an example of scripted pipeline and declarative pipeline; don't worry, there's a lot better visualization later. I just wanted to keep this so I could make a point: it's not necessarily the case that declarative is going to be shorter, in terms of text, for all of your pipelines.
The scripted pipeline here is shorter than the declarative pipeline. Declarative pipeline is a little verbose at times, because we believe the verbosity provides more information and makes it easier to understand and use; the thought was that the problem is not necessarily that a pipeline could get too long, it's that a pipeline could get too confusing. So now we'll actually start looking at the syntax. I will find some way to get links to the examples I've got here, but for now: the first thing that's relevant is right there, the pipeline block. Everything in declarative goes within a block called pipeline, and if it's not in there, declarative doesn't care about it. So we've got our own syntax within that, which is not executed exactly the same way as scripted pipeline. Then the first thing we've got after that is declaring the agent that our job is going to run on. Here we're saying that we want to run in a Docker container: we give the image name, and declarative will automatically say, okay, they didn't say what label they want to run on, they can run anywhere. You can specify the label, but you don't have to. It will fetch the container image, it will run the container, and it will run the rest of the build within that container. So it's a simpler way to specify the configuration for your agents, and it can be overridden per stage, not just at the top level. There are other options for agent besides just docker, obviously: there's label; there's dockerfile, in which case it builds the Dockerfile from your repository and runs inside the resulting container; and there are also some magic shortcuts for saying don't run on any agent, for some weird edge cases, or run on any agent. In general I think you'll figure that out, and plus, we've got docs. Options contains a few different things on the back end, but the way to think about it is: options for your pipeline that apply across your entire pipeline.
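To make that shape concrete, here is a minimal sketch of the pipeline block and a Docker agent as just described (the image name and build command are illustrative, not taken from the talk):

```groovy
// Everything lives inside the pipeline block; nothing outside it
// is treated as declarative syntax.
pipeline {
    // Run the whole build inside a Docker container. No label is given,
    // so this can run on any agent capable of running Docker.
    agent {
        docker {
            image 'maven:3-jdk-8'   // illustrative image name
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'   // illustrative build command
            }
        }
    }
}
```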
So here we're setting the buildDiscarder job property to make sure that after five builds, you know, when we run our sixth build, the first build gets deleted, etc. And timeout here is the timeout step wrapping the entire build: if the build takes more than 30 minutes, it will get killed and reported as having timed out. So when we need to do things that don't just apply to one part of the build but to everything, those show up in options. Parameters are traditional job parameters. We've pulled them into their own section here, so it's a little more clear than the way it is in scripted pipeline. So string, booleanParam, various things; I don't think we need to go into a lot of detail there. The one thing worth noting is that in current versions of Jenkins, from 2.17 onward, there's a params variable in your scripted or declarative pipelines that will use the default value if you haven't already specified the parameters, so you don't have to run the build once and then run it again to make sure it doesn't error out. You still just get the default value on that first run, but at least it doesn't error, so that's something nice. The next key part, and probably the bulk of declarative, is stages. You put all of your stages inside the stages block: at least one, and as many more as you want. Each stage takes a name, then it can take some configuration that we'll look at later, and then a block of steps to execute, and each of these chunks of steps is executed in that stage. They'll show up in the visualization in Blue Ocean so you can see how long that particular chunk took and what the results were from that particular stage. Its organization is a good thing; having a better sense of the individual parts of your build is a good thing, so we're enforcing that by requiring that everything be in a stage. I mentioned that try/catch isn't needed to make sure you send an email at the end of the build.
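The options and parameters sections just described might be sketched like this (the parameter names, defaults, and descriptions are hypothetical, not from the talk):

```groovy
pipeline {
    agent any
    options {
        // Keep only the last five builds; running a sixth deletes the first.
        buildDiscarder(logRotator(numToKeepStr: '5'))
        // Kill and mark the build as timed out after 30 minutes.
        timeout(time: 30, unit: 'MINUTES')
    }
    parameters {
        string(name: 'TARGET_ENV', defaultValue: 'staging', description: 'Where to deploy')
        booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Run the test suite?')
    }
    stages {
        stage('Build') {
            steps {
                // params is populated with defaults even on the first run.
                echo "Target environment: ${params.TARGET_ENV}"
            }
        }
    }
}
```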
That's another thing that really annoyed me about scripted pipeline: if your build fails because a command fails, or anything really, then unless you've wrapped that section of code with a try/catch or pipeline's own catchError step, the build will just stop when it gets to the error, and we'll never actually clean up afterwards or send an email to let you know it broke, etc. So we have the post section now. The post section is actually available both for the entire build and for individual stages. It checks to see whether the current build status matches a condition: always means it always runs, regardless of what the build status is, and then we've also got success, unstable, and failure. There's also changed, in case the build status changed from the previous build, and this is an extension point, so we can add more conditions going forward. So no matter what, when the build ends, we're going to gather the JUnit test results and report on them. If the build is successful, we'll get an email saying hey, the build succeeded; on unstable we'll be notified there are test failures; on failure, that there are build failures. Now, this is a fairly simplistic example, but here we've attached this just to this specific stage, so we're going to get these emails based on the results of this one stage, not necessarily the whole build. You may not want to send emails on the result of one stage, but you may want to, again, archive unit test results even if the build failed, like on a FindBugs check that comes after the unit test run or something like that. Or you may need to do cleanup if you're running a more complicated integration test, where you want to make sure the machine gets back into a pristine state before you run the next build. Now we have another stage here that has the when condition. when is evaluated to determine whether we're going to execute the stage. The example I have here is using one of the built-in
conditions, and we're adding more there as well. The built-in condition here is branch: if the branch we're currently on matches the pattern we've given it, in this case master, then we're going to publish our artifacts to S3. If we're on a pull request branch or a feature branch or something like that, well, we're not deploying it, we're not going live with it, so we're not going to publish to S3. So we've got that conditionality in what I think is an easier way than we've had previously. There's also a condition that looks to see whether an environment variable is set to a specific value, and one that allows you to write a pipeline expression, which should return a boolean, ideally, so you can do more complicated logic for your when check: you know, is it noon, is it afternoon or is it morning; for some reason you might actually care about that. And then here we've got a step invocation using the kind of ugly metastep syntax. I just wanted to show that that actually can be done, that you're not just limited to the more aesthetically pleasing and simple steps; the legacy, older steps that haven't been updated to have the better syntax throughout all of pipeline can still be used within declarative. Now we've got our final stage, which also only runs when we're on master, and in this case we're using something that comes from a shared library. I mean, there is no actual step called withTower; somebody wrote it for an example. But still, if you've got your shared libraries available, you can use them within the steps in declarative just as you can in scripted; they work just fine, the validation can still be used to some extent, and you can nest steps when you've got block-scoped steps that say, okay, everything within this block runs with access to Tower, with the credentials for Tower.
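A stage combining the when and post directives described above might be sketched like this (the branch pattern is from the talk; the deploy script, report path, and email address are placeholders):

```groovy
        stage('Deploy') {
            // Only execute this stage on the master branch; on pull request
            // or feature branches the whole stage is skipped (and shown as
            // skipped in Blue Ocean).
            when {
                branch 'master'
                // Alternative built-in conditions (not combined here):
                //   environment name: 'DEPLOY', value: 'true'
                //   expression { return params.TARGET_ENV == 'production' }
            }
            steps {
                sh './publish-to-s3.sh'   // hypothetical deploy script
            }
            // post runs after the stage based on its result; the same section
            // also works at the top level for the whole build.
            post {
                always {
                    junit '**/target/surefire-reports/*.xml'
                }
                failure {
                    mail to: 'team@example.com',
                         subject: 'Deploy stage failed',
                         body: 'See the build log for details.'
                }
            }
        }
```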
And now the next example. Here we're just using a label, saying run on any agent that has the docker-agents label on it, and we're taking advantage of a nifty trick with the environment section, where you can set environment variables that will be available throughout the build. What we're doing here is using a special function available in declarative to take the ID of credentials you've configured in Jenkins and automatically put them into an environment variable that you can then use later on. So this is a shortcut so you don't have to jump through as many hoops with the withCredentials step to get those environment variables populated; you can actually have access to your credentials throughout the build. Another section, or directive, we've got here is the tools directive. If you've used Maven or JDKs or NPM or a number of other tools in Jenkins, you may have encountered that they can auto-install onto the agents if you've configured them on the master; you don't have to make sure they're already installed, etc. What we have here is a nice simple syntax for making sure those tools get installed onto the agent before we run: give me a Maven tool with this configured version, give me a JDK tool with this configured version. An important thing to note is that the tools directive and tool installers don't actually work in Docker containers. I'm working on that; it's a limitation in pipeline in general. But if you're running on just straight agents, this will work for you out of the box. Yeah, this one's just a simple stage, but I wanted to have an example that actually seemed kind of realistic, so I wanted to make sure I was actually running steps, and here we're running two steps. You can run as many steps as you want; my examples have a tendency to be just one step, so I wanted to show that yes, you can do more than one. I know that may have been obvious, but I was afraid it wouldn't be. Next up: tests.
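The pieces of this second example, which the next part of the walkthrough goes through, might fit together roughly like this (the credential ID, tool installation names, image, and commands are all illustrative):

```groovy
pipeline {
    agent { label 'docker-agents' }   // any agent carrying this label
    environment {
        // credentials() resolves a Jenkins credential by its ID into an
        // environment variable, without needing a withCredentials block.
        AWS_CREDS = credentials('aws-deploy-creds')   // hypothetical credential ID
    }
    tools {
        // These names must match tool installations configured on the master;
        // the tools are auto-installed on the agent before the build runs.
        // (As noted above, tools does not work inside Docker containers.)
        maven 'Maven 3.3.9'
        jdk 'JDK 8'
    }
    stages {
        stage('Build') {
            steps {
                sh 'make clean'
                sh 'make package'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'node:6'     // illustrative image
                    reuseNode true     // same node and workspace as the stage above
                }
            }
            steps {
                sh 'make test'
            }
        }
    }
}
```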
So here we're showing per-stage configuration, and specifically per-stage configuration of agent. Our first stage, which ran make clean and make package and generated our build artifacts, was running on just the labelled agent, and now we want to run tests, but we want to run those inside a Docker container. So we've specified the image we're going to build in, and we've specified reuseNode true. What that means is that this stage's agent will run on the same agent the previous stage ran on, and will have access to the workspace, the checkout, and the artifacts we already built, so we don't have to worry about copying them around; everything's already available. So we can then run our shell step to actually do the deploy, using that environment variable we defined before with the credentials; it's got access to the artifacts and has everything it needs to deploy, without having to worry about stashing things between stages. The reuseNode field doesn't mean anything at the top level, because there's no previous node to reuse, but it's a handy trick for docker and dockerfile agents, so you don't have to check things out twice, build things twice, or copy your artifacts around. So those are the two examples. Let's talk a little bit about the validation. Like I said, this is my baby; this is my favorite part, because I've gotten so annoyed at error reporting in scripted pipelines, and at the lack of easy visibility into what I did wrong, because I do a lot wrong. The first thing, as I said, is that it always happens at the beginning of a pipeline build. When your declarative pipeline build starts, it compiles and validates and makes sure your syntax is actually right, and not just in the sense that the Groovy script can compile, because getting a compilable Groovy script is not very hard; you can have very broken Groovy that can still compile. But once we're in that phase, we're supposed to start doing
the validation. We look to see: okay, did you have a stage? Because if you don't have a stage, what's the point; we're going to error out on you. Did you supply an agent type that actually exists and is available on this master? Because if you try to do agent banana, there's no agent implementation for banana, it doesn't know what that is, and it should tell you that ahead of time and say, well, did you mean one of these? It'll do the same thing for parameters: if you give an agent the wrong parameter, if you give a step a parameter that doesn't exist, or you have the wrong type, it's going to tell you that, so you can know ahead of time: oh, right, that's what I need to go fix. And the errors point to the problem areas with line and column number, with what I hope is a useful (and internationalized, though not yet localized) error message, hopefully with suggestions to point you in the right direction. So, my apologies for this text size, but here I got an error saying invalid parameter, did you mean unit? Because I used the timeout step, and I got time right, that's an existing thing, and 15 is the right type, but I typo'd the unit parameter name. Okay, so it's telling me what I did wrong, where I did it wrong, and giving me an idea of how to make it right. Next, here I'm calling the sleep step, but instead of giving it a number, I'm giving it a string, and that's not a valid parameter type; you can't really tell something to sleep for, quote, 10 minutes, unquote. So it's telling me that it was expecting an int but it got "10 minutes", so you might want to change that. And when I've got an empty stage that doesn't have any steps in it, it reports that there's nothing to execute within that stage: that doesn't fit into the syntax, that's not allowed, you've got to actually have something to do in a stage. So it's giving me a useful error message pointing at what
went wrong, giving me the line and column information, same as you'd get from a compilation error. I think that's really useful. Now, you can do this linting without actually having to run the build. The way that I recommend is using the Jenkins SSH CLI, in which case you need your Jenkins administrator to open an SSH port, and you need to make sure you have creds, but we'll ignore that for the moment. You SSH into the master, call the declarative-linter command, and pipe in the Jenkinsfile, and it will give you the same results, the same messaging, that you get when you run it in your build. It'll tell you if it's successful, it'll tell you if it failed, and if it failed, it'll tell you where it failed and why it failed. You can also do that via the REST API with curl. The curl command is a little ugly because I made sure we're actually using Jenkins crumbs, because you really should have crumb protection enabled on your master; it's a good security tip. And we've got plans for, I'm sorry, yes: the question was whether you could validate scripted pipelines using this as well. No. We would still like to eventually be able to do validation and linting for scripted pipelines, but it's a much harder problem than doing it for declarative; that's part of why we wrote declarative. With this structure and predictability, we don't have to worry about things like, oh, what type is that, what is that random class, etc.; we actually know what all the possibilities are and can better analyze what could go wrong. So for the foreseeable future, no, we will not have validation of scripted pipelines. We do have plans for a more flexible offline validator that doesn't require you to SSH into Jenkins; that's not around yet, but it is on our roadmap, as well as a GDSL for IntelliJ and other things to make the development and testing of your declarative pipelines easier.
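The two linting invocations just described might look roughly like this (the hostname, port, and URL are placeholders, and the SSH port and your credentials must already be set up by your administrator):

```
# Lint a Jenkinsfile over the Jenkins SSH CLI without running a build.
ssh -p 2222 jenkins.example.com declarative-linter < Jenkinsfile

# Or via the REST API: fetch a crumb first (crumb protection should be
# enabled), then POST the Jenkinsfile to the validate endpoint.
JENKINS_URL=https://jenkins.example.com
CRUMB=$(curl -s "$JENKINS_URL/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)")
curl -s -X POST -H "$CRUMB" -F "jenkinsfile=<Jenkinsfile" \
     "$JENKINS_URL/pipeline-model-converter/validate"
```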
We'll see where that goes. And like I've said, this is just 1.0, so we'll see where we end up taking things further; I mainly want to focus on what we've already got for you. Who here's heard of Blue Ocean? If you haven't, you should check it out. It's, like, pretty, especially by Jenkins standards; I mean, by Jenkins standards it's gorgeous, and I say this as somebody who loves the traditional Jenkins UI, because I've lived in it for nine years. I'm not going to talk that much about Blue Ocean here, because I am not a UI person, but I did want to mention a couple of things that are declarative-specific, or related to declarative, in Blue Ocean. We've got some special smarts on both sides for optimized visualization. Operations inside declarative like the SCM checkout, Docker image prep (building or fetching the image for the container), and post-build actions, things that are not specified inside your actual stages block but that still take time in your build, get marked with special behind-the-scenes synthetic stages, so that Blue Ocean knows it doesn't have to put those in the main UI. As long as those don't fail, you're not really concerned with seeing them in your visualization; that's just the cost of doing business. You know there's going to be a little time for your checkout, a little time for your Docker image prep, etc.; you don't need your graph of stages to necessarily include those. And as I also mentioned, we've got special marking of stages that have been skipped, due either to an unsatisfied when condition or to a failure in an earlier stage, so those will show up differently in Blue Ocean, and you'll be able to see, okay, this stage didn't run because there was a when condition that was not met. It'll be, I think it's gray, but don't quote me on that. But
the point is, every single build you run will show in Blue Ocean every stage that's in the execution model, even the ones that didn't run that time. So even if the build failed on the first stage, it'll still mark that the second, third, and fourth stages existed; they just won't have done anything, and they'll be displayed in a special way so you can see they were skipped. Now, just a little teaser on the editor. Like I said, the editor will go into beta on Monday. It's still not quite done; it doesn't yet do the round-tripping, being able to read a Jenkinsfile from your git repo and then write the changed Jenkinsfile back to the git repo. That's in the works, and it will happen before it goes 1.0, but I wanted to show you a little bit of what the UI looks like. Pretty, again; it's a visual editor. You can specify your stages, you can specify parallel execution by clicking that plus there, and you can see here we've got our test stage executing on Chrome and on Firefox, and also on Internet Explorer, and then we get our deploy stage there. So that's just the basic graph you'll end up getting; it looks almost identical to what you see for the run visualization in Blue Ocean, which is not a surprise, since it's part of the Blue Ocean UI theme. Now, here we're specifying a shell step to run inside the Chrome parallel chunk: you just put in some shell and it will run. And it is able to do the validation in real time, so that when we use the echo step and don't put in its required parameter, it says, wait, no, that's not valid, you need a parameter, you can't run that without a parameter. It's giving you that validation right away through the editor, without having to wait to run linting against it or run it in Jenkins, which we think is going to be really handy, especially when you're getting
started, but also, I still use the freestyle editor for, oh, I just need to do something quick, bam bam bam, okay, right, it works. And again, we've been designing declarative with the editor in mind since day zero. We've made sure that conversion between the syntax that's internal to the editor and the syntax that actually runs in Jenkins is seamless; we've got innumerable tests making sure of that. We've made sure the data model makes sense for the editor. We very, very much want to make sure the editor is a good, usable tool for you with declarative pipelines, and that it can help, well, kill freestyle. Because with the editor, I personally feel we're getting to the point where freestyle doesn't offer anything that you can't do better in pipeline, either in declarative for most cases or scripted for the more complicated cases. Now, I'm sure I'm wrong, I'm sure there are things I'm missing, but we'll get those too. And so, yeah, Monday the beta comes out. I'm not sure how long it'll take to get to 1.0, but I expect it will be this spring, and we're looking forward to you giving it a try, giving feedback, and seeing what's horrible and what's wonderful. So, what is the 1.0 release? I just want to wrap up with that, since, you know, that's why I'm here. 1.0 came out Wednesday of last week; it's in the update center. We will be very, very, very careful not to make any breaking backwards-compatibility changes; if we do, that's a bug and we will undo it. But I think our tests are comprehensive enough that that's not going to happen by accident, and I won't merge anything that does that either. It's important to note that declarative does require Jenkins 2.7.1 or later, the first LTS line of Jenkins 2; Blue Ocean has the same requirement. Who here's running Jenkins 2, and who here's running Jenkins 1.x? It's a good time to upgrade to 2.7. I'm not, I'm not entirely being facetious; a
little, but not entirely. I think the Jenkins 2 line is beyond mature at this point; I think it's worth the upgrade. I think the improvements it has over the 1.x line for usability and UI are worth it, and if you want new stuff, you kind of have to go to 2. So, there are blog posts coming up, and one already up, on jenkins.io introducing declarative, and there'll be more detailed blog posts on specific aspects like the syntax checking or the Docker integration. This talk, obviously, is part of the 1.0 launch. We're doing a Jenkins online meetup on Wednesday, February 15th, talking in more detail about declarative, with more information, deeper demo dives, and some talk about the editor as well. And there is, by I think most open-source standards, and especially by Jenkins standards, an immense amount of documentation up on jenkins.io; thank you, Tyler. When you go to the pipeline documentation on jenkins.io, it starts by showing you declarative. You can switch over to see the scripted, but our plan going forward is that the default way you start is with declarative, and so we want to make sure it's documented, it's accessible, it's understandable. If you find flaws in the documentation, things that could be improved, pull requests are very much welcome, as are bug reports, if you don't actually want to fix the docs yourself. So, resources: like I said, the main one is jenkins.io/doc. That's the canonical place, the definitive place, the right place to go to find documentation on declarative and on scripted pipeline, and pretty much anything else that Tyler actually gets around to writing about Jenkins. I think Tyler and his team have done an amazing job with the jenkins.io docs; it's something we're proud of, and I think it's really useful, so don't be afraid to give it a look. Amusingly, the examples
pull requests actually didn't land yet, so we'll work on that. But I think there are enough examples in the documentation that you'll be okay for now. And if you really are curious, you can find the source for the plugin on GitHub, under pipeline-model-definition-plugin. Don't ask about the name; you never have to think about it. If you're on Jenkins 2.7.1 or later and you just update the pipeline aggregator that pulls in all the other plugins, this gets pulled in with it, so you don't have to jump through hoops to install it. But in case you wanted to see the source, I always feel the need to link it. And we have reference cards. Has anybody already picked up a reference card from the Jenkins booth? All right, well, I've got some more up here, and they're also available online. We're pretty happy with them, pretty material. And that's pretty much it, so we've got about four minutes for questions. Does anybody have any questions? Yes?

Do you plan to support stages in parallel? The question was, do we have plans to support stages in parallel. So right now you can use parallel, but then you have to specify your nodes, et cetera, within that. It's ugly, it's a little awkward. So the answer to your question is yes. I have a pending post-1.0 work-in-progress pull request doing exactly that, to give you the ability to say, here's a bunch of stages to run in parallel. We're still working on exactly what the syntax will be and exactly how the execution works, but parallel stage execution will be in within the next six weeks or so, would be my guess. We consider that a requirement. There are some things that Blue Ocean needs to do for better visualization of that, so I need to bug them, but we consider that a requirement. We wanted to have it for 1.0, but we wanted to make sure that we focused on getting what we had really solid before adding that feature. But it will be there soon, I promise.
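To make the "awkward" current state concrete, here is a hedged sketch of what parallel execution inside a declarative pipeline looks like today, with the scripted parallel step wrapped in a single stage; the branch names, node labels, and shell commands are invented for illustration:

```groovy
// A sketch of today's approach: one declarative stage wrapping the
// scripted `parallel` step. Each branch has to allocate its own node,
// which is the awkwardness described above.
pipeline {
    agent none
    stages {
        stage('Parallel Tests') {
            steps {
                parallel(
                    'linux': {
                        node('linux') {        // hypothetical label
                            sh './run-tests.sh'
                        }
                    },
                    'windows': {
                        node('windows') {      // hypothetical label
                            bat 'run-tests.bat'
                        }
                    }
                )
            }
        }
    }
}
```

The pending work described here would replace that pattern with first-class parallel stages, each with its own agent section, but as the speaker notes, the final syntax was still being worked out at the time of this talk.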
Yes? So the question was, can one stage declare which stage is going to run next and what steps are going to run in that stage. The stage execution order is the lexical order; it's the order the stages are specified in the Jenkinsfile. I have played around with being able to say this stage can't run until these other stages are done, or when this stage finishes, this other stage kicks off. But I'm not sure if I'm going to be able to find a good syntax for that that actually makes sense. So for now, it's just what's next in line, but we'll see. That's an area I'm definitely interested in. It would be more of an execution dependency graph, but I need to figure out what the right way to do that is, and if I can't find the right way, we're not going to do it. It's really important to us that declarative continue to make sense and provide both simplicity and power. So if we have to make a compromise between full power on one side and full understandability and usability on the other, in declarative we're going to go with usability, because you can always switch to using a scripted pipeline when you need more power.

Yes? Is there any way to enforce declarative? Right now, no. There are some ideas. I would expect that it could be something from CloudBees that would require that, but right now there is not a way to enforce requiring declarative for everything.

Yes? If we need the power of a scripted pipeline but want to, you know, have our main pipeline be declarative, is there a way to hook in just the necessary pieces? Yes. The question was whether there's a way to get some of the power of full scripted pipeline without having to completely leave declarative pipeline. There's a special step available called script. It's just script, curly brace, steps. Anything that's inside that script block doesn't go through validation, so we're not checking it to make sure
it fits the subset of the syntax we allow, we're not making sure that the step parameters are valid, and we're allowing you to do if/else, for loops, et cetera, things that we don't otherwise allow in declarative.
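As an illustration of that escape hatch, here is a hedged sketch of the script step inside an otherwise declarative pipeline; the stage name, commands, and loop contents are invented for the example:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Ordinary declarative steps are validated as usual.
                sh 'make'
                // Everything inside `script` skips declarative validation,
                // so full scripted constructs like variables, if/else,
                // and loops are available here.
                script {
                    def browsers = ['chrome', 'firefox']
                    for (int i = 0; i < browsers.size(); i++) {
                        sh "./test.sh ${browsers[i]}"
                    }
                }
            }
        }
    }
}
```

The trade-off is exactly the one described above: inside the script block you regain the full power of scripted pipeline, but you give up the upfront syntax and parameter checking that declarative provides.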