Patrick, you can begin. Patrick, if you're speaking, we can't hear you. I got muted somehow. Sorry about that. Apologies for a little bit of a cough here. So again, welcome to today's Online JAM. Thanks for joining us. Today we're going to talk about Declarative Pipeline and some of the new things that are coming in Blue Ocean in the future as well. To begin, a little background. About a year and a half ago, Jenkins passed 100,000 installations, by the numbers being reported back to the Jenkins stats. Today, we're around 160,000 installations of Jenkins. Of that, we've got almost 100,000 Jenkins 2.0 or 2.x installations out there, around 600,000 agents, and around 2 million users across multiple millions of jobs. And according to the latest survey, 90% of the people using Jenkins have seen their use of Jenkins in their businesses grow. So we're seeing more and more uptake of Jenkins across organizations. A little under a year ago, right around the March/April timeframe, we released Jenkins 2.0. With that, the three major pillars we wanted to look at were pipelines, usability, and scaling Jenkins to more users. So today is mostly about pipeline, but also about usability, specifically around pipelines. Hopefully by now, almost everybody who's been exposed to Jenkins has heard of pipeline as code. It was something we came up with in the pipeline plugin in Jenkins as a new way to do builds, and also to extend that to continuous delivery in your organization. And with that, we created what's called the Jenkinsfile, which you can store in your repository so that your continuous delivery pipeline is auditable and more portable, and you can move it from Jenkins to Jenkins.
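To make that concrete, here's a minimal sketch of what a Jenkinsfile checked into the root of a repository might look like. The stage names and shell commands are placeholders for illustration, not from the talk:

```groovy
// Jenkinsfile: a minimal declarative pipeline, stored in source control
pipeline {
    agent any                      // run on any available agent
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package'   // hypothetical build command
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'            // hypothetical test command
            }
        }
    }
}
```

Because this file lives next to the code, every change to the pipeline goes through the same review and history as the application itself, which is what makes it auditable and portable.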
It's also durable, because pipeline will resume after a crash if the master goes down, or if you need to restart the master for maintenance, which is another big reason why pipeline was redesigned. It's extensible: we have shared libraries, so you can add more features, and it's also extensible via plugins. One of the goals there was to make your pipelines more DRY, so you didn't have to reinvent the wheel every time. And by automatic, I mean that when you go into a new environment with an organization folder or a multibranch project, adding the Jenkinsfile to your repository automatically creates that job within your Jenkins and starts running it, which is a big thing: you don't need to go in and configure a job every time you want to do something in Jenkins. It's much more automatic and everything's automated. That also means automated pull request building within those organization folders as well. Now, pipeline originally was a Groovy DSL. It's still a Groovy DSL, but that meant you needed some knowledge of Groovy to actually create your pipelines. We wanted to extend that to more users, to make the barrier to entry for writing those Jenkinsfiles a lot lower. Looking at the user base, there are quite a few users in the intermediate and beginner range who aren't necessarily going to pick up pipeline and learn Groovy to do this. We wanted everyone, from beginners to intermediates to experts, to be able to use pipeline and take advantage of the Jenkinsfile across their organization. So, some of the goals we had with Declarative Pipeline. The first was to separate the concerns.
By that I mean: with Jenkins, you need to understand how the build agents are set up, set up everything else around your pipeline, and then construct your pipeline within that. We wanted to separate the structure of the pipeline from what the pipeline is actually doing, so you can create more consistent pipelines without having to understand all the Jenkins internals behind the scenes. The next big goal was to have a well-lit path, so you can create pipelines consistently, in a best-practice manner, as opposed to having every pipeline be a snowflake. You don't have to worry about coding styles. You have one way to structure a pipeline, and the differences are in what the steps are within the various stages, and what the stages themselves are. One of the other big advantages is that it's more predictable. I can look at a pipeline written in declarative syntax and understand what it's going to do much faster than if it's written in an imperative programming language, where I have to go through and figure out what all the programming steps are going to do to create that pipeline. So it makes it much easier to quickly look at a pipeline, know exactly what it's going to do, and go from there. This also allows us, as you'll see later, to put a GUI on top of it to construct the pipeline, because declarative uses a predictable model. Lastly, the big goal was to be able to pull in all the major stakeholders in a continuous delivery pipeline, who can look at the pipeline and do code reviews, because it's easy to grok and easy to understand. More people can be involved in the process, contribute, and write those pipelines, or at least review them, without having to understand the intricate details of all the Groovy that's possible within scripted pipeline.
One other note before I hand this off to Andrew, and I'm sorry I'm running through this quickly, just because I'm sick: we've updated all the documentation on jenkins.io. If you go to jenkins.io/doc, Tyler's done a great job of updating those documents, Liam's contributed a lot, and more blogs are going to be released over the next couple of weeks about a lot of these declarative features as well. So there's been a lot of push on getting the information out about this new way to create pipelines. With that, I'll turn it over to Andrew and he can start showing the syntax in action and the structure of that syntax. Okay, I'll stop the cursing everywhere now. Patrick, could you mute? Thank you. So I'm going to do a walkthrough of some of the syntax for Declarative Pipelines. Bobby Sandell will be talking some more about other parts of the syntax, but let's do a quick walkthrough here. This is publicly visible; there's the URL in case you care to look at the repo. We'll go through this and take a look at what's involved in the syntax. First of all, the most important part is that everything in a Declarative Pipeline is nested within a pipeline block in your Jenkinsfile. Anything outside of the pipeline block doesn't count as declarative: it won't be parsed as declarative, it won't round-trip with the editor, and more. So if you want to use a Declarative Pipeline, you've got to use that pipeline block. The first directive we're going to look at here is agent, which defines where the pipeline will run. The example I've got here is a pretty simple one: it's a label with an empty string, which means it can run on any label, just like agent any. That has the same meaning, but I wanted to show that agent normally takes a block, not just a simple expression like agent any or agent none, which are two special cases. There are other possible built-in agent types.
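As a quick sketch of the agent forms just mentioned (the label name and image tag below are illustrative examples, not taken from the demo):

```groovy
pipeline {
    // "agent" has two shorthand forms and several block forms:
    agent any                          // run on any available agent
    // agent none                      // no global agent; every stage must declare its own
    // agent { label 'linux' }         // run only on agents with this label
    // agent { docker 'maven:3.3.9' }  // run inside a Docker container
    // agent { dockerfile true }       // build the image from a Dockerfile in the repo
    stages {
        stage('Example') {
            steps { echo 'hello' }
        }
    }
}
```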
As I mentioned, there's agent none, which you can use at the top level of the Declarative Pipeline to make sure you're not running on any agent, but in that case you will need to specify your agents in each stage explicitly. Generally, you're not going to want to use that. There's also a docker agent type, which Bobby will go into, and dockerfile, which we will not be touching on much today; see the documentation for that. The next directive I'm going to look at here is the tools directive. The tools directive works with the tools Jenkins is able to auto-install, such as Maven and the JDK, and plenty of other plugins provide additional tools such as Gradle, NPM, and a lot more. It's important to note that tool installation doesn't work quite right with Docker containers currently. So this is useful when you're running on a label, but not as much when you're running with docker or dockerfile, so be aware of that. The syntax for declaring your tools is pairs of a tool symbol and the installation name configured in your Jenkins master's tool configuration. Not all tools actually have those simple string symbols, and if they don't, they won't work in the declarative tools section. So when you run into a case like that, when you get an error because FooTool doesn't show up right, open a ticket against that tool to add the symbol, and then it'll be fine. But here, we're going to see when I actually run the build that there's going to be a validation issue, because on the master I'm going to be running this on, there isn't actually a Maven tool configured named Maven338. We'll touch on that again when I actually go to run this. The environment section, strangely enough, sets environment variables. This is another thing Bobby will look at in a little more detail, with the credentials function you can use inside the environment block.
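Putting the tools and environment sections together, a sketch might look like this. The installation names are examples and must match what's configured under your master's Global Tool Configuration (Maven338 is the deliberately wrong name from the demo):

```groovy
pipeline {
    agent { label 'linux' }   // tools work with label agents, not Docker agents
    tools {
        // pairs of tool symbol + installation name from the master's tool config
        maven 'Maven338'      // will fail validation if no such installation exists
        jdk   'JDK 8'
    }
    environment {
        // identifiers must be valid in both bash and Groovy
        DEPLOY_TARGET = 'staging'
    }
    stages {
        stage('Build') {
            steps { sh 'mvn --version' }
        }
    }
}
```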
Environment variable identifiers need to be both valid bash variable identifiers, since they end up in the environment, and valid Groovy variable identifiers, or the syntax fails. If you don't use a valid identifier, you'll get an error at validation time. While I don't have an example of that failing here, I will show you what validation failures in general look like. You can't currently do more complicated Groovy expressions, or nesting of environment variables like FOO = "bar", BAR = "$FOO". But that should be possible before too long, once a pending pull request gets merged and released. So we are moving forward and fixing the gaps, but not everything necessarily works perfectly right now. The stages section contains a list of stages, strangely enough, and at least one stage is required. Here's what a stage definition looks like to start. Every stage must contain at least one steps block with at least one step. There are additional sections, a couple of which I'll look at below, but the only one that's required within a stage is the steps block. A steps block contains a set of steps, as you would use in scripted pipeline as well. There are some exceptions here that you'll get errors about if you try to use them: we don't allow variable declarations (you know, foo = "bar"), and we don't allow if statements, loops, or method calls on objects. You can use all of that stuff, but if so, you have to do it within a script block, which I don't have as an example here. You can use steps that take another block of steps as one of their arguments, though, like here where I've got a timeout step. Except, again, I've got another couple of validation issues: I didn't use the right type for the value for time, and I have a typo in unit. We'll look at that in a little bit, and you'll see the error that produces, and then we'll fix it. So the steps here are pretty minimalistic.
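A sketch of the steps-block rules just described: plain Groovy has to go inside a script block, while block-scoped steps like timeout are fine directly in steps. The browser list is a made-up example:

```groovy
stages {
    stage('Example') {
        steps {
            // variable declarations, if/else, and loops are not allowed
            // directly in steps; wrap them in a script block instead
            script {
                def browsers = ['chrome', 'firefox']
                for (b in browsers) {
                    echo "Testing on ${b}"
                }
            }
            // steps that take a block of steps as an argument are allowed:
            timeout(time: 5, unit: 'MINUTES') {
                echo 'This must finish within five minutes'
            }
        }
    }
}
```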
We're just running some echoes, because we're running in that timeout. Theoretically it shouldn't take longer than, well, "true" minutes, but that's obviously wrong; it should be five. And here we're showing that the Maven installation we did above will, once we fix the validation, give us the version number we're expecting. So it's auto-installed Maven, made sure it's on the path, and made sure the JDK is set as well, so that we can actually run Maven. The post directive can be used both on an individual stage, as I'm doing here, or for the entire build. Here I'm just doing a simple success and failure: if the build succeeds, we echo one string; if the build fails, we echo a different string. There's also unstable, always, and changed. Always will always run; we'll actually have an example of that later. Success, failure, and unstable will run if the build currently has that status, and changed will run if the build's status at this point is different from the previous build's status. Now we get to our second stage, and we see that we can override things like tools, environment, and agent on a per-stage basis. Here we're overriding the original Maven tool with a different version, so now Maven version should give us a different value than it would have before. And here's a third and final stage showing how to use parallel in declarative. Parallel can be used, but if so, it has to be the only step you invoke in the steps block. You can't combine parallel and other steps, to enable a good validation model on the back end. There is some syntactic sugar coming to make this easier, so that you can have whole stages executed in parallel, but that has not arrived yet. For now, these parallel blocks will all run on the same agent that the rest of the build, or the particular stage, is running on. But it's the same parallel syntax as in scripted. Again, I'm not doing anything particularly meaningful here, but...
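The per-stage override and the parallel-as-only-step rule might be sketched like this (Maven339 matches the demo's second installation; the stage names and echo strings are illustrative):

```groovy
stages {
    stage('Second Stage') {
        // per-stage override: this stage uses a different Maven installation
        tools { maven 'Maven339' }
        steps { sh 'mvn --version' }
    }
    stage('Third Stage') {
        steps {
            // parallel must be the ONLY step in this steps block;
            // it's the same parallel step as in scripted pipeline
            parallel(
                firstBlock:  { echo 'hello from the first block' },
                secondBlock: { echo 'hello from the second block' }
            )
        }
    }
}
```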
And now here's our post build post. So as I mentioned, always, always runs. It's the first one that runs. It runs before any of the other post conditions are evaluated. So no matter what, at the end of the build, even if the build is failed, we'll delete the directory and wipe the workspace. And again, a simple success and a simple failure. Here, using the mail step, just to let us know that a build has passed or failed. And one of the more advanced options, I guess, would be the options directive. This contains configuration that applies across the whole job. For example, we've got the build discarder set up here to make sure that we're only keeping 10 builds at a time so that we don't fill up our storage. And since we'd like to make sure this build doesn't take an hour, let's set up a timeout that applies across the entire build rather than just a few steps. So let's... Are there any questions yet? No, okay. Then let's see if this runs. Let's go to Blue Ocean here. I've already run a couple runs. First one failed, second one passed, but now this one should fail. Let's see what happens. Yep, it failed because I added validation errors. Let's take a look at what those errors look like. So before it's gotten any further in the build than just checking it out and parsing the Jenkins file, it's looking for errors and telling me what those errors are. So here it's telling me that at a particular line that I've got a wrong type parameter type for the time parameter for timeout and that I've got an invalid parameter name along with a suggestion for what the correct one is. And then we get the additional validation here about the tool where it says, wait, there's no 338. Did you mean 339? So we've got that nice validation that runs before you're actually executing the build so that you don't have to wait an hour and a half to find out that you've got a typo in a parameter name or the wrong parameter type or something like that. 
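Putting the build-level post and the options directive just described together, a sketch might look like this. The mail recipients and body text are placeholders; the retention count and timeout mirror the demo:

```groovy
pipeline {
    agent any
    options {
        buildDiscarder(logRotator(numToKeepStr: '10'))  // keep only the last 10 builds
        timeout(time: 60, unit: 'MINUTES')              // applies to the whole build
    }
    stages {
        stage('Build') {
            steps { echo 'building...' }
        }
    }
    post {
        always {
            deleteDir()   // runs first, wiping the workspace regardless of result
        }
        success {
            mail to: 'team@example.com', subject: 'Build passed', body: 'All good.'
        }
        failure {
            mail to: 'team@example.com', subject: 'Build failed', body: 'Take a look.'
        }
    }
}
```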
You can also do this validation with the Jenkins CLI, and the editor does it more or less in real time. So let's go fix those errors. Where do I have that? Ah, there we go. All right, so let's edit here. That first error, hold on, was because we had a bad Maven version in here, so let's change that to 333, which is what we meant to do. That validation error goes away. Our next validation issue: yeah, let's make that five minutes, fix the unit, and that should actually work. Now let's scroll down, fix those pesky validation issues, commit the changes, and run it again and see what happens. Do-do-do-do-do. It's taking a little while; it's probably installing things. Let's see what's going on here. So we can see here that it's executing; it's in the first stage currently. Ah, it has completed. So you can see the nifty and pretty Blue Ocean visualization, if you haven't seen it already. Now, if you look at the output from that first stage, we can see what Maven version it said. Ah, it said 333, as we wanted it to. It automatically installed it. It also complained a little, but I'm not going to worry about that. Now in the second stage, we're overriding the Maven version. Let's see what version it gives us here. Aha, see, it did correctly override that Maven version and gave us 339. And then we've got our three parallel blocks, each of which is printing who they are and where they are, and that's pretty much it. So, any questions I should answer before I go back on mute? No questions, Andrew. Great, thank you very much. Okay, so for those wondering where to ask questions, it's in the #jenkins channel on Freenode, FYI. And so I'm going to show some bits of how to do a bit more advanced stuff. Even though declarative is not really meant for super advanced things, we can still do some cool stuff in here.
My example is also on my GitHub account, the Spring PetClinic, which I've written a Jenkinsfile for. Starting off, I am getting an agent on the docker label, and I start my stages quite directly. I start with the build stage, and here I don't want to use tool installers, because Docker is much cooler, I think. So I say here that I would like to run inside the maven:3.3.9 Docker image, and I want to reuse the node the pipeline is currently running on. So I get an agent up here that I'm running on, and I'm then starting up a Docker container there to run this stage. In my silly example, most developers think the unit tests run too slowly, so they only want to run FindBugs here. So we run FindBugs, we skip the tests, and then in the post section, if we are successful, we collect those FindBugs reports. Then we move on to the test stage. Again, starting up a new Docker container with the same image name. We reuse the same agent, meaning that we're also keeping the workspace, so we don't have to do a checkout again, basically. And here is the condition Andrew hinted at earlier, I think. We will only run this stage if the branch matches this pattern: if we are on the master branch, this stage will run. And simply as an example here, we run the package with the integration profile set. Don't look in the POM file, because there is no integration profile, but as an example, we could run the integration tests here. And then, in the always condition, since it's archiveArtifacts, we can tell that step not to fail the build if there are no artifacts. Even if the build failed up there, we can always make sure to try to archive the artifacts. And then if we are successful or unstable (because FindBugs could have marked this as unstable, for example), we collect the unit test reports. Moving on to the next stage, we do a release test, but we will only do the release test stage if the branch matches release-something.
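The per-stage Docker agent with node reuse, the branch condition, and the forgiving archive step might be sketched like this. This is an approximation of the pattern described, not Bobby's actual Jenkinsfile; the artifact path and profile name are assumptions:

```groovy
stage('Test') {
    agent {
        docker {
            image 'maven:3.3.9'
            reuseNode true   // keep the same node and workspace, so no re-checkout
        }
    }
    when { branch 'master' }   // only run this stage on the master branch
    steps {
        sh 'mvn package -Pintegration'   // hypothetical integration-test profile
    }
    post {
        always {
            // don't fail the build if there's nothing to archive
            archiveArtifacts artifacts: 'target/*.jar', allowEmptyArchive: true
        }
        success {
            junit 'target/surefire-reports/*.xml'   // collect unit test reports
        }
    }
}
```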
And just for giggles, I'm running the Maven command, and here I'm using some Groovy GString magic to take the branch name, drop everything up to the last dash, and then add the build number, meaning that if the release branch is release-1.0, the version I'm sending in here would basically be 1.0.<whatever build number we're currently running>. Just to show the difference: everything within these brackets is interpreted by Groovy, and this part is sent to the shell command as-is, meaning that bash will actually evaluate it, just so you can understand the difference there. And then, same here, in always we archive the artifacts, since if we are on a release branch, the test stage won't run (because the branch name is not master). So we archive the artifacts here, same thing again: we allow the empty archive so we can safely run this post condition always. And on success or unstable, we run JUnit. Here I'm also running with the release profile. And then the last stage will actually perform the release. And here is a different way of... There are a couple of different when conditions you can use. You can use branch, there is environment, and there is also expression. Inside the brackets here, you can use any Groovy expression, basically. The sad part about using expression is that there won't be any of that nice validation Andrew showed before; it just gets passed on to normal scripted pipeline as usual. And if you look at my commit history, you might see that I've had some issues with my Groovy stuff; I needed several commits to fix that. But here's just another way of looking at the branch name: I'm checking if the branch name contains "release" and if the current result is not set, meaning that the JUnit tests before didn't fail. Because when we're collecting the JUnit test results up here, if there are any test failures, the build will be marked unstable, meaning that this stage will be skipped.
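A sketch of the expression condition and the GString-versus-shell interpolation point, under the same branch-naming assumption (release-1.0 plus build 14 yields 1.0.14); the Maven goal and flag names are illustrative:

```groovy
stage('Release Push') {
    when {
        // "expression" accepts arbitrary Groovy, but gets none of the
        // up-front declarative validation that "branch" does
        expression {
            env.BRANCH_NAME.contains('release') && currentBuild.result == null
        }
    }
    steps {
        // ${...} inside double quotes is interpolated by Groovy BEFORE the
        // shell runs: for branch release-1.0 and build 14 this passes 1.0.14
        sh "mvn -DreleaseVersion=${env.BRANCH_NAME.substring(env.BRANCH_NAME.lastIndexOf('-') + 1)}.${env.BUILD_NUMBER} deploy"
        // single quotes: $(hostname) is left as-is for bash itself to evaluate
        sh 'echo "released from $(hostname)"'
    }
}
```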
Because it will only run if we don't have a result yet. Then, in the environment, I am grabbing the credentials with the ID ssh-bob and adding them with the REL prefix. So, available to my steps, in the environment, there will be a REL_USR and a REL_PSW, as well as a REL that contains user:password. And then I run my perform release here, with the username and the password that are available as environment variables. And I'm sending away an email, but my local machine can't relay emails, so just to make sure it works, I'm echoing this out here. Any particular questions so far? I can't see anything. So if we look here in Blue Ocean, you can see that I have a couple of different branches. On the master branch, you can see the latest build here: since we're on master, the build and the test stages were run, but we skipped the release test and release push. I always keep forgetting that that's a dialog. While on the my-dev branch, for example, it's not called master, so we are actually only running the build stage. And on, for example, release-1.5, we ran build, we skipped test because we're not on the master branch, and release test and release push were executed. And let's see if I can find... yeah, if we look at release push, we can actually see the version. Can I see the build number here? Yeah, there. So this is build number 14, and what we actually released was 1.5.14. Okay. So that's quick and easy. Thank you. I think I'm done now. All right. Thank you, Bobby. Okay. So I am going to talk to you about the Blue Ocean pipeline editor. This is currently in preview, and we'll go over some of the various things that it does, have a demo, and look at what is planned for the future. So why do we need a visual editor? You know, I think if we go back to why people use Jenkins, we can answer this question fairly simply. There are lots of reasons to use Jenkins. It does great things.
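The credentials binding described above might be sketched like this. The credential ID ssh-bob is the one from the talk; the echo stands in for the real release command:

```groovy
pipeline {
    agent any
    environment {
        // binds a username/password credential with ID "ssh-bob":
        // REL becomes user:password, and REL_USR / REL_PSW are added automatically
        REL = credentials('ssh-bob')
    }
    stages {
        stage('Release Push') {
            steps {
                // single quotes: the secret is expanded by the shell, not by
                // Groovy, so it doesn't leak into the interpolated command line
                sh 'echo "releasing as $REL_USR"'
            }
        }
    }
}
```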
But a lot of developers use Jenkins because you don't really have to write any code to get rolling. You know, you can go and configure some kind of a job just using configuration forms and that's very appealing to a lot of people because you're already busy writing code for your own application. And then, you know, using the freestyle, the experience is very easy. You just use a plug-in. You pick some of the different things that you need. You fill out the fields. So the visual editor is really a way to bring that ease of use of the freestyle builds, you know, into a context that everybody can start getting the benefits of pipeline. And, you know, one of the other things that it does is puts things in one place. So, you know, I can remember countless times chaining together countless downstream jobs and configuring a bunch of different things just because I needed to with parameterized triggers and all those things. But by being able to put this into the pipeline form, you're able to leverage all the stuff that you can do there in sort of a centralized place. So, you know, again, why a visual editor? Well, pipeline is great, and especially the declarative, it's really kind of made it a lot easier and more approachable to make pipelines. But it still does have a moderate barrier to entry. You know, you need to know what steps to type. You need to know what parameters they have. You need to know what the data types are. As Andrew showed you, you know, you get validation errors at runtime. And it would be nice to know these things up front. And, you know, it's just another thing, as I said, for developers to learn when, you know, a lot of people are just busy doing other things. And the other thing, of course, is just it's not easy to visualize a text file sometimes, especially if it's a fairly large one that's spread out across a bunch of different parallel nodes and things like that. 
So, having the pipeline editor certainly is one of those things that helps to get the story right in the beginning. So, who is it for? It's probably not targeted at advanced users, and it only supports declarative pipelines. One of the questions that just came up in chat was about doing some dynamic stuff that declarative doesn't actually support, and those types of users are still going to have to use standard scripted pipeline. But really, for anyone who wants the easiest way to get started with pipeline, the editor is something that's available to show you what's there and give you guided forms and that sort of thing to build the pipeline out. And just a brief history. When pipeline came out about a year ago, there were a few of us that were really pumped about it and wanted to make it more accessible for everyone, because we could see all the benefits people got from it. This is not the first time: there were some pipeline editors created before, although the other ones were quick prototypes. It all started with Michael Neale spearheading this, with something like, hey, we should be able to create something that generates pipelines pretty easily. He whipped up a JavaScript app in short order, and we could see, hey, this might work okay. Then I actually took this a little over a year ago and whipped up a different one, and in that one we decided to go with React. We made some forays into that, and it ended up being such a good experience that it sort of paved the way for Blue Ocean to go that way as well. But the thing we really ran into was the fact that we didn't have any two-way conversion.
So this editor was basically using a separate file, which was really not going to work for things like multibranch. Just a quick recap here: pipeline in Blue Ocean. Blue Ocean is built for pipeline, and that's been the focus pretty much from the get-go. It has excellent visualization of pipelines. One of the things is that it shows parallelism in a way the stage view and a lot of the other things didn't really show, and that's fundamental to what we wanted to do: to really show people what their builds were doing. Then, to take that a step further, we wanted to make sure it was consistent between running and editing, so that if you go and make a pipeline that has some parallel stages, you should expect to see pretty much the same thing when you go and execute it as well. So the pipeline editor basically is that. It's here to lower the barrier of entry to use pipeline, again, to fill the void where freestyle builds were really easy because you could click your way to having something run. And also to provide a visual representation of how your pipeline is going to be executed, serially and in parallel, across how you've configured it, as well as doing things like showing errors as soon as possible. Again, referencing Andrew's example: while it's awesome that we get errors, and the validation is amazing, it would be really nice to know about them right away, before we've run the pipeline. So a lot of things are validated up front, and you can see in the editor exactly where the errors occur. Another thing is that, similar to Blue Ocean, the editor is design-driven.
So it's being thought about and taken down a path where we've got design work going on up front to make sure this is something that's going to be really usable and approachable for everyone. One other small bit: the pipeline editor in the preview just saves your work. So, let me swap over here to demo. Okay, what we're looking at here is the pipeline editor. This is just a very basic pipeline that doesn't really do much of anything; it just has a single stage called build. But you can see quite clearly that it starts somewhere, it goes to the build, and it's got a couple of things: you can add serial steps or you can add parallel steps. The other thing is, when you're looking at this main view, over on the right is basically where the editing happens. You're presented at first with the general pipeline settings. So, for example, where is the build going to run? You could pick any of the particular agent types that are there, but let's pick Docker. Say we want to run httpd:2.2.12 or something like that, and you can give it some args. That's just a very quick and easy way to set up an agent. You can set up environment variables just by adding new ones in here. So, for example, you could set USER if you want that to be apache, or a home folder or whatever, and put that in there like that. So that's the very basic top-level editing the editor provides, and it's going to have some more things there; I'll talk about that a bit later. Then, to edit a particular stage, you just click on the stage and you get the particular editor over here. If you want to change the name, you just type right there. If you want to delete the stage, you can click these little dots here and do the delete. And then it shows you a list of all the particular steps that are there.
You can see this one just has a print message with a hello world. Clicking on that brings up the editor for that particular step, and you see that that's really all it has there. We can go ahead and add some different steps. So maybe we'll just rename this to build, and add a step, and we want to call some Maven command, a clean install skipping the tests, because we just want this to be a quick build. And we get all the steps listed out there. The other thing we can do is support nested steps. So, some of the things we see, for example: allocating a node. We might want to allocate a node with a windows label right here, and you can see that then you're able to add child steps to this particular node, much like declarative has the steps nested, in a very obvious way; this works pretty similarly. So let's go ahead and add a step in here. And this could just be... oh, got that. Okay. And then if we go back out to the top-level stage here, clicking on the quick build, you actually can see all of the steps, including all of the nested steps, very quickly. So it makes it easy to see exactly what's going on for that particular stage. Now we can go ahead and add a serial stage by clicking the end here; it basically brings you to something you can start typing with. So let's say this one is test, and I'm going to say that's just fine for now. But you'll see that the editor actually shows now that there's an error. It's gone ahead and, at appropriate times, continuously validated this pipeline against the declarative validator. So we'll see here that something happened, and I can just click here, and I see: oh, at least one step is required, right? So let me go ahead and just fix that for now. And this might be, you know, let's see. Okay, and I'll say why I did that in a second here.
So now I've got a particular test, and it's for the Chrome browser, and my errors went away; everything seems fine here. Now, if I want to add parallel stages, I can just click the appropriate icon aligned with one of the other stages. Let's say I want to add Firefox tests, and I'll go ahead and add a shell script because I know it needs at least one step. Okay. One other thing here: change the label on that. So now we can see we've got a quick build and a couple of tests running on Chrome and on Firefox, and this is going to correspond exactly to what gets displayed when the pipeline runs. That's the gist of how this works. A couple of notes here: you'll see I've got a nice custom editor for a shell script, and for the other steps, if I go in and add some other sort of step, say a print message, it gives me some sort of editor as well. These editors are actually dynamically generated, so it's not really much work at all when new steps are added; we can just use the step metadata to figure out a quick editor that shows everybody the options. We'll probably tweak some of the more useful ones in the future. Now, the other thing about this preview is that you basically just have a New button and a Load/Save button. The Load/Save button takes your current pipeline and converts it into the declarative script, so when you click it, you get a text box containing the converted script. You can see I have the Docker image in there; it should be 2.2.12. It also lets you paste a declarative pipeline right in there, so if I go ahead and copy one from elsewhere, paste it in here, and click Update, you'll see it's been updated with the declarative pipeline I just pasted.
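The Chrome/Firefox layout built in the demo corresponds to parallel branches in the generated script. In current versions of Declarative Pipeline (1.2 and later) this can be written as parallel stages; the test commands here are hypothetical stand-ins:

```groovy
// Sketch of the demo layout: a serial Build stage followed by a Test
// stage whose Chrome and Firefox branches run in parallel.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'quick build'
            }
        }
        stage('Test') {
            parallel {
                stage('Chrome tests') {
                    steps {
                        sh './run-tests.sh chrome'   // hypothetical test script
                    }
                }
                stage('Firefox tests') {
                    steps {
                        sh './run-tests.sh firefox'  // hypothetical test script
                    }
                }
            }
        }
    }
}
```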
So that's more or less how that works. The New button actually gives you a very basic pipeline with a single parallel step and a couple of serial steps. So that's more or less the state of the editor; let me switch back over to the slides. All right, so like I said, right now there's basic pipeline functionality. The step editors are dynamically created based on the metadata of the steps you're looking at, much the same way the steps are validated by the declarative validator, and most things just work. There are a few data types that some steps require that aren't supported yet, and you'll see a message in the editor about that. And as I said, there's the ability to make custom editors where it makes sense, like having a big script editor, and there are probably a few of those we'll want to go and tweak in the future. A quick example of that: this came up, I think, last week, when somebody sent a message on the chat about the withMaven step, asking what we need to do to get it working in declarative and in the editor. So I just loaded it up, and it looks like everything's just fine; it gives you all the properties and whatnot. This is one that might be nice to clean up, but the fact that it just works out of the box is pretty useful. So what's next for the editor? Of course, this is all subject to change; these are some of the things we've discussed as plans. Up next we're looking at round-tripping with the multibranch tooling, adding run and replay, and of course supporting more declarative features.
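The withMaven step that came up in the chat is provided by the Pipeline Maven Integration plugin. A minimal hedged sketch of using it inside a declarative stage; the tool name `M3` is an assumption and must match a Maven installation configured on your Jenkins instance:

```groovy
// Using the withMaven wrapper step inside a declarative stage. Steps
// nested in the block run with the named Maven tool on the PATH.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                withMaven(maven: 'M3') {      // 'M3' is a hypothetical tool name
                    sh 'mvn clean verify'
                }
            }
        }
    }
}
```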
We're also adding a beginner's tour because, again, what we really want the editor to be is a way for anybody at all to get started with pipeline, whether you're a beginner, an intermediate user, or an advanced user who just prefers to do things that way. That's what we're really looking to do. And of course, right now the preview doesn't support rearranging things, and we'll get that fixed up. As for round-tripping, to describe what that is: we're looking at taking existing pipeline jobs and being able to edit them, or, if you've got an SCM somewhere, say a Git repository that doesn't have a pipeline job in it at all, being able to go through the editor, create one, and save it back. We want to make it very easy for anybody who's got a Git project, or a GitHub project especially, to save things back to the SCM. Now, just to give a little context about how the preview compares with how things are ultimately going to be: right now, like I said, the preview has this Load/Save feature where you get the pipeline and you sort of copy and paste, because some of the other pieces are still waiting to be fleshed out all the way, especially saving back to SCMs. But ultimately this is tied directly to what we call the creation flow. So what happens, in an ideal world, is somebody comes in and selects "I use Git to store my code," chooses that, and then Jenkins detects whether there's a Jenkinsfile or not, and they might want to create one.
And so then the flow is going to change how the preview works to be tightly coupled with creation, and also with editing and looking at branches and things like that, where you want to go and change things. It really ties the usage of the editor together with your projects. Just a really quick screenshot here: the creation flow is being worked on, so, like I said, you select where your code is, then ultimately create or edit, and you have the option to use the editor there. Another thing that's high on the list of things we'd like to see is getting run and replay done. This is something that already exists for Jenkinsfiles: you can go to the run history, click the replay button, and you get at least a text editor for the Jenkinsfile. But we do want to bring that to the editor, because for people who want to be using it, it makes a lot of sense to be able to just click a button and say, okay, go ahead and try this one out; when you're developing these Jenkinsfiles, you want to be able to do quick iterations. Still upcoming are more features. We're looking at a better experience for selecting things, for example, Docker images. Right now there's a text box you can just type into, and it's pretty easy to know what you need, especially since you can go search the Docker registry, or maybe you've got a local one. But we'd like to make that even nicer and try to see, oh, do you have a Docker registry configured somewhere that we should be looking at, and give you some completion on it or something like that.
Also, when you're editing steps, we'd like to make environment variables available, at least the ones we can determine from Jenkins itself, again to make it easier to do the right thing the first time. We're going to make input steps a bit better, and make credentials first-class citizens in ways that make sense as well, I think. And then, of course, we're going to add a beginner's tour: the kind of thing you've probably seen at countless other websites, something that walks through what all the features are and walks somebody through creating a simple pipeline, so they can get an idea of exactly what they should be doing. And then, in a little more detail, here's what's coming up next. These are probably the shorter-term things we're looking at getting done, and most of this is to align with declarative itself. Post steps are a no-brainer: they're necessary, so we're probably going to have something a bit like what the step editor is today. This is something we're still working on: the design for exactly how we want to present that, how it's going to work best, and what will be easiest to use. Of course, we're going to have tool configuration. This is one of those things that's actually pretty helpful for users, because, going back to Andrew's example, there wasn't a Maven 3.3 tool available. Tools are basically just whatever you have configured in your Jenkins instance, and we can give somebody an easy selector to say, hey, these are the tools you can use, and again try to avoid errors up front. We're going to add per-stage configuration. Now, there's one caveat here, which is that stages are currently only at the top level.
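The post steps, tool selection, and per-stage configuration being described all map onto existing declarative sections. A minimal sketch, assuming a Maven installation named 'Maven 3.3' is configured under Global Tool Configuration; the environment value and report path are illustrative:

```groovy
// Sketch of the features discussed: a tools block selecting a configured
// Maven installation, per-stage environment configuration, and post steps.
pipeline {
    agent any
    tools {
        maven 'Maven 3.3'   // must match a tool configured in this Jenkins instance
    }
    stages {
        stage('Build') {
            // per-stage configuration: a stage can also declare its own
            // agent and environment blocks
            environment {
                MAVEN_OPTS = '-Xmx512m'   // hypothetical value
            }
            steps {
                sh 'mvn -B clean install'
            }
        }
    }
    post {
        always {
            junit '**/target/surefire-reports/*.xml'
        }
    }
}
```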
So basically, in addition to the global configuration, where you can select the agent, the environment, and things like that, you'll be able to, just like Andrew showed, select a top-level stage in the editor and configure an agent there, an environment there, and the other things that make sense, post steps as well. And we're going to look to have more alignment with declarative: configuring run retention, having parameters, having a nice editor for parameters. That's one of the things that's sometimes a bit tricky, so again, we're trying to do things one step at a time and do them the right way, so that this is a pleasure to use for everybody who wants it. Adding support for the when condition within stages is obviously another thing we need to do as well. And, of course, drag and drop between steps and stages, or maybe something else, just because those things sometimes tend to be a little odd in web UIs, or UIs in general. Whatever we decide makes the most sense, we'll do our best to get it working. And so that's more or less all I have today, but if you'd like more info, the preview of the editor I showed you is currently available in the Update Center. It's the Blue Ocean Pipeline Editor; you can install that, but make sure you have Blue Ocean b23 installed, because it's tightly coupled with the various pieces there. And there's the GitHub URL right there you can take a look at if you're interested as well. All right, so like I said, that's pretty much it, but any questions? Hey Keith, can you help explain what round-tripping means? Sure, yes, so maybe I should go back here to round-tripping. Round-tripping, like I said, is basically editing existing pipeline jobs or creating one from scratch.
So, in an ideal world, what we would like the experience to be, for everybody who wants this, at least, new users especially, people who want the easiest way to get started, round-tripping is a key part of it. What that means is essentially you go through a flow, the creation flow, and it prompts you: where do you store your code? You might be storing code on GitHub, for example, so if you are, you pick GitHub, maybe say "I want to build one Git repository," and select that repository. At that point, Blue Ocean would look and see: okay, is there a Jenkinsfile? If yes, it could start building it. But the thing is, over time you might need to change that Jenkinsfile, or you might not have one to begin with. In the case you didn't have one to begin with, you could still point at the repository, which right now just gets ignored by Jenkins, and say "I want to create a Jenkinsfile." When you do that, you could use the visual editor to go and create your Jenkinsfile, maybe even running through some different tests and things, and then, when you're happy with it, save it back to the source control system. This is one of the key things to do, because it eliminates the need to go and edit some things and then copy, paste, and edit a different file somewhere else; if you're using the visual editor especially, it's all just right there in Jenkins. The round trip really is just taking a Jenkinsfile, or making a new Jenkinsfile, putting it into the editor, and then being able to save it back into the SCM, where it's going to start controlling your builds and you're going to have all the benefits of multibranch going on. So hopefully that answers the question. Thanks, Keith. Most of the questions have been answered offline, so I think we can just go ahead and close this online jam. Great.
Thanks, everyone, for joining today. I encourage everyone to visit jenkins.io, go to the documentation section, and read the blogs; Tyler and Liam are doing a lot of updates there to provide more information. Also, feel free to send your questions in to Jenkins, and hopefully we'll get answers to you promptly. Thanks again for joining, and please feel free to share the YouTube recording of this with anybody else in your company. Thanks, and have a great week.