Like, for instance, let's take Apache HTTP client. It's very commonly used across multiple plugins. So if a plugin that I'm using depends on that particular library, can I upgrade it just for my plugin, or does it have an impact on the remaining plugins? How? That's a very good question. So I think what you're asking is, what is the downstream impact if my plugin declares that it needs a newer version of one of its dependencies? Is that a fair way to say what you're asking? Yeah. Okay, so let's take the specific example of the Git plugin. If it says it needs, it will require at least a certain version of promoted builds, then what that means is when users install the new version of the Git plugin, if they don't have at least that minimum version of promoted builds, they will have to upgrade to it, and Jenkins will offer the upgrade automatically to get that new version. So if I mandate a newer version of a dependency, then when someone installs my new release with that mandatory newer version, they must have at least that version installed. They can have a newer version than that, but they must have at least that version. Did that help? Yep. Okay, so for me, that highlights one of the real benefits of the Bill of Materials. Before I implemented the Bill of Materials in the Git plugin, for instance, I was terrified to update dependency version numbers because I worried that I was going to break someone. I worried that I was going to force them to upgrade to a newer version of a plugin. And I spent an unjustified amount of time worrying, oh, do I dare increase the dependency version number of, let's see, my examples were of the matrix plugin. Will I break someone because they depend on a very old version? Well, with my transition to the Bill of Materials, the decision is no longer in my hands. It's whatever's in the Bill of Materials.
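To make the Bill of Materials idea concrete, here is a minimal sketch of what importing it looks like in a plugin's pom.xml. The `io.jenkins.tools.bom` coordinates and the `bom-2.289.x` artifact line follow the public jenkinsci/bom repository, but the line and release you import depend on your own plugin's Jenkins baseline, so treat the specifics as illustrative.

```xml
<!-- Sketch: importing the Jenkins plugin BOM in a plugin's pom.xml.
     Pick the BOM line matching your plugin's Jenkins baseline and the
     latest release of that line from the jenkinsci/bom repository. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.jenkins.tools.bom</groupId>
      <artifactId>bom-2.289.x</artifactId>
      <version>LATEST_BOM_RELEASE</version> <!-- placeholder -->
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <!-- No <version> element: the version comes from the BOM,
       so the maintainer no longer chooses it by hand. -->
  <dependency>
    <groupId>org.jenkins-ci.plugins</groupId>
    <artifactId>git</artifactId>
  </dependency>
</dependencies>
```

The point of the pattern is exactly what is said above: the individual version decision moves out of each plugin's pom and into the shared BOM.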
And what's happened is that's promoted the decision amongst many, many plugin developers that they will all bias towards depending on recent versions of plugins. So all of a sudden, the entire Jenkins community is getting the benefit of more frequent upgrades to plugins and getting more people on largely the same versions because of this Bill of Materials change. So for me, the Bill of Materials has been not just helpful to me as a developer, it's been helpful to the users, because more of them are tending to be on similar version numbers. So did that address the question, Marcel? Yeah, perfectly. Super, thank you. Other questions? Maybe I can share my experience, just briefly. Yesterday, I reverted my Jenkins installation to the previous LTS version. And about 20 plugins didn't match the version of the previous LTS release. So it wasn't possible to step back one LTS release. Maybe it will help someone in the future. Were you going from 2.289 back to 2.277? Okay, so, that has certainly been a big transition. Now, do you track, or have you attempted to track, your configuration with configuration as code? No, it's painful. I would really like to use configuration as code, but we deployed Jenkins the old way, not in a container, but installed in the file system as a war file. And all the dependencies lie in the file system in Jenkins home. And there is no way to, you know, just say, okay, with this new version comes this version of a plugin, because they are already there. Right, right. And it is almost impossible to apply configuration as code in such a manner. In the container world, I think it is easier, because you start a new container. There's such a great feature that you can install all the needed plugins for this particular Jenkins version inside the container and run it, but not if you install the old way, the file system way. And this is a problem right now for us. So I was living in exactly that kind of a world and I agree with you wholeheartedly.
I installed from a war file, I was using in my case the Debian package, right? So I installed the Debian package onto my Debian or my Ubuntu, but I was able to find a path that let me eventually move towards that. And I wonder if it might be worth your considering one of the things that, okay, I don't wanna disturb my production instance, right? We're gonna continue managing it exactly as before. But what I found was that if I took a copy of that instance and attempted to build myself a preview of it or a prototype of it, in my case, I built it as a Docker image. So I ended up taking with rsync a copy of the plugins directory and the config.xml files for the jobs and various other config files, and then one little file at a time, one piece at a time, my separate copy got a little bit of configuration as code in one segment and a little bit of configuration as code in another. And I spent months of slow progress getting there, but those months of progress ultimately ended with me being able to confidently replace my war-based slash deb-file-based installation with one that runs from a Docker image on the same machine, because I eventually got them synchronized well enough. So now, if you say, oh, I don't need or want a test environment, then my technique may not help you. But if you're interested in it, I can paste into the chat a link to the thing that I use. So maybe my tooling would give you a little pointer. What we do is exactly what you have described. We use rsync to copy the whole Jenkins home directory to a new virtual machine. And we start a newer version of Jenkins and start the migration on this test machine. And if it goes right, then we do the same thing on the production machine. But you brought me to the idea that we should not copy the whole Jenkins home. Maybe we should copy it in smaller pieces, not everything but parts of it. At least for me, it was a positive experience to incrementally start from wherever I was, right?
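The "smaller pieces" idea discussed here, copying only selected parts of JENKINS_HOME into a staging copy so each part can move under configuration as code on its own schedule, can be sketched as a small script. This is my own illustration, not the tooling Mark links in the chat; the piece list is illustrative, not exhaustive.

```python
import shutil
from pathlib import Path

# Illustrative subset of JENKINS_HOME worth staging; a real migration
# would grow this list one piece at a time.
PIECES = ["config.xml", "plugins", "jobs"]

def copy_pieces(jenkins_home, staging, pieces=PIECES):
    """Copy selected files/directories from jenkins_home into staging.

    Returns the list of piece names that were actually found and copied,
    so you can see which parts of the installation you have captured.
    """
    src, dst = Path(jenkins_home), Path(staging)
    copied = []
    for name in pieces:
        item = src / name
        if not item.exists():
            continue  # not every installation has every piece
        target = dst / name
        if item.is_dir():
            shutil.copytree(item, target, dirs_exist_ok=True)
        else:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)
        copied.append(name)
    return copied
```

In practice you would point `staging` at a Git working tree, so every incremental copy becomes a reviewable commit, which is the history-of-mistakes benefit described below.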
With everything managed the way I was used to managing Jenkins, with everything done from the user interface, and I clicked through web pages to make every change. But then using rsync to copy those contents into a repository was a great help. It turned out very, very useful to me and it's made my development better. All right, the embarrassing thing here is it's not just made my Jenkins installation better, it's made my experience as a developer of Jenkins components better, because I can now easily drop my test code into something that looks very, very much like my production environment. Yeah. So if you're interested in the technique, the incremental move towards configuration as code, I'm gonna put this into the chat. And now this is truly evolutionary embarrassment publicly displayed, all right? And if you look at the history of this particular repository, you will realize that Mark Waite hangs his head in shame at some of the mistakes I've made in going through these evolutionary transitions, right? It's, oh, wow, that was foolish. Well, but that works. Oh, but that was bad. So you're welcome to that. In my case, the concepts of incremental improvement were so valuable that my mistakes just didn't matter, right? It turns out that incremental improvement made it so that the end result is better even if I made a bunch of mistakes on the way. Yeah, we learn from our mistakes. Right. Yeah, cool, thank you very much. All right. It surely will help us, or me. Well, and for me, it's a reminder of the other thing about that experience: having a readily available thing that I can stand up that looks really close to production lets me very rapidly get into interactive testing of some change I'm making.
So we were just doing a Google Summer of Code project, and that Google Summer of Code project made an important change to the Git Client plugin, and I needed to test it, and within minutes I had that thing deployed into this environment that has thousands of jobs and has interesting configurations and is known to have problems in places that most people don't have. So for me it was worth it, but it's been an investment, right? I mean, doing the incremental transition from old to new has taken time. Yeah, sure. All right, so looking forward to it. Great, we've talked about development and transitions for plugins. Are there other things around plugin transitions? Let's see, Jonathan, for instance, on yours, you mentioned that you've had to upgrade and find your way through how to bring an old plugin to be current, and the plugins that are up for adoption commonly have that exact problem. They need to be updated and you've got to explore, okay, can I update it to depend on a modern Jenkins version, and what will that do to the code? Okay. Yeah, thank you. Yeah, I think I'm going to use Marcel's idea and take a look at some of the plugins that my instance uses and see if some of those are in the up-for-adoption state. And I like that very, very much. That's a great way to contribute. I had a good conversation the other day with someone who worried, hey, but I can't give 40 hours a week to this kind of thing. And I think that plugin adoption is a place where you could give 30 minutes or an hour a week and do significant work, realizing that if a plugin has been placed up for adoption, right now that means there aren't people working on it. So any time you give to it is time that is a net benefit. Now, if you'd like, we could look at Jenkins Pipeline. Do any of you have pipeline experience? Have you made the transition to pipeline, or is much of what you're doing related to freestyle? Me, I'm old fashioned. My stuff is all still freestyle, but. Okay. I have a little combination.
I'm using pipeline, but also I'm using a shared library. So I'm provisioning common functions there, through Groovy mostly, and then I make use of those from my pipelines. So, okay, so you're doing what I would call a relatively advanced thing: you're using pipeline shared libraries so that you can have simple expressions of pipeline in most pipeline files that call something a little more complex in the library behind it. Right, yeah. Also, yeah, since we have everything on top of Kubernetes, we are using configuration as code and all of that. So we at least avoid that issue in which we have to upgrade or downgrade, not that we have needed to downgrade, but yeah, that's something that you should look at for sure, Valentin. Yeah, that's made life easier. And now in your Kubernetes environment, are you managing things there with Helm files or with YAML directly? Or are you using the Jenkins operator? What's been your preferred way of deploying all the way to Kubernetes? No, so when I started to implement this on Kubernetes, I thought that the Jenkins operator was not mature enough. So I went with the Helm chart. At that time, it was hosted on the Kubernetes repo, but now it is, as we know, on the jenkins.io Helm charts repo. And that made things easier as well, because it is just a matter of changing a value in the YAML file, sorry, a value for the Helm chart, and then the whole thing rolls over, pretty easy. Yeah, Helm sounds like a better approach than just using raw YAML files. So now in your roll-forward and roll-back experience, have you found that there were unexpected barriers, things that you could share with us to say, hey, it would be better if we did this, or our experience would have been better had this been in place instead?
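The Helm-based roll-forward described here can be sketched in a few commands. The chart does live at charts.jenkins.io these days; the release name `my-jenkins` and the values file name are illustrative.

```shell
# Sketch: rolling a Helm-managed Jenkins forward. Release and file
# names are placeholders; check the chart's README for current values.
helm repo add jenkins https://charts.jenkins.io
helm repo update

# Edit values.yaml (for example the controller image tag or the
# plugin list), then let Helm roll the change out:
helm upgrade my-jenkins jenkins/jenkins -f values.yaml

# And rolling back is the part raw YAML does not give you for free:
helm rollback my-jenkins
```

This is what makes the "change a value and the whole thing rolls over" workflow possible, and it is also why a staging environment (discussed just below) is worth having before the upgrade reaches production.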
So one of the things that I have been thinking is that I need to improve the way that I am rolling out upgrades, because I have found in the past that I am just upgrading the LTS version, right, in the Helm chart value. And then there is a dependency of that on plugins, on specific versions. And then, since I failed to read the migration process, my upgrade fails. So yeah, I have been thinking of having like a test environment just to see how the upgrade goes. And then if everything works fine, I could be moving that upgrade to production. And I've heard from a number of people who use that kind of staging technique that you're describing, who evaluate something in staging. I think it aligns with the way Valentin was describing they're doing theirs. Your Kubernetes technique doesn't use rsync; I assume you're going from code. Now, in your image definition, one of the things that we learned, I guess it was six or nine months ago, was that many users are making the suboptimal choice of declaring their plugin versions in the definition of their image, but not installing the plugin versions. So when their new Jenkins installation starts, they were downloading the plugins at startup time. And that's both expensive and slow to process and a risk to you, because then if the Jenkins update site is down, your Jenkins is stopped waiting for it to upgrade. Do you know, are you using a separate Docker image to define your Jenkins with plugins, or how are you defining them? Okay, so you've got... That's another process that I need to improve. So yeah, I guess that the recommended way, I think that I read it on the wiki or somewhere else, is that I should build my own Docker image with the plugins that I need. Yeah, in terms of provisioning time, it will reduce the provisioning time. I found that it's a little risky to just install on the fly. Even if you are not upgrading, if the pod just gets killed, you will be installing all those plugins again. Oh, right, I had not even thought of that.
The reality is that on a pod restart, you get a reinstall of those plugin versions, and that's expensive, because during a pod restart you want that thing back as quickly as you can. Yeah, although I should mention that the startup, the bootstrap process, I will say that it doesn't take more than five minutes, to be honest. Okay, so it's... But yeah, there are occasions in which a plugin could fail on that, because instead of using a specific version, which I did a long time ago, I used the latest version of the plugin, declared in the configuration as code. And then what happened is the controller was restarted, and on the bootstrap process the plugin was installed with a different version, because instead of a fixed version I have the latest. Right, right. So now you got an implicit upgrade even though you hadn't expected to get an upgrade at that moment. Yeah, yeah. So yeah, having the Docker image with all the plugins, that's for sure one of the things that I want to do, for the reasons that we have said. May I ask a question? So if you have a new image with the newest plugins, for example, six weeks pass and there is a new Jenkins LTS version available, you build a new image with the newest plugins and you want to bring it into your production or test environment. What you do is you have a volume somewhere on the file system bound to your image. And there is plugin information there for older versions of these plugins. Are you deleting this directory, or how do you approach this update scenario? So currently I am not doing anything in regard to the volume. Usually I believe that mostly it is being taken care of by Jenkins itself. But what I've realized is that on some occasions when you upgrade Jenkins, you can get a warning saying, you have old or legacy configuration related to those plugins, do you want to keep it or delete it? That's why I believe Jenkins is taking care of those, so you don't need to clean up.
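Baking the plugins into the image, as recommended here, can be sketched with a short Dockerfile. This assumes a `plugins.txt` of exact `name:version` pairs alongside the Dockerfile; the base image tag is an example, and `jenkins-plugin-cli` is the plugin installation manager CLI shipped in the official `jenkins/jenkins` images.

```dockerfile
# Sketch: bake pinned plugin versions into a custom controller image
# instead of downloading them at pod startup.
FROM jenkins/jenkins:2.289.3-lts

# plugins.txt holds exact "name:version" pairs, e.g.
#   git:4.8.0
#   configuration-as-code:1.51
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt
```

With this, a pod restart starts from the plugins already in the image: no download at bootstrap, no dependence on the update site being up, and no surprise implicit upgrades.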
I guess that Mark, you have more information about that. But yeah, that's what I have seen. I have an experience related to that configuration. Yeah, there's one more problem with it. If you deleted a plugin from plugins.txt and built a new image, and you start Jenkins with this new image and a bound volume, you still have the old plugin installed in your Jenkins environment. Right. And you cannot get rid of it, because it is in your file system that's bound to your image. Well, and so that was why, at least from my usage, I've preferred to have the Jenkins, the base Docker image, include the plugins in the image, and it's not a separate volume, right? It's absolutely just part of it, because what I wanted was the ability to know that the thing I described, I could go back in time if I had to and build it again. Now, this for me is actually a relatively recent thing, because I spent the longest time doing exactly what Marcel was doing of using latest. I always want the latest plugins and so I wanna stay with latest. It turns out that this new tool called the plugin installation manager has some automation inside of it that will help me maintain the list of, okay, I was lazy. I didn't wanna maintain the list of exact plugin version numbers manually. That was just, I've got 150 plugins, and tracking those version numbers was just unacceptable. I couldn't imagine tracking those numbers, but the plugin installation manager tool is this Java program that will generate the exact list of plugin name and version pairs and write it to the file for me. And so what I've got is this ability to say, run one command that says, tell me the current version numbers, write it to a plugins.txt file as exact version numbers, and another command that says, now go download exactly those versions from the update center.
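The two-command workflow just described might look roughly like the following. These flags are from my reading of the plugin installation manager tool's help output, and the jar name and file names are placeholders; check `--help` on the version you download.

```shell
# Sketch: maintaining an exact-version plugins.txt with the plugin
# installation manager tool. Flag spellings may differ by release.

# 1. Ask for the newest versions of everything currently listed that
#    are compatible with the Jenkins version being targeted, and
#    capture the result as exact name:version pairs.
java -jar jenkins-plugin-manager.jar \
     --jenkins-version 2.289.3 \
     --plugin-file plugins.txt \
     --available-updates --output txt > plugins-new.txt

# 2. Download exactly those versions from the update center.
java -jar jenkins-plugin-manager.jar \
     --plugin-file plugins-new.txt \
     --plugin-download-directory ./plugins
```

The generated plugins-new.txt then replaces plugins.txt in version control, so the upgrade is an explicit, reviewable diff instead of a silent "latest".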
So I was using the exact technique Marcel described of latest for the longest time, for years, and it worked just fine, but it meant, just the way Marcel described it, sometimes I would get silent upgrades of my plugins, and I hadn't thought about that and didn't know how to go back if I wanted to go back. So, but for me, the magic there was the plugin installation manager tool, and I found the set of arguments to pass to that thing to use it so that it would maintain the file for me, because otherwise I would never have tolerated maintaining all those version numbers. I'd have never kept up to date. Yeah, it's almost impossible with 150 plugins. Now there is a different technique, and it's the technique that the Jenkins infrastructure team uses. The Jenkins infrastructure team uses a dependabot configuration that will watch the Jenkins update center and propose pull requests to their plugins.txt file for new versions. And so they're still tracking exact version numbers, but if you're interested in that, I could probably paste you a link to that one if you say, oh, I wanna use dependabot to track these things. That was an interesting technique that the infra team found. If I understand this correctly, dependabot works only with GitHub. That's correct, yeah. So if you're locally hosting, or using Gitea, or using Bitbucket, then it won't help. Okay. Pity for us. Well, but that's where the plugin installation manager tool will work for you. And work just fine. Maybe you can post a link about the plugin installation tool. You bet. Okay, cool. Yeah, so let me... The only tool I know is the Jenkins CLI jar. Sorry, go ahead, Marcel, what was that? It would be interesting to see how the infra team is using dependabot for that. Yeah, that's very interesting. Super. And to be honest, I am struggling with what you just mentioned. I need to go over every plugin version when I am upgrading. I think that there was a major upgrade on the LTS version some months ago.
So I went through all of those. And also, especially because I upgraded from JDK 8 to 11, that was another milestone on that LTS, yes, I believe. But yeah, I have to track all the individual versions for plugins. Okay, so you're headed in that direction. You see that coming. Yeah. Okay, well, so let me... I'm gonna paste a link to this thing that I use to get available updates. And yes, so here we go. So in the chat session, whoops, where did I put it? Here is the Python code. Okay, so I'm a Python scripter. So a Python script that calls the plugin installation manager tool to maintain the precise plugins.txt file. So there's that one. And then let me get the Jenkins infra reference, because I get those pull requests all the time upgrading LTS plugins. Upgrade LTS. See, maybe it's called plugin upgrade. Sorry, I'm having to look, have to look in trash. I may have thrown them away, just a minute. Maybe the word is update. English is too fluid. It allows too many synonyms. Okay, I will have to take a separate action item. I'm not finding it in my search and I know I get them all the time. Plugin update. Jenkins, infra, plugin update. I am so sorry, I'm not finding it. I know that I get this message all the time. And so let me take an action item to gather that. And if you're willing, actually, let me paste my email address. If you're willing to send your email address to mark.earl.waite at gmail.com, I'll share the link to the Jenkins infra repo. There we go. And I am feeling awkward now, because I should be able to just find it. Let me look for it with a slightly different technique. It always happens when you're trying to do a demo live. Right. That's the price I pay. So I think it may have the word Docker in it. Ah, yes, there it is. Okay, good. I found it. Okay, so if we look at this one with Docker. Yes, I found it. Oh, I'm so proud. Okay, good. See, the Jenkins infra LTS upgrade process. It's this thing.
And I'm going to go ahead and share my screen and let's take a look at it, just so you can get a sense of how it operates. So here is the, you should see the Docker Jenkins LTS repository now. Do you see that? I do. I do. And in the .github directory, here is... So in the .github directory we have three workflows enabled. So the dependabot config is here, which runs with GitHub Actions. And then in the workflows, we've got update.yaml, which does this operation. Let's see. So it generates the token and then it runs, on the update-plugins branch, this operation here. Oh, no, here it is. It's this one. It's this jenkins-infra UC. This tool is a thing that looks at plugin lists and generates an update to them. And the result is written as one of these pull requests. So here it says chore(dependencies): update plugins, and what happened is it's proposing a change from warnings-ng 9.2.0 to 9.2.1 and from workflow-cps-global-lib 2.20 to 2.21. And a human being didn't have to do it. It just did this on its own. This is gold for my eyes. Great, great. That's excellent. Thank you so much. Well, and for me it was, this is work that Gareth Evans did as part of Jenkins infrastructure and it's been absolutely wonderful for us. It helps us maintain things better and keeps us up to date. Mark, one more question. Does it update each plugin to the latest version or some particular version? So the thing it's proposing is the most recent version, but it's a little more sophisticated than that, because it's proposing the most recent version that is supported with that Jenkins version. Okay. So for example, it is perfectly legal for a Jenkins plugin to declare that it requires as its minimum version Jenkins 2.299. And when I try to install that from the latest LTS, which is 2.289, it will correctly say, no, you can't have that because it's too new.
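The compatibility filtering described here, offering only the most recent plugin release whose required Jenkins core is satisfied by the target version, can be sketched in a few lines. This is an illustration of the rule, not the actual implementation in the uc tool; real Jenkins version strings can be more elaborate than simple dotted numbers.

```python
def parse_version(v):
    # "2.289.1" -> (2, 289, 1); good enough for a sketch, though real
    # Jenkins and plugin version strings can carry extra suffixes.
    return tuple(int(part) for part in v.split("."))

def latest_compatible(releases, jenkins_version):
    """Pick the newest plugin release supported by the target core.

    releases maps a plugin version string to the minimum Jenkins core
    version that release declares it requires.
    """
    target = parse_version(jenkins_version)
    candidates = [v for v, core in releases.items()
                  if parse_version(core) <= target]
    if not candidates:
        return None  # every release needs a newer Jenkins than ours
    return max(candidates, key=parse_version)
```

So for a controller on 2.289.1, a release requiring core 2.299 is filtered out exactly as described in the example above, and an older release that still fits is proposed instead.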
It requires a much newer Jenkins version, and this tool will not offer that update because it's not supported with the Jenkins version that it is trying to match. Okay. Yeah, so very good question. Very, very good. So there's intelligence behind it. There is, and that same intelligence exists in that plugin installation manager tool that I was describing earlier. Right, it does the same thing. In order for it to make a recommendation, you must tell it which Jenkins version you're targeting, and it will then use that information to provide the list for you. Cool, cool, very cool. Yeah. In your case, Valentin, if you are not using GitHub, you should be able to look at the implementation. Yeah. Right. Of the GitHub action that Mark shared. Yes. This is implemented using Docker. Okay. It's a Go application, actually. So you should be able to use this implementation on your side as well, regardless of whether you are using GitHub or not. Okay, okay. I will be looking into it. Well, and Marcel makes a very good point that most of these things are done exactly that way, right? Where this, let me share that screen again just to show, because it's good to see. So if we look at this update-plugins pull request and we go look at the repository, in the .github workflows there's this entry that says jenkins-infra slash uc. Well, guess what? That, as Marcel correctly noted, is just a Docker image. And if you say, oh well, where is that coming from? Well, guess what? It comes from a repository in jenkins-infra called uc. Yeah. This is what Marcel posted into the chat. Oh, oh, very good. Okay, great. Yes, so there it is. And this, yeah, it's perfect. Thank you. Oh, yes, there it is. Thank you very much. Thank you, Marcel. Cool. Yeah, that's how I learn things. On occasion I look at the GitHub Actions that people are using. Not that I'm using GitHub Actions. We use Jenkins, right?
But yeah, on occasion I learn how people are doing certain things, and if it doesn't fit my infrastructure, I just adapt it. Excellent. Very, very good. Thank you. Thank you very much. Yeah, thank you. Are there other topics that we would like to touch on, or other things that come to mind as a question? I think I'd like to add something. So, yes. So as newcomer contributors, some people would not know how to go and check out all the available Gitter channels that Jenkins has. So I think if I paste the link for the jenkinsci Gitter community in the chat, people can look at it, obviously those who are present in the meeting right now. So they can look at it and join the available Gitter channels that they'd like to be in. That's excellent. Let me, I'm gonna go ahead and share the screen so we get a screenshot of it. So what this shows us is, Dheeraj, maybe you can describe what we're seeing here. Exactly. So these are the Gitter chat channels, right? That are available to Jenkins users. So if you've got a question about configuration as code, there's the channel. If you've got a question about, let's see, what's a, oh, we've got lots more to go through. Let's look for the plugin installation manager tool. Yep, there it is. Or maybe you've got questions about the Jenkins Git plugin. Here's a group that focuses on that. And it looks like there are many, many, many chat channels, all focused on different parts of using Jenkins. Thanks, Dheeraj. Good suggestion. Let me stop sharing. There we are. So I'd like to share one very small experience that I think might help someone who's in the same position as me, being new to contributing. So when I was learning about the configuration as code plugin, I was very, I still am very interested in it, because it works like magic to me, so it's really cool. So I came across one technique: in order to configure the plugins using the JCasC plugin, we need to know its YAML syntax, right?
So not everyone knows how to write the correct YAML syntax for configuring a particular plugin. So what they can do is go to their own Jenkins instance and configure it there, then come back to the YAML file and copy-paste the code. So I know it will not make sense to someone who's new to this. I'm trying to guess that anyone on the Gitter channel would know that, and I assume that people would know this very easily. So the point I'm trying to make here is that there's always something that you can contribute from your end. And if it's not going to help everyone, it's going to help someone for sure. So you need to volunteer and bring the idea to the Gitter channel. And we can discuss more on that, and we can put it together and publish it if it helps anyone. So contributions from that area are welcome as well. Excellent. Well, and Dheeraj, you did a great blog post actually, and a video, on that configuration as code experience, right? And that blog post and that video were good highlights that, hey, the experience can be much simpler, much easier if you use these techniques. Exactly, and I will boast a little by saying it has more than 400 views now. Congratulations, that's great. So your video, your video's being seen. That's very good, excellent. Yes, it feels good to know that people are finding it helpful. Now, I believe several of you had noted that you're on the way to pipeline. You've got a mix of freestyle jobs and pipeline jobs. If you're okay with it, I'd be happy to do a brief demo on some things that I think you ought to be aware of as pipeline capabilities, so that you don't miss these capabilities as you're considering, should I try pipeline?
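The round-trip technique Dheeraj describes, configure the setting in the UI first and then export the generated YAML rather than writing it from scratch, produces output along these lines. The exact keys come from the plugins installed on a given controller, so treat the values below as illustrative.

```yaml
# Sketch: the kind of YAML the JCasC export produces. Configure a
# setting in the UI, export the configuration from the Configuration
# as Code management page, then copy the relevant block into your
# own jenkins.yaml. Values here are illustrative placeholders.
jenkins:
  systemMessage: "Configured as code"
  numExecutors: 2
unclassified:
  location:
    url: "https://jenkins.example.com/"
```

Because the export reflects the live configuration, you never have to guess a plugin's YAML schema: the instance writes it for you.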
Would you be okay watching a little demo, letting me go through and talk about pipeline a little bit and trying to sort of introduce the concepts, and then show some demonstration of what you can do with pipeline, and what many people may miss as capabilities that are available in pipeline that they didn't realize were there? Yeah, that'd be good. Okay, cool. So Marcel, any question from you or concern there? Yeah, I have to go now, sorry, I have another meeting, but I would like to ask, I believe someone mentioned there's gonna be a demo on this pipeline graph view plugin? Later today? I think that's later today; I believe the pipeline graph view plugin will be shown during the, oh dear, what's the title? During the Ignite talks and demos. Okay, that's an entry on the agenda, right? Yes, it is an entry on the agenda. It's scheduled to start in about, let's see, we're at 10:30 now, so it's scheduled to start in about 90 minutes. But Marcel, I can also paste a link for you to an existing demonstration of pipeline graph view that we have a video of. That way you could even look at the video separately if for some reason you couldn't come back to attend the Ignite demo. Okay, yeah. Let me see if I can find that really quickly here, because that should be pretty easy for me to find as a video; I posted it to colleagues inside my company some time ago. Pipeline graph view. Am I, Dheeraj, am I getting the name right? It is the pipeline graph view plugin, right? Yes, there you go. Pipeline graph view, yep. Right, okay, and now where is the video of it? Okay, so I've got to look for it a slightly different way: youtube.com, Jenkins playlist, because what it was was we had the author of the plugin present it in a brief demonstration. I'm getting feedback. Is anybody else getting feedback? Yeah. All right, so we'll continue. Yeah, sorry about that.
I don't mean to be echoing. Okay, so let me see if I can find that just really quickly, because it should be in the playlist. Yeah, we have, we are using it. I intended, I intended to use it because I wanted to use it for a long time. I intended to ask questions about some limitation. Yeah, some limitation: we just implemented a new pipeline and we found that the parallel execution of several stages doesn't show properly, even though in the classic view it is presented properly. Okay, so maybe this is a limitation or something, so I intended to ask. Perfect, if you're already a user of it, then this video I was going to link you to is no help. So about 90 minutes from now, join the Ignite session and we will look for the demo there, and you can ask your question there. Perfect, thank you so much. Thanks Marcel, thanks very much for joining us. Have a great day. Thank you. So I was going to go ahead and show some things relative to pipeline. Let's first give a brief, I guess the simplest way to say it is a brief look at some of the concepts around pipeline. And let me see if I can find my slide deck to share. Not that one, too many tabs. Ah, yes, here we go. Okay, so this was just a very beginning kind of thing. So the idea here is that Jenkins, whoops, let me copy the link in case you want the slides. There's a copy of them. These are in no way polished enough to be claimed to be perfect, but they're a beginning. So you're familiar, oh, present. With freestyle jobs, we configure them from the web browser. It's really easy to do. We store them inside Jenkins, which makes them easy to change, but it's not nearly as easy to see what's changed or why it was changed. And we don't get a lot of help from people; they just made a change and they went on. And it's strongly dependent on plugins. And if Jenkins stops while a job is running, the freestyle job stops as well. There's no way for it to continue running across a Jenkins restart.
Jenkins Pipeline jobs are configured from a source repo. You configure them as code, right inside your source repo, so the job definition is not embedded in Jenkins. Storing it in your source repo makes it easier to maintain, easier to see what's changed, and gives you comments from the people who made the changes. So it's using a pattern you're accustomed to, and it puts the burden of the work predominantly in your build scripts instead of requiring that you find a wide set of Jenkins plugins. Pipeline jobs are also able to continue running across a Jenkins restart. They're able to run in parallel, to run on multiple agents, and to run with multiple software configuration management systems in various interesting combinations. Very, very flexible and very capable. There are two domain-specific languages implemented in Pipeline: one is declarative and the other is scripted. Declarative is the second generation of pipeline language, if you will. It's intentionally simplified, intentionally designed to be managed, read, and implemented by people who may not be professional programmers. Scripted looks an awful lot like Groovy code; it's a DSL derived from Groovy. It is more difficult to read, but it's got a larger feature set. And for me, the cool thing is that these domain-specific languages are dynamic: the keywords used in the languages, the steps or tasks, are defined by the set of plugins that you have installed. That means the pipeline snippet generator and the directive generator can let you use exactly what's in your system. Now it's time to stop the slides and get to a real demo, because this is where I think it matters the most. So let's look at my Jenkins installation. Whoops, wrong one. My Jenkins installation here. Okay, this is a real Jenkins installation. It's Jenkins 2.289 pre-release. It's got 30 or 40 agents connected to it. Some of them are dynamic from the cloud. 
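To make the contrast between the two DSLs concrete, here is a minimal sketch of a one-stage job in each; these are illustrative fragments, not code from the demo.

```groovy
// Declarative: a fixed, structured vocabulary (pipeline, agent, stages, steps).
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building'
            }
        }
    }
}
```

```groovy
// Scripted: a Groovy-derived DSL, so ordinary Groovy control flow works.
node {
    stage('Build') {
        if (env.BRANCH_NAME == 'master') {
            echo 'Building master'
        } else {
            echo "Building ${env.BRANCH_NAME}"
        }
    }
}
```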
Others are here and there and everywhere. So a real Jenkins. When I want to define a new job and put Pipeline into a repository, I just click Open Blue Ocean and click New Pipeline. And now it asks me, where do I keep my code? Bitbucket, Bitbucket Server, GitHub Enterprise, GitHub, or just vanilla Git? And in my case, let's see, do you have a preference? I think I've got accounts on most of these. I don't have a GitHub Enterprise account, but the others we could use. Would you like to use GitHub? So, Valentin, let me ask you, which repository management system are you using? I'm using Bitbucket. Okay, good. So let's use Bitbucket Cloud. Okay, now I've got to get some usernames and passwords. So for this, I'm going to go to Bitbucket Cloud, and I may have to turn off screen sharing briefly if it prompts me to enter a password. Let's try continuing with, oh yeah, let's try this just to see if it's connected to my Atlassian account. Okay. Now I have to insert my security token. Oh, I'm so proud of these things, because I've got two-factor auth. Yes, okay, good. Here we go. All right, so now let's look at various repositories. Here's a repository. Now, this one already has a Jenkinsfile in it, so it's already defined. We can just use that one, and let's try that. Or, if we would like, I could create a new repository that doesn't have one. Let's see, well, let's take that one. So I need to give a username, and for this, I need to copy my password. Okay, now I'm going to temporarily suspend sharing in case it were to show this password visibly. So just a minute, stop share. Okay, and it did not show my password in plain text, so I can start sharing again. We've got a new visitor, Abhishek. Thanks, Abhishek, for joining us. 
So now I'm going to share my screen again, and let's look at, okay, so here, let's try this: connect. We'll see if I got the right password, et cetera. Connect. Hmm, I'm not seeing what I wanted there. I wonder if maybe I gave it a bad password. Let's check my account here, because it may be, Valentin, do you remember, does Bitbucket require that I use something other than my email address? Maybe I should check my profile, huh? I'm not sure, I think username and password is the thing that you need. Yeah, okay, so let's see what I've got as my username. We're going to try a different username, and if this doesn't work, we'll switch to use GitHub. Okay, so there it says invalid username or password. At least that's feedback. Yes, exactly, so we're going to switch. I'm going to use GitHub for this one, just for the moment. It's okay, yeah. This one, I know, already has my credentials in it. So here, if I use GitHub and say MarkEWaite, now it lets me choose one of the repositories, and so, for instance, I could choose the repository named bin, or I could choose some other one. Let's see, how about, let's go looking. Let's take bin for now. Bin, and create pipeline. And it's going to tell me on this one, oh, I've already got a pipeline, I'm going to start using it. But then we're going to use this exact same set of tools to edit that pipeline and make some changes to it. So it found not just my master branch but also two other branches, and ran work on those two other branches, and here, one of them's already finished, the other one's finished, and there we see it. This is all just part of Blue Ocean. And pipeline graph view, the one that Marcel was referencing, gives a similar view to this with a much lighter weight, still very, very new environment. So here we've got this; notice in the top right-hand corner there's this little pencil icon. I can click that pencil icon, and it puts me into an editor that lets me edit my pipeline. 
So here is the build step that I defined, and it has a message that says building. I'm going to say building the master branch, because I'm on the master branch, and I'm going to save. Now I'm going to go back here. In the test stage, it says print the message testing; I'm going to say testing the master branch, and it adds a warning badge. And then I've got a deploy step where it says, oh, let's save the artifacts. All of that I can do from this Jenkins Blue Ocean interface; it's that simple. If I said, oh, I want to go parallel, okay, I need a second test. And here we are going to add a step that is print the message second test step in parallel. So there's test, and maybe we should rename this one; instead of test, let's call it first test. All of this directly from the Blue Ocean interface. So here I am, defining my build pipeline in a graphical experience that lets me just add things where I need to, make comments, et cetera. Now I'm going to save it, and in this case the commit message is: add a parallel test, change the messages. And I could commit it to a new branch, or I could submit it right to master. Either is fine. Do you have a preference? Which would you like, master branch or a new branch? Master branch. Okay, master branch it is. We're going to save it and run it. So now what we will see is that if I look at the master branch, look, there's run number one. And now as I go, oh, there it is, build number two has started. If I look at number one, you see it's a linear flow: build, test, deploy. Now if I go back to number two, there's my build, first test, second test, and deploy. Now, ultimately, most users may not even actually look at these views, right? They may say, look, all I want to know is that the thing finished and published a result somewhere. 
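A Jenkinsfile roughly matching what the Blue Ocean editor produced here might look like the sketch below. The stage and message names follow the demo; the exact code the editor writes may differ, and the archive pattern in the deploy stage is an assumption.

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building the master branch'
            }
        }
        stage('Test') {
            parallel {
                stage('First Test') {
                    steps {
                        echo 'Testing the master branch'
                    }
                }
                stage('Second Test') {
                    steps {
                        echo 'Second test step in parallel'
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                // "save the artifacts" from the demo; the file pattern is hypothetical
                archiveArtifacts artifacts: '**/*.log', allowEmptyArchive: true
            }
        }
    }
}
```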
I don't care about the pretty view. But for me, I find it helpful to do my initial layout of what steps should be in my pipeline from this user interface, so that I don't have to worry about exact placement of braces, exactly where everything goes. Now, if we look at my repository here, we'll see this as code. So let's look, on my GitHub account, at the bin repository. And here are my commits: add a parallel test, change the messages. And there, it did all this changing for me without requiring that I go into a text editor and do it. So it feels as smooth and easy as a Jenkins freestyle job, and yet it's represented as code inside my repository. Cool. I have a question. Yes, go ahead. The question is, is it possible to run this configuration without pushing your changes to the Git server? Oh, that's a very good question, and the answer is yes. I would call that almost an advanced topic, but I'm going to go ahead and show it to you if that's okay. You might say, well, what if I wanted to just experiment with something and not commit to the repository? There's this replay facility that shows me, hey, here's run number two. And I realized I shouldn't call it master branch, because when I'm on a pull request, that won't be correct. So building, testing, second test, let's see, first test, and then deploying. Yeah, there we go. So I have changed it, and I'm going to say run. Now what we'll see is it's going to do the checkout. Okay, I have to admit the steps are all very, very fast, right? Because all they do is echo messages. But now if I open Blue Ocean and look at that run, let's look at the message here: building. It no longer says building the master branch. So my change was there, and yet I never pushed it to the repository. So this replay button gives me the ability to do dynamic tweaks, to do rapid explorations. 
Okay, but you make these tweaks in a text editor and not in the Blue Ocean interface anymore. That's correct, I think so. I've never tried to do a replay in Blue Ocean, but I don't think I can. So let me double-check that, because if I go back here, let's look at number two. And there is this rerun button. But when I click the rerun here in the top right-hand corner, my recollection is all that does is rerun the job exactly as it was defined. So I'll look at three, and here's three. And now, guess what? Four will be available very soon. And there is four, and four uses the definition from two. All right, so I clicked rerun on two, and it reran it as a new run, number four. So no, there is no way in Blue Ocean to do that interactive rework without saving to the repository. Okay, cool. It's new for me. Now, there's another thing that I really think of as, okay, so you remember we were here in replay. See this link at the bottom, this thing called pipeline syntax? That is magical. I'm gonna click that, and we're gonna look to see just how magical it is. The pipeline syntax link opens up a snippet generator where I decide which step I wanna try. Oh, I need to run a Windows batch command, and I would like it to say echo hello world, and then, because it's Windows, I want it to say dir /OD. And in the advanced settings, I would like the output to come back as UTF-8, and I wanna know the return value of this thing, so I want the exit status. So I have just, with the user interface, described what I want to do, right? Interactive clicking to describe what I want to do. I click this generate pipeline script button, and there it is. Now I can copy this, and I can go back over here to replay. And now I have to do some additional things, because I gotta be sure that I run on Windows. And now I'm going to insert in there that thing that I just generated. 
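For a step configured like that, the snippet generator emits something roughly like the fragment below; `encoding` and `returnStatus` are real parameters of the `bat` step, but the surrounding code is a sketch, not the demo's exact output.

```groovy
// Run the Windows batch commands, decode the output as UTF-8,
// and capture the exit status instead of failing the build on non-zero.
def status = bat(
    script: 'echo hello world\r\ndir /OD',
    encoding: 'UTF-8',
    returnStatus: true
)
echo "bat exited with status ${status}"
```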
So I have just taken code that it generated for me, pasted it into my script, and now I'm gonna run it. Let's watch to see what happens, and see if I'm any good at writing Windows batch commands. Special thanks to my daughter, who donated her computer to let me use it as an agent after it got old. You can tell that it's old: notice how long it's taking to clone this relatively small 60-megabyte repository. But this pipeline syntax generator lets me choose what I want to do, and then it will generate the code for me. Now, that one I just did was actually relatively simple, right? It's not hard to write a batch file. But when you're doing a checkout and using all of the options of the Git plugin, it's really painful if you don't use the snippet generator, so we're definitely going to use it for this one. Oh, we'll go after a public repository; we don't actually need to authenticate. Now, there may be additional settings that I want to add, like, oh, I need the advanced clone behavior to not fetch tags and to honor the initial refspec, and I want to time out in three minutes if it doesn't finish on time, and I need to check out to a specific local branch; I'm gonna name that branch master. All sorts of things like that, and I could keep adding those. And guess what? When I generate the pipeline script, there is all the magic that I needed to do that job. So now if we go back here, we should see, ah, notice, here's that, you remember, I gave it echo hello world and dir /OD. There it is. So the pipeline snippet generator is a great way to make your life simpler. It just is. The same thing exists for declarative directives as well. This one, where in declarative I say I only want to run on specifically labeled agents, like one with the label windows. Generate that, and there it is. And now I could paste this into that same replay. Any questions so far? All right. 
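A generated checkout with those options looks roughly like the first fragment below (the repository URL is a placeholder; the extension class names come from the Git plugin), and the directive generator's output for the agent label looks like the second.

```groovy
checkout([$class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [
        // advanced clone behavior: skip tags, honor the refspec, 3-minute timeout
        [$class: 'CloneOption', noTags: true, honorRefspec: true, timeout: 3],
        // check out to a specific local branch named master
        [$class: 'LocalBranch', localBranch: 'master']
    ],
    userRemoteConfigs: [[url: 'https://github.com/example/some-public-repo.git']]
])
```

```groovy
// Declarative directive: run only on agents labeled 'windows'
agent { label 'windows' }
```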
So you have been very tolerant of almost three hours of this. Thank you very, very much. I would like to be able to attend the next session, and I believe it's scheduled to start in about five minutes, because I want to talk about Java 11 with others. Any questions you've got in the last five minutes before we end? Hello, Mark. This is Sudesh over here once again. Yes, Sudesh. Hi. In a pipeline script, can you please help me understand: if there is a requirement for a restart, how can I implement that? A restart from a particular stage. So you want to do a programmatic restart from a stage? Yes, that's right. So let's consider you have a build stage and a test stage, right? Let's consider that the build stage is successful, and there is some failure in the test stage. How can I ensure that in my next run I don't build it once again, but resume from the test? So that's a good question. Unfortunately, I'm not sure, Sudesh, that I know the answer, because I'm used to using restart from stage at this level. If I remember correctly, we may have to ask Darren Pope or some other friends with more pipeline experience. I suspect what you need to do is, in preceding stages, stash or archive the results of that stage so that you can unstash them in the later stage. So let's try that, shall we? Should we do a little experiment? Thanks, Mark, yeah. So I'm gonna do a run of, oops, there we go. So with this number six, it's saying restart from stage, and it's still doing the git fetch. So I'm not sure how to guarantee that the restart from stage does not perform the work. Oh, here it is, actually, there we see it. Okay, it's showing it. It skipped the build stage and went right into test. So restart from stage did do what we expect here. Let's look at it in Blue Ocean to see. Yes, okay, good. So it did do the skip. 
So the crucial thing for you, Sudesh, is that you must assure that the build stage has archived its results, either to an artifact repository like Nexus, or through a stash if the results are relatively small, and then that you unstash them in the test stage. That's important also because it's possible for the build stage to run on one agent and this parallel test thing to run on two different agents, and if you have a dependency on the build results in the test, which most of us do, you want to have saved them in the build and then unstashed or restored them in the test. So here, let's try that, and let's use our techniques here. We're going to add a step which is a stash. Let's see, is it called stash? Yes, stash some files to be used later in the build, and the file we're gonna stash is readme.md. Although that one's already there; no, we need to do something that's visibly not there. So how about this: we're going to create a file, and we're going to do that with a shell step: cp readme.md readme-new.md, okay? And before we do that, we're gonna do a git clean -xffd, so we get rid of anything that might have already been there. So this new file is the one we're going to stash. We add a step to stash, and we're gonna stash that file. Now, in the test stage, we need to do an unstash. Oops, maybe they call it restore. Yes, restore files previously stashed, and we need to do that in the second test stage as well. And I apologize, Sudesh, we're going to run out of time, but let's just do this for fun. I think we should be able to see it, so let's watch it. So we've got a stash and an unstash, or a restore and a restore. Save that: stash a file in build, unstash in test. Okay, and this will very shortly launch. So there's six branches. Oh yes, master is building. So there it restored the file, and here it restored the file. 
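Put together, the stash/unstash pattern from this experiment might look like the sketch below. The `preserveStashes()` option is my addition, not shown in the demo: stashes are normally discarded when a build ends, so a declarative pipeline needs it for the stash to survive a Restart from Stage.

```groovy
pipeline {
    agent any
    options {
        preserveStashes()  // keep stashes so Restart from Stage can unstash them
    }
    stages {
        stage('Build') {
            steps {
                sh 'git clean -xffd'             // remove leftovers from prior runs
                sh 'cp readme.md readme-new.md'  // create a file that only Build produces
                stash name: 'build-results', includes: 'readme-new.md'
            }
        }
        stage('Test') {
            parallel {
                stage('First Test') {
                    steps {
                        unstash 'build-results'  // restore, even on a different agent
                        echo 'Testing with readme-new.md'
                    }
                }
                stage('Second Test') {
                    steps {
                        unstash 'build-results'
                        echo 'Second test with readme-new.md'
                    }
                }
            }
        }
    }
}
```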
So did that address your question, Sudesh? Yes, thanks Mark, that was exactly what I was looking for. Thank you. All right, thank you very much to all of you. Thank you for being part of the Jenkins contributor summit today. I'm gonna drop off, and I'll make sure the session recording gets uploaded so that it's available. Thank you very much. Yeah, thank you very much. Thank you everyone. Thank you for joining, Dheeraj. Thank you, and have a good night's sleep. Thank you everyone. Aditya, likewise, it's midnight your time or worse. You're both heroic, thank you so much. Thank you very much. Bye everybody. Bye. Bye bye. Thank you. Take care, bye.