And we're going to start with Oleg Nenashev talking about developing pipeline libraries locally. Oleg, take it away.

Oleg Nenashev: Okay, so I'm sharing my screen again. I'm going to talk about developing pipelines on local machines. In the Jenkins community I work on a number of things; I'm a member of the Jenkins core team, I work for CloudBees, and I participate in several projects, including the LibreCores project, where we work on open-source hosting for hardware projects and their CI flows. We use Jenkins and Jenkins Pipeline there. When I was working on pipelines for that instance, I ran into some issues with deploying complex pipeline libraries, and I'd like to share some of my experience with approaching such development. In the Jenkins project I work on many plugins, so you have probably seen me in plugins like Custom Tools, Role-based Strategy, et cetera. I also maintain Remoting and integrate pull requests in the core, so if you have questions about these components, just reach out to me in the Jenkins IRC channel.

Today I'm going to talk about pipelines. Pipelines are very good if you want to set up configuration as code for your projects, because you can build a kind of framework using pipeline shared libraries and thereby encapsulate the complexity of your build definitions. So if you want to build a framework for a number of projects, using Pipeline can really save you time. The problem is the development of such pipeline solutions. Here's a short summary of the current state of pipeline development tools. There is only one big green block: library management, that is, the ability to manage libraries and download them. There is still no dependency management, but I think that part is more or less done. On the other hand, there is a lack of other tooling. For example, there is not much IDE integration: you can't easily debug or analyze your code in the IDE before submitting it. And of course there is a lack of documentation. Sometimes you need to migrate complex flows described in freestyle jobs or plain batch scripts to Pipeline, and you need to deal with the existing environment somehow in order to deploy pipelines efficiently.

There are many approaches. One straightforward approach is to just set up a Jenkins server, put the pipeline libraries there, and debug everything on that Jenkins instance. The problem is that when you develop a pipeline this way, you usually run into issues with, for example, script security, or just plain syntax mistakes. So effectively you end up going through many cycles: you modify your code, commit, run the test, the test fails, you start over, modify, commit again, and it takes a lot of time. What I would like to achieve is to avoid spending so much time on these iterations and to be able to run everything locally. In order to do that, I have set up an instance; here's the schema for it. I use configuration as code not only for job definitions, but also for infrastructure as code. In my case I use Docker to spin up Jenkins development instances as well as production instances, so I can set up an instance easily. And in order to be able to work locally from my IDE, I use several hacks. For example, I use the File System SCM plugin to connect Jenkins to a local directory mounted as a Docker volume, which acts as a kind of local repository located on my machine.
So effectively, in the instance I get, I have the code locally, but this code gets propagated to my instance via the volume, and hence I can edit it and the changes are automatically reflected in IntelliJ IDEA and, of course, in Jenkins. So this is how it works. It sounds simple, but unfortunately there was no way to do it with the standard tools, and I proposed a File System SCM patch for that. I will show it later, but now I'm going to show you how this instance works. I'm going to present the ready demo; you can find it on Docker Hub with all the documentation. I'm just going to go through the configuration part and then through the behavior of the pipelines. Actually, that's all for my slides.

Okay, let's go to the IDE. I have several projects open, and one of them is the Jenkins configuration-as-code example. Effectively, this is the instance I was describing: an instance which has been fully configured via a Dockerfile and via system Groovy scripts. So let's take a look at the Dockerfile. Effectively, I start from Jenkins LTS, the standard image provided by the Jenkins project. I apply several hacks to configure the update center, because I need an experimental version of my File System SCM plugin; from the next version it will be available out of the box. After that, I just install plugins, install the environment, and get the basic instance running.

How do I configure it? In Jenkins, there is the option to run boot hook scripts defined in Groovy. If you look in the Jenkins documentation, you may find a one-page description of these Groovy scripts, but it's actually quite a powerful engine which allows configuring almost everything in the Jenkins runtime. The idea is that once Jenkins starts, it first loads all the plugin configurations, then the hooks get triggered and Jenkins goes through all the Groovy scripts and invokes them one by one. For example, there is a pretty complex Groovy script which initializes my authentication engine. What does it do? I use HudsonPrivateSecurityRealm and I register several users; for example, there is a user called "user" and also a user "admin". But you may notice that there is an if condition: for demo purposes I use admin, but on production there is no admin user in the system at all. There are only users who have read and write permissions on jobs, but no admin permissions. And then in this script I configure the Role Strategy plugin in order to set up security. This configuration may be quite complex, but effectively it just uses the Java APIs of Jenkins. I configure roles using ownership-based security, so instead of having hundreds of roles, I just set up roles for owners and co-owners and then assign them.

So, Oleg, if I can come in for a second here, there was a question about the difference between pipeline as code, infrastructure as code, and configuration as code. You kind of passed that up.

Okay, so it's a kind of philosophical question. When I say configuration as code, I mean a combination of infrastructure as code, so your system is configured as code, and the configuration of jobs, which is called pipeline as code in the Jenkins project. I agree that the term configuration as code may be quite confusing, but yeah, I use it just as a combination of infrastructure as code and config as code.

All right, thank you.

Okay, so here's my instance. You may see that there are other files; for example, I configure Docker in the same way. I use the Yet Another Docker plugin, and here's the configuration.
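A minimal sketch of the kind of Groovy boot hook Oleg describes, dropped into $JENKINS_HOME/init.groovy.d/; the account names and the DEV_MODE switch are assumptions for illustration, not his exact script:

```groovy
// Hypothetical init.groovy.d script: set up the security realm at boot time.
import jenkins.model.Jenkins
import hudson.security.HudsonPrivateSecurityRealm

def instance = Jenkins.instance
def realm = new HudsonPrivateSecurityRealm(false)   // false: no self-signup
realm.createAccount('user', 'changeme')
if (System.getenv('DEV_MODE') == 'true') {
    realm.createAccount('admin', 'admin')           // dev instances only, no admin in production
}
instance.securityRealm = realm
instance.save()
```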
You may see that this code is almost declarative, but that's really just a feature of the Groovy language; effectively, it's fully valid Groovy code built from the Java APIs. Moreover, since I use IDEA, I can define a virtual pom.xml with dependencies on all the plugins I use, and I get a kind of virtual Jenkins plugin which I can verify. So I can verify that all this code is syntactically correct, I can run static analysis, and I can even debug it if I want; I can show that later. So yes, these are valid scripts.

Now, about pipelines. Here's an example: I have a kind of local pipeline development library. This script checks for the existence of a particular directory in the Docker image; it's a mount, taken from an external source. If we find it, we set up a new folder with a root pipeline library, and there we define a library configuration which takes its definition from the File System SCM plugin, from this location, and then we just enable this library by default. After that, we go through additional directories and initialize extra libraries I may want to use in my project. So, for example, if I develop a project with 10 pipeline libraries in parallel, I can just get snapshots of all of them in my environment, and I don't need to commit to all 10 in parallel when I modify something. And after that, I just set up several reference jobs. That's how it works under the hood.

Let me show you how it works in a demo. Here's the configuration of the plugins I install. Since I use Docker, I can just run the build of the image. Building this image for the first time will take a lot of time for sure, but since everything is cached, I just launch it, and that's it. The point is that I install plugins once and only once, when I build the image; when I run the image, I don't need to configure anything else. So here's my command line. What do I do here? I just start the image. I pass the Maven repository, just to have a Maven cache; actually, I pass it to the agents as well. Then I specify volumes in order to pass in the pipeline library and the extra pipeline libraries I may want to use. And that's it. I also use a dev host, just a callback for Docker in order to connect to the instance. There is also the HTTP port and a JNLP port. But really, it's just the startup of a Jenkins instance.

So I've clicked the start command, and now, if everything is fine... yes, Jenkins starts initialization. First it passes through the common initialization steps, and then it reaches the stage where it needs to invoke the Groovy boot hook scripts. It seems to be taking a while now; let's see. Okay, you may see that there are custom log messages saying that something is being loaded. You may see, for example, the initialization of the development folder I showed you: it adds the pipeline library, and it also adds two additional libraries it was able to discover on my file system. The instance is being initialized automatically. There are also some tweaks; for example, I configure security from this instance, I create a locale just to enforce English on my setup, and I set up tools. On this instance I am going to demo the Jenkins.io pipeline library: in Jenkins, you can build plugins using single-line build scripts, and I'm going to demo that in this setup. I need some tools in order to make it run, and they also get configured via my image. So Jenkins is up and running; let's go to the web UI. Okay, I'm going to log in as admin, just to be able to show you something. So here's my image.
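For reference, the single-line build script mentioned here is literally the entire Jenkinsfile of a plugin repository; buildPlugin is a global variable provided by the community pipeline library:

```groovy
// Jenkinsfile: the whole build definition for a Jenkins plugin repository.
buildPlugin()
```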
Everything has been initialized automatically by configuration as code. There are several folders; today I'm going to show you only the development folder, and there is the pipeline library folder I mentioned. There are several jobs initialized by the script. If we go to the configuration page, we may see the configuration of the pipeline library, which uses File System SCM and pulls from master. That's the default version; obviously it doesn't really matter for File System SCM, but we just comply with the web UI. And that's it.

So when I launch any job in this instance, for example the build of the Job Restrictions plugin, let me show you the configuration of this job. It just invokes buildPlugin, a script in Jenkins which performs all the build steps; and yes, we use Jenkins on Jenkins here. If we go inside, it's a pretty complex script. So here's buildPlugin. Effectively, it's not even a method, it's a global variable. It performs builds of plugins on several platforms, like Linux and Windows, on particular JDK versions, and it has additional options which enable, for example, FindBugs, Checkstyle, et cetera. Everything is configured in the script, but as a plugin developer, I just need to put a one-line Jenkinsfile in my repository and everything builds automatically. Here's this Jenkinsfile. You may see that I take it locally, not from the repository, just because I didn't have a Windows label on my instance, but everything else happens as in the normal build. So we can just start the build.

Okay, so it started execution of the project. It has checked out the pipeline library from File System SCM, and now it executes it. It will take a while to complete the build; it usually takes several minutes. But the advantage here is that once we perform the checkout, we already have the pipeline library cached. So we can go to the pipeline library definition and, for example, break something. Let's assume we're developing the pipeline, and let's just add something like "exit 1" here. I don't even care whether this syntax is valid or not, because, yeah, the pipeline should just fail. Okay, I click build. And here's the failure of the second build. What does it say? Yeah: no such method "exit". So I just modified the thing. This container with Jenkins runs in Docker; I use Docker for Mac, so effectively it's even located on a remote virtual machine, but for the developer it all happens transparently.

Moreover, I can do this not only on the library level, but also on the job level. I have another job, for the Apache HttpClient API plugin; it's one of the new plugins. In this plugin I have a Jenkinsfile which is located locally, and this job's configuration also points at File System SCM, so I check out not only the pipeline library but also the Jenkinsfile itself. And if I launch it here, it also starts executing, and hopefully it fails. Yeah, so it's also a local execution. Okay, it fails: also "no such method". I've just broken the build definition; if I fix it, the build will pass. So effectively, that demos how to configure the instance.

Let's go back to our pipeline library. This build has passed successfully. We can see test results, we can see FindBugs; everything comes from the pipeline library. But as I said, now I can develop these things locally. And if I need to define multiple libraries, I can also do that via the system configuration, because the only thing I need in my configuration-as-code setup is to specify additional sources. So here I just go...
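A hedged sketch of what such a configuration-as-code wiring can look like: a folder-scoped library backed by the File System SCM plugin, enabled by default. The FSSCM constructor arguments vary between plugin versions, so treat this as an outline rather than the exact demo script:

```groovy
// Hypothetical boot script: register a local directory as an implicit pipeline library.
import jenkins.model.Jenkins
import com.cloudbees.hudson.plugins.folder.Folder
import org.jenkinsci.plugins.workflow.libs.FolderLibraries
import org.jenkinsci.plugins.workflow.libs.LibraryConfiguration
import org.jenkinsci.plugins.workflow.libs.SCMRetriever
import hudson.plugins.filesystem_scm.FSSCM

def folder = Jenkins.instance.createProject(Folder, 'Development')
def scm = new FSSCM('/var/jenkins_home/pipeline-lib', false, false, null)
def lib = new LibraryConfiguration('pipeline-library', new SCMRetriever(scm))
lib.defaultVersion = 'master'   // required by the UI, effectively ignored by File System SCM
lib.implicit = true             // load by default, no @Library annotation needed
folder.addProperty(new FolderLibraries([lib]))
folder.save()
```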
Yes? So I was wondering, this is really interesting: are you actually giving recommendations for how you test the libraries themselves? Is that what you're doing here? Or are you talking about the plugin?

Yeah, so this is rather about development; I just develop this stuff. If we talk about testing, I would definitely recommend frameworks like JenkinsPipelineUnit. Unfortunately, we don't have a demo of it today, but it's something I use in my production libraries, and it can be combined well with this approach. Let's take a pipeline library: unfortunately, in general, we don't have tests for this one so far, but hopefully I will create a pull request for that soon. So we have a src folder, and if we wanted to test it with JenkinsPipelineUnit, we could add the tests there. Then we could have a single repository with tests and the library, and then, using the File System SCM plugin, we would be able to launch the tests directly from Jenkins, also on the local instance. So yes, it can even be combined with JenkinsPipelineUnit.

Nice. Okay. Any other questions? I guess not. Actually, there is one more, which is: if somebody wanted to get started with this, do you have a link to the resources?

Yeah. So this is just a proof-of-concept approach. You can find all the demos here. There is a demo on Docker Hub, which provides just a running instance you can start and modify. If you're interested, there is a repository on GitHub, so you can just fork it and then modify it. If you're interested in more advanced demos, you can go, for example, to LibreCores CI on GitHub. It's a kind of work-in-progress project which also uses the same configuration approach. You may notice that there are some changes in how the Groovy scripts are organized: instead of having all scripts at the top level, I have a Groovy boot script wrapper, effectively a thing which implements a Groovy class loader and an error handler. So here I can use Groovy classes and other advanced things to configure my instance, and it simplifies the scripts a lot. For example, I've already presented the authentication matrix initialization; here I just use an ownership-based security helper in order to define the same things. So it simplifies the configuration. And for particular things like Docker, it becomes even fancier: for example, I moved a Docker cloud templatizer to a library, and then I just create a cloud from a template, define the extra options I need, and get multiple image configurations in one short sweep. And it's still fully valid code, which you can even debug from your IDE by connecting to the Jenkins instance.

Neat. All right. Well, let's make sure that the links to these resources are on the online meetup page, and I'll also add them to the description of the video when we're done with the livecast. So then I'll add all the resources.

Great, thanks. Okay, so that's all from me. If you have any questions, just follow up in the IRC channel.

Great. Thanks, Oleg. Okay, thank you.

All right. Next up, we have Michael Hüttermann, and he'll be doing a presentation on delivery pipelines with Jenkins.

Exactly. Thank you very much. So my name is Michael Hüttermann; that's me. I'm a DevOps consultant working with CloudBees. During the next 15 minutes, I'd like to give you some appetizers on how to set up comprehensive, holistic delivery pipelines. We start from the very beginning: commit and push some changes, and expect those changes to land on our production systems in the cloud later on. So, actually, don't be shocked.
This overview summarizes the different steps which are covered in the quick demo. So we have a mix of different DevOps concepts, and of course we use a zoo of different tools to implement those concepts.

Michael, could I ask you to maximize that window so people can read more of it? Sure. Maybe zoom in a little bit. Great, thank you. There we go. That's great.

Actually, the zoo of different tools is strongly underlined and backed by Jenkins, right? So we have a lot of different tools, and Jenkins is the foundation which integrates this overall ecosystem: all the different tools, for example, to inspect the code, to inspect binaries, to integrate binary repository managers, and so on. So those are the tools, and above all, Jenkins, here on the left side; I hope you see it.

Back to the concepts. The idea is to start with the continuous build, to give some quick feedback; that's the sub-pipeline at the very top. It just checks out code and runs unit tests (see the sketch below). The next one is a more holistic one, with the green background: it continuously delivers specific dev versions. Major parts are, for example, deriving a release version from our Maven snapshots, which you can see here, or integrating and provisioning some target environments, and all that stuff. So you'll see a lot of different tools integrated; actually, those pipelines are derived from real-world success stories. Then, after we have created defined versions, we can cherry-pick. We, in the sense of busy developers or domain experts, can cherry-pick available versions and promote those versions to be release candidates, RCs. And afterwards, you can also decide to cherry-pick and promote RCs to GA, general availability versions. Those versions are supposed to be deployed and promoted to our production system, which is located in the cloud. So this is a very quick summary of our overall ecosystem.

Now let's move this away and go to the Jenkins dashboard. I will maximize it very soon, but before that, I would like to open my favorite IDE. We could use a plain text editor, of course, but here it's IntelliJ IDEA. Let's zoom in. So the main change we want to promote and stage toward production is actually this one: we'd like to change this string, for example to a new value. And because we are very textbook-like developers, we also align the test cases. And because it was a very long day (here in Germany it's already almost night) and we are not so concentrated, we try to set up and align this test case. We also have the servlet, and we make some changes in the servlet. And now we think that this set of changes is sufficient to bring this change set to production. Because it's a long day, we ignored the best practice of testing locally, right? And Jenkins, as our handy and very sophisticated automation engine, is our single point of truth: it provides a couple of quality gates and detects any flaws and test failures. So let's find a reasonable commit message: "Change me to another string." You know, you get the point; it's not so important here. And then we can directly push this change set to GitHub, in this case. So this is our simple example of a business application. Now let's go back to the web browser.
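A sketch of that fast-feedback continuous build sub-pipeline, assuming a Maven project; the stage names and commands are illustrative, not Michael's exact script:

```groovy
// Hypothetical fast-feedback sub-pipeline: check out, run unit tests, publish results.
node {
    stage('Checkout') {
        checkout scm                          // pull the change that was just pushed
    }
    stage('Unit tests') {
        sh 'mvn -B clean test'                // fail fast on the broken test case
        junit 'target/surefire-reports/*.xml' // publish the results for quick feedback
    }
}
```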
First of all, let's quickly look at our business application, which is also here in the cloud. This one still serves the old entry page, the one which is supposed to be replaced by the new landing page. Having said that, we can go directly back to our dashboard. Here you see that we have derived a couple of different delivery pipelines from the slide: you see the dev version in the middle, the continuous build on top, and a couple of other convenience builds listed below. And we can now see that the continuous build project detected the test failure, obviously. So something happened; this project is set up just to deliver fast feedback.

Michael, would you zoom in a little bit? Oh, again. I'm sorry. That's fine. Do you want me to start again from the very beginning? No. So, I hope that's better. Yes, great. Thank you. Okay, good point.

So obviously we have one test failure here in our test results, and we can zoom in and check what happened. We have the Blue Ocean layout and user interface, and also the information about the test failures; that's the point here. We have more information always at our fingertips, and we see that we obviously have to fix the test case. So that's a good point to switch directly back to the IDE and fix it. It's clear that we have a test failure and we want to fix it. Now let's bring it back to GitHub and push it. The hope is that this quick fix will address the test failure, right?

So now let's go back to our dashboard. Usually we have a GitHub hook which contacts our Jenkins installation to notify it that a change occurred. Let's see; we can also trigger the build manually. We quickly switch to Blue Ocean and see that it's obviously passing now; the test passes. As I said at the very beginning, these are very quick appetizers, so if you haven't done so yet, you should definitely give Blue Ocean, the Blue Ocean set of plugins, a try.

Now let's navigate back to the Jenkins dashboard, and we see that we have glued the pipelines together: the continuous build, and also the pipeline for delivering dev versions. We can move into this one, and we see that this pipeline is already running, and that it was not a lie on the slide at the very beginning: all those steps are really processed. We do some database migrations and integration tests, utilizing Chef, Puppet, and many more things. And obviously we also have a quality gate which inspects the source code and detects any design flaws. So that's a good moment to quickly move to the dedicated application, SonarQube. Let me enlarge it a little bit. Here we can get even more information; you get the point: you can integrate it into the ecosystem and you always have all the information at your fingertips. SonarQube, the dedicated tool for inspecting source code, shows and delivers even more information about the design flaws that were detected. We can zoom in and navigate to the class, and now we see that, obviously, there are some really bad practices according to the defined set of rules. So this is a good reason to quickly switch back to our IDE again. Let's change the size.
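A minimal sketch of the GitHub hook wiring Michael mentions, so that a push notifies Jenkins instead of requiring a manual trigger; githubPush() is the trigger symbol provided by the GitHub plugin:

```groovy
// Register the GitHub push trigger on this pipeline job, so the webhook
// from GitHub starts a build automatically on every push.
properties([
    pipelineTriggers([githubPush()])
])
```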
And now we should remove this line, or comment it out, and bring it on its way again. You see that SonarQube also has a plug-in for IntelliJ IDEA, so you should use that as well. Let's push it. We're done with SonarQube for now, so we can close the window and go back to the Jenkins dashboard. Now we trigger the pipeline again, which is composed of different sub-pipelines. During the few seconds, or more or less one minute, this pipeline runs, we can quickly look into the underlying definition. This is the pipeline where we have defined the different stages, a scripted pipeline in this case, actually doing some setup work; I just want to give you some teasers and appetizers. We check out the code, do some setup stuff, set up Docker, trigger the unit tests; that pretty much maps to the stages described on the slide at the very beginning. Then we do some integration tests. And, this is also a good practice: we want to build and package the WAR file only once, so it will be reused. Then the database migration, then the quality gate, which you saw already. Then we want to distribute the WAR file. We could also store the artifacts in Jenkins, but often it's even better to store those binaries in a dedicated binary repository manager: the WAR file, and of course also the Docker image. Because the WAR file is packaged along with some middleware, Tomcat 7, into our Docker image, and the Docker image is pushed to Artifactory as well. And all this is on GitHub; I'll give you the link at the very end. So that's the pipeline, the underlying Groovy-based pipeline for delivering dev versions.

And what we see now is... I hopefully did not forget again to change the size a little bit; I'm so thrilled about this demo that I always forget to maximize it. Now we see that the last run obviously passed all the different quality gates successfully, including the promotion of the binaries, the WAR file and the Docker image, to Artifactory, or Nexus if you like, and also some additional reports integrated. So Jenkins is really the Swiss Army knife which integrates this complete ecosystem and a lot of tools. So that's the sub-pipeline for delivering the dev versions.

Now we can proceed. I think I still have... I had 15 hours of time, hopefully, or was it 15 minutes? I mix it up. So I have to hurry up. We can now proceed and cherry-pick the defined version and derive a release candidate version from it. We do that directly in this version, and we trigger it, and you see a list of available versions. As you remember, we have actually released version 1.00, which is now placed in Artifactory, in our case in the cloud. This entry is listed here, and we can now promote this one to be a release candidate. What does that mean? We just add some more context information, and we promote the binaries, the WAR file and the Docker image, to dedicated logical repositories inside Artifactory. So you see: we have some preparation, labeling, adding context information, and promoting the Docker image and also the WAR file. It's really important that we always process and operate on the packaged binaries which were packaged at the very beginning. So that's the RC build. And we can now go back to the Jenkins dashboard and zoom in on that one.
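A condensed, hypothetical sketch of the scripted dev-version pipeline walked through above: package the WAR exactly once, gate on SonarQube, then push the WAR and the Docker image to the binary repository manager. Tool names, credential IDs, and paths are placeholders, not Michael's exact definition:

```groovy
// Hypothetical scripted pipeline mirroring the stages described above.
node {
    stage('Checkout')   { checkout scm }
    stage('Unit tests') { sh 'mvn -B clean test' }
    stage('Package') {
        sh 'mvn -B package -DskipTests'             // build the WAR only once
        stash name: 'war', includes: 'target/*.war' // reuse it in later stages
    }
    stage('Quality gate') {
        withSonarQubeEnv('sonarqube') {             // SonarQube Scanner plugin
            sh 'mvn -B sonar:sonar'
        }
    }
    stage('Publish') {
        unstash 'war'
        sh 'mvn -B deploy -DskipTests'              // WAR to the repository manager
        def image = docker.build("demo/app:${env.BUILD_NUMBER}")
        docker.withRegistry('https://registry.example.com', 'registry-creds') {
            image.push()                            // image to the Docker registry
        }
    }
}
```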
And now let's try to promote the RC build to be a GA build, after some more testing, functional testing, and maybe providing this version to some dedicated test environments. So you see, we now want to promote those binaries from Artifactory. For example this one: we have an Alpine, Tomcat-based image. And the GA build is located here on JFrog Bintray; you can of course also use, for example, Amazon AWS to host your Docker images. So here we have the GA build, the product. What you also see, and I really like this, is that our product is made up of a generic part, the WAR file, and the Docker image; let me go into the Docker image. So we have the Docker image here: this one was pushed to this public Docker registry some seconds ago. We have some information here, a label: Michael, that's me. So it really exists. Okay, so that's the GA build.

And now, to close and finalize this, we can trigger the deploy. And the deploy, you know, it's not 2014 anymore, right? Nowadays it's about more than moving around some random Docker containers: we really want to set up services and stacks and all that stuff. That's what is offered and provided by Amazon, for example, or also by Oracle Cloud. So let's open Blue Ocean again and trigger it. Again, we want to cherry-pick and promote exactly this version. You can go into the pipeline and you see that those stages actually use the APIs of the underlying tools to manage and promote those Docker images to the cloud, to stop the current deployment, maybe do some hot deployment, and all that sophisticated stuff.

Michael, let me just jump in here while that's running. You've used a number of tools here. What do you think the minimum viable set of tools would be to build a solid delivery pipeline with Jenkins?

That's a good question. The minimum number would be one, I think. From my long experience, it's really necessary to have a very feature-rich automation engine, in our case Jenkins. So I would expect at least one tool, right? One to N. Particularly in very large projects, you will already have a lot of tools you want to integrate, for example for the deployment to the cloud or to a private cloud. You may also want to double-check and ask the user if they really want to go the last mile; we'll come to that.

And now the promotion is active: the Docker image was pulled from Bintray and pushed to Oracle Cloud, and the container and the service were created. A couple of minutes before the demo, I upgraded my Jenkins installation to the latest version, and I'm very happy that it's so stable that nothing happened. And now, crossing fingers that the last, final part succeeds as well: we now expect a different application, a different Docker image, here in the cloud. And that's the case: hello, Jenkins! Now would be a good time to clap your hands, but I cannot hear you. So that closes my very quick round trip; I think it took about 15 minutes, more or less.

You had a few extra minutes, because Oleg went a little faster, so that was good, though. I had another question, actually. Are many of these tools, such as Artifactory and SonarQube, useful outside of the Java ecosystem? I mean, you're doing Java here, but are they useful elsewhere? Absolutely, absolutely.
So in our case, the sample is strongly based on a Java EE application which is shipped via a Docker container. But usually it's not just about Java; it's a more heterogeneous zoo of different scripting tools, languages, and platforms. And what you mentioned is a very good point: Artifactory, for example, is able to take care of all the different artifact types, not only Java and Docker images (serving as a Docker registry), but also managing your RPMs, for example, or Python packages and all that stuff. That's really important. It was a very good point that you should take care of all the different binaries, not only Java, to bring a release to production in a functionally and technically consistent degree of maturity.

Cool. All right, thank you very much. That was really interesting, to see a full pipeline from end to end like that. Next up, we'll have Thorsten Scherler doing a presentation called Pimp My Blue Ocean. Before he starts, I'll just remind people that we are taking questions on the IRC channel, the Jenkins IRC channel. All right, Thorsten, take it away.

All right, thanks, Tyler. Can everybody hear me? I hope so. Tyler, scream if not. You're doing good. I can hear you. So we're talking about Pimp My Blue Ocean. About my person: I'm one of the original Blue Ocean developers, now working within CloudBees in another team, but we're actually using Blue Ocean to deliver additional functionality and features. What we're going to talk about you can review and try yourself right now in front of your computer if you want: go to my repository on GitHub; I created the JW17 BO seed repository. With that, you get a basic setup of a Jenkins plug-in, completely functional against the current version of Blue Ocean. Further, I added a Dockerfile, in case you do not want to run the example, npm, and things like that in your normal box; you can do it via Docker instead. And for community and testing reasons, we of course added a Jenkinsfile. So you have here the readme explaining how you can do all that. I will not explain it in detail; let's rather dive into the presentation.

So what we're going to do is create a custom component, and we will use our custom CSS. What you're seeing right now, if you look carefully at the URL in my browser, is actually a React Storybook. On the right-hand side you see my IDE, where the actual presentation is hosted. And as you can see, I can change something, save it, and it gets updated directly, right away. So I'm not lying. What I'll show you now is how to extend the Jenkins logo here. As you all know, Jenkins is built around extension points. So let's dive into the typical anatomy of our plug-in, or of normal Jenkins plug-ins for the frontend, let's say. Basically, the two important files are the jenkins-js-extension YAML and the custom component that we are creating. The index.jelly is the more traditional file, let's say, for classic Jenkins, and this jenkins-js-extension YAML is very important. Let's have a look at how it actually looks. What we're doing with this file is telling Blue Ocean to use a different extension than the one that is configured as the default.
So how did we find that out? How do you see where or which extensions exist? Sometimes it's not very well documented, let's say that, but there are one or two good places. First of all, the dashboard: if you have a look at the Blue Ocean dashboard's jenkins-js-extension YAML, there you see the default extensions. And it's always the same pattern: you see the extension point, which has a unique ID as identifier, and then you store your component within the root, or, as in this example here, under a different path; it doesn't have to be on the same level as the YAML file, but it should be relative to it. So let's see how that is actually implemented. If you go to the core plug-in, there's a component called ContentPageHeader, and here we define our extension. What we're saying is: give me the Jenkins header logo extension, and I will pass, as the default implementation, the blue logo with the prop "home", which results in something like this. We have here the extension point, we have here the href for the Blue Ocean logo, and you can see this is a big SVG where the extension point, where the logo, is passed through.

So back to our presentation. What we're doing here, like I said, is overriding our logo, the Jenkins header logo, and our final result will look something like this: "I love my Jenkins". So how do we implement that? If we go to the corresponding component, we can see that it's a traditional React component: you can see that I created a class, I created (or rather used) an icon, and then I used the children. If I go ahead and deploy that on my running Jenkins, it becomes something like this. For example, if I decide I don't want to render this SVG at all, I can go ahead and get rid of it, and I save it. Now I actually have to tell Jenkins, because if I refresh here, nothing will happen; I haven't deployed it yet. So let's do that. For every command I use, I have a two-letter alias; NB would be "npm run bundle". So if I type NB here, it tells Jenkins to deploy my new bundle. If I now go here and refresh the whole thing, you see that the Jenkins logo is gone. So you can see it's quite quick to implement my work here and then directly see the result there. If I go the route of bundling every time I change my plug-in, there is a certain drawback, but...

It's really hard to see; could you zoom in a little bit on the left, on your browser? Cool, thanks. Sorry about that. No, it's fine, cool, great.

So if I now go and refresh again, I get my Jenkins logo back. Some points as a heads-up: you always have to have a default export for your component, and the extension YAML always needs a default implementation; otherwise it will complain and it will not work. So let's go back to our presentation. This is what we're seeing right now. The second thing is, you may have noticed that we have the default blue here and there, but if you compare it with ours, we actually have a different kind of blue.
We made it a little bit darker, let's say. So how did we do that? If you look here, in the classic structure of our component we have main/less, and there we have an extension file. If we use this extension.less file, Jenkins will know that we want to extend the CSS that ships with it and put it into our bundle. You can see it down here; I'm not sure whether you can see it well, so let me make it bigger. You can see that it's picked up by our bundling script: you see the Less file being processed and completed, and here we're actually using this file and compiling it to CSS. The interesting part, when we compare it with our result, is: this is my logo, and we said we have a border-bottom of two pixels, that would be the red one here, and we have this background color. I added that background color only to make React Storybook happy, because we have the same example here, but that goes into the detail of the second part of my talk. So if I get rid of this color here, do a bundling again, and go back to my Jenkins... you can see here there are two colors, this one is darker than this one. And if we remove the background color here and refresh, you see that this is now without any color. So what I'm showing here is basic work with CSS, or better said Less, which then compiles to the CSS that I want. I'll go back to that version.

Now, how did I actually arrive at having a different color here in the header? What we're doing here is setting the basic header default color; you see it here. If you inspect this page, you see that this color here is the same as this one, and if I get rid of it, we have our original blue. So this is actually a trick, or a hack, based on deep knowledge of Blue Ocean, because the problem I had with our solution here is that we don't define a good variable. For example, we could define a primary color variable, saying "this is our primary color", and it would then be picked up by Blue Ocean. There is a ticket open, JENKINS-44466, which describes this problem and how to add theme support so themes can be extended easily. For me, for example, it would be a dream to have an extension point for CSS, where I could then override the variables, the colors I want, for the different extension points.

I think it's now time... yeah. Are there any questions?

Yeah, actually there is one. Sorry, I was muted again there for a second. What are the URLs where people can check out your code? Because this is all really, really cool stuff; it'd be cool to be able to dig into it and play around with it some more.

You can see it on the screen right now. Yes, the JW17 BO seed repository. Okay, I'll include that. Thank you. You're welcome. And so now you're going to move on to your second part. Do you want to introduce that? Sure, yeah.
So that was Pimp My Blue Ocean, and now we're going to move on to a more staid and regularly titled presentation from Thorsten as well: Delivering Blue Ocean Components at the Speed of Light.

Yeah, exactly. So, developing as I've shown you right now is actually quite quick: I don't have to do a full redeploy, I don't have to do a Maven install and things like that; I just have to bundle, and that bundling is relatively quick. So why would I want to make my development cycle even faster? That leads to a problem you normally have in a project: you're working with a project manager, a product manager, a designer, a UX person, and whoever else wants to be informed about the progress of the product. The easiest way, if you don't use dogfooding, or you use dogfooding but don't deploy every PR to it, is to show people your work with an independent version of it. So what we're using in our team right now is React Storybook. It helps us deliver components much quicker and validate those components at a really early stage with the PM and the UX or design person, because they can look at a screenshot; and actually my designer screamed every time something was a pixel off. So it was a good thing to have that fixed before it came to the product.

So what is Storybook? Storybook allows you to browse a component library, view the different states of each component, and interactively develop and test. Here's a screenshot of the one you're seeing right now. As you can see, what I'm doing here is including some specifications: here is my rendered view, our presentation from earlier, and I validated that I can see the main div. So what we're using Storybook for as well is to include acceptance criteria. This one is nice; you can't see it really well... better. Here's an animated GIF to show you that you can render the component in different states, with different data passed to it, so you can very easily simulate different edge cases that you, or somebody else, might find in your component. The other thing is, there are some nice shortcuts: for example, when I press Ctrl+Shift+F, I get my full menu back here, and I can navigate through my stories. For example, here you can see I have two specifications, and we render it; we will come to that right now.

Let's go back to our presentation. Let's make this big again. As I said, you've seen the green dots before. These green dots are based on the Storybook specifications add-on, a really nice add-on for Storybook which allows you to add a unit test into your Storybook component. For someone who maybe doesn't immediately see the advantage: you develop your component, and within a preview of that component you can add the acceptance test. This allows you to simulate all the acceptance criteria that somebody gives you, without yet being in the product.
What we are not checking here is whether what I'm doing actually works in Jenkins; that would be the final integration test, let's say. The other nice thing is that you can not only run these tests in this cycle here, you can also run them later from the command line. So let's go to our hello world example. Let me make my window small... like that, sweet. Here we go. For example, you see that my two tests are in sync right now: we expected our "hello" div to be one, and our logo should be one as well. So we have here a logo, and we pass a child to it, and it reads "I love my crazy". Let's correct that, for example, and put Jenkins in it; let's save it. And now we have "my Jenkins" here. It's not the SVG that we saw, that's obvious, but you more or less get the feeling of how it would look in the real Jenkins. And now, for example, let's say we don't have this logo in it. What happens if I save it here? As you can see, my specification is now broken. So what happens if I run "npm run test" in my command line, for example? It should tell us that we are in a failed state, and that would break our build right now. So, come on... you can see here, I have my failure, and it says: say hello, the unit name is our logo component, expected zero to be one. And here is the link to the module where it's going wrong, the hello spec; it would be exactly this line, 25. Now I can decide whether this is an error I need to fix, or whether I need to fix the test or the code.

So here you see it's back to green again, and as you can see, the only thing that I keep doing is pressing Ctrl+S here, and it instantly refreshes my browser. That is because React Storybook is based on a webpack dev server, which notifies the component to refresh. You can actually see that if you look at the network traffic: you can see that there is frequently a ping to our server. We use something similar in Jenkins, the SSE plugin, where Blue Ocean gets pinged from the backend to refresh, for example, active runs and things like that. So, as I said, when I now run the same NT here in my terminal, it should report success... and you can see it runs, everything passes, as we expected it should.

The second thing is, if I now go ahead and, for example, extend my logo here... actually, let's go back; I think the part I should do right now is explain how I'm actually doing this presentation. Right, exactly. This is how we activated our tests, the ones based in Storybook, in our package.json. If we have a look here in our package.json, you find this component, and we are using Jest not only to run our tests, but also to get an evaluation of our code coverage.
That is, a ratio of how much of our source code is covered by tests. And there is some nice configuration you can use in Jest to change the output and the different processors; we are using the JUnit format in this case. And yeah, we run the whole thing from Maven, for example; we use the Maven test phase, and it's the same as the NT alias, npm test, which is exactly the same. The only difference between the one Maven runs and the one we run with npm test is that we add a lint run there. We use the js-builder from Jenkins as infrastructure, so we don't have to implement all of this ourselves; it's already done in js-builder. One important thing as an enabler, which I didn't mention earlier: to tell Maven to run npm on your npm-based plug-in, you have to tell Maven that with a flag file (the one spoken of as "maven exec node"). With that, Maven knows to start a profile with the frontend plug-in and then do all the compilation with Node and npm. It actually installs Node and npm for you, and uses them to fetch the different node modules and things like that.

Coming back to our presentation: I actually have more time, so I'll spend it explaining the presentation a little bit. When you came in, you saw that these are my slides; we are actually in one of them right now. So, as you can see, here are my slides. What we're doing here is creating an object, and then I add a complete area, and in this area I just use one component: I said I want to render it with a slider. The slider is the overall component, what you see here. And what you see here is a drop-down box. What I did is create, or rather use, the slider, in source, in the JS slider, to activate some nice things. You couldn't see it right now because there's no video, but we also added some key handlers to the presentation, so I can navigate with the arrow keys: the right arrow goes to the next slide, and the left arrow goes to the previous slide. What I'm then doing is getting the slides from the properties, and I create here... this is actually a JDL component, JDL being the Jenkins Design Language: a drop-down box that I'm using here, and I pass the slides I want as the options. And here I render my current slide. So what this component simply does is keep the state of which slide I'm currently on, and each slide then gets the information on how it should be rendered. Since we have seen it before, I'll mention it again: I only imitate the overall Blue Ocean header here, and then use Markdown to render the content.

Just so I understand: you're using Blue Ocean components to create your presentation? Exactly, exactly. This is all Blue Ocean; everything is copied and pasted from Blue Ocean, adapted a little bit, but yeah. So you can do all kinds of things with these components. Exactly. And you've shown us here the testing of those components, and then also shown us those in action. And this is how you develop parts of Blue Ocean, then.
Exactly, exactly. You could actually implement this whole component in the Jenkins instance that I showed you before: I could add a new tab here and point it to the presentation via my plug-in, and it would go into that mode, and you would be able to do exactly what we do here, without any friction.

So what you're shooting for here, in the long run, is to have the same kind of plug-in extension points that have made Jenkins great in the past and allowed users to create new things on top of Jenkins, to have that kind of thing also in Blue Ocean.

Exactly. The aim is, in a second version, for example, that I may use some real-life REST endpoints to render, say, a weather plug-in, and you can include that in your Blue Ocean-based developments, because in the end it's just extending an extension point. And as you said, the great thing about Jenkins until now has always been the extension points. We're enabling that in the frontend as well. We aren't perfect yet; for example, one downside is that not all the extension points allow a simple implementation. Normally, Blue Ocean will use, or call, every extension point implementation it finds: if you have an array of extension point implementations, all of them will be called, and that is not always what you want. These are known problems, but we are working to make it very, very easy to add your Node-based development on top of Jenkins in no time at all.

Excellent, excellent. Thank you. With testing integrated. So, yeah.

If there are other questions, you can always write me; Scherler, that's my name, and you have seen... I will show the link here. And this Blue Ocean seed, the BO seed, also has your presentation in its code, right? Exactly; it has everything, and it works out of the box, standalone. You can see it there and play around with it; you can change my presentation if you want.

All right, thank you very much. Thank you, bye. All right, that was Thorsten Scherler, showing us how to do some really interesting extensions of Blue Ocean and where we're headed with those. Next up, we'll have Stephen Donner talking about Mozilla's declarative and shared library setup.

Hello, everyone. Thank you for the introduction. So, let me get started here. This... yeah, the matrix. Yeah, the matrix there is really awesome. It's fun. I took a screenshot of someone else's earlier. So, yes, Mozilla's been using... Okay, I'll stop you right there and tell you: just zoom in a bit. Oh, okay. Until you feel it's just a little too large; that's the point you should go to. Probably this. Yeah, sure. Okay. Yeah.

So, Mozilla's been using Pipeline since December of 2016, and we've been using Declarative since shortly after that. I want to share the challenges of our pre-pipeline configuration with Jenkins, how we ran our jobs, and then demonstrate how the declarative version of Pipeline, the pipeline model definition, and shared libraries got us to a much better place, at least for our team.
So, previously... I don't have any jobs here from before shared pipeline, or even before Pipeline, but I will tell you that one of the key problems we were trying to solve was that a lot of our config lived in the Jenkins jobs themselves, across many Jenkins instances, at least three. Access to those was limited, mostly to our team, sometimes to some developers, sometimes to some operations folks; there was a lack of transparency, sometimes even for our own team members. We used to basically clone jobs to create new jobs, then rename them and tweak some of the configs and some of the parameters. But in doing so, we would lose a lot of the job history, and we would also make typos: we would lose concurrency settings, we would forget to publish job results to email and to IRC, things like that. We also kept preaching to our developers and our ops people that config is code; ops knew that well, most of our developers at the time did not, and yet we ourselves, because we didn't have Pipeline, weren't really practicing it either. So when Pipeline came along, we were able to abstract a lot of our config out of the config.xml of our jobs and into Pipeline, and write a lot of, not necessarily shared, but consistent code across projects. The other problem we were having was that configuration changes had no audit trail. Even though we were using the Job Config History plugin, it was kind of a pain, because it wasn't in SCM, where you could see changes across many projects, across the team, and across teams. And experimentation was difficult as well: it was hard for us, and for our developers and ops, to change job configurations without either changing the core job itself, which we relied on in production and staging, or creating a new job, renaming it to something like foo.tmp, making and testing changes there, and still having all the alerts and everything set up correctly and working.

So, like I mentioned, we've been using Pipeline since December of 2016, and I'd like to share what that original file looked like; I will make this one bigger here. It was, of course, a lot of inline Groovy. Right here you can see the capabilities: these are what we send to Sauce Labs, where we run our tests in the cloud. We write a lot of our capabilities for Selenium, because we're a Python and Selenium shop; we write those into a JSON file, and then we send that JSON file, along with the capabilities above, to Sauce Labs, where it runs the tests. Tox is our virtualenv-based test runner; it sets up the environment and abstracts a lot of things, like the additional pytest options, and sets the environment and things like the AnsiColor plugin for the build wrapper. But you can see here: there's a lot of try, a lot of catch, a lot of wrap, throws, finally, a lot of inline classes. Not to say it was nasty, because it was still a lot cleaner than putting everything into job configs, but it was a lot of hand-rolled stuff, and we had to multiply this across sometimes 20 projects. We've scaled down the number of projects, just because we've thankfully had success moving a lot of them into the development repos, which is great. But you can see here, the biggest one is probably the IRC notification: this was all hand-rolled, because at the time the IRC plugin and the Instant Messaging plugin did not yet have support for Pipeline and Declarative Pipeline. So we had to hand-roll all of this, and do it in stages.
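A rough sketch of that pre-declarative shape: scripted pipeline, hand-rolled try/catch/finally, and manual notifications. The step details are illustrative, not Mozilla's actual 159-line file:

```groovy
// Hypothetical condensed version of the old hand-rolled scripted pattern.
node {
    try {
        stage('Test') {
            wrap([$class: 'AnsiColorBuildWrapper']) {   // colorized console output
                sh 'tox -e py27'                        // Tox drives the pytest/Selenium run
            }
        }
    } catch (e) {
        currentBuild.result = 'FAILURE'
        throw e
    } finally {
        junit 'results/*.xml'
        // the hand-rolled IRC notification lived somewhere around here, before
        // the IRC and Instant Messaging plugins supported Pipeline
    }
}
```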
And then there's a giant try right here to make sure that all of the variables and all the pytest add-opts were actually sent to tox and actually worked, and if not, then we throw a failure. And finally, we write out the results and we stash the environment so that the consumers of our builds upstream are able to take them. But it's a lot of try/catches. We're a Python shop, as I mentioned, largely because our development teams are, so we wanted to share expertise and help there. A lot of the Groovy stuff was unfamiliar to those of us coming from PHP or something else; a lot of the Java syntax, having to match braces, and some of the callbacks were kind of confusing to us. So that's what we looked like. And this was for one project: 159 lines of code for one project. As we moved into declarative, which you can see here, we've scaled that down by I think 60%, and yet we've actually increased functionality. We still have largely the same things; we still mention and specify the capabilities. But everything in the pipeline right here is very clean. There's no nesting where there doesn't really need to be. Everything is declarative. And we can still set overrides for some of the tox things, like tracebacks, color options, and which driver we're using, in the variables. But you can see here as well, we've now got stages, so we can do parallel execution across nodes. We still write out the capabilities. And in post, regardless of whether the build succeeds or fails, rather than doing try, catch, and throws, we just use the post step to always write out the results file with JUnit; ActiveData and Treeherder are Mozilla-specific things. And in the second post, on failure we always send mail, but on a change of state, so if a build goes from success to failure, we send an IRC notification, and if it goes vice versa, from failure to success, we send an IRC notification as well. And you can see those showing up here in #fx-test-alerts. Neat. Steven, can you go back there for a second to your declarative and scroll just a little bit? So you have a post section here, and that first post that we're seeing there, that's for the test stage. So if the test fails, it'll always archive the results, do the JUnit steps and so on and so forth. And the second post here handles it if any part of the pipeline as a whole fails, right? Yes, correct. Okay. Go ahead, sorry. I think it was you who actually helped us with that, or maybe it was Robert, but we had a lot of issues with the branching and figuring out where some of these calls should be in terms of scoping, so we appreciate the help from the project. Oh, totally.
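For contrast, here is a minimal sketch of the declarative shape just described: a per-stage post that always archives results, and a pipeline-level post that mails on failure and notifies IRC on a state change. Step and address names are assumptions; `ircNotify` stands in for whatever the shared library actually provides.

```groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'tox'   // same tox entry point, no try/catch needed
            }
            post {
                always {
                    junit 'results/*.xml'        // publish results, pass or fail
                    archiveArtifacts artifacts: 'results/**'
                }
            }
        }
    }
    post {
        failure {
            mail to: 'team@example.com',
                 subject: "Failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL}"
        }
        changed {
            // fires on success-to-failure or failure-to-success transitions
            ircNotify()   // hypothetical shared-library step
        }
    }
}
```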
The other question I have is: before declarative, you had a whole big long bunch of try/catches and script, something that really looks like Groovy script, and now you have a Groovy syntax where you're configuring steps. Did moving to declarative make it so that non-release-engineer type people could make changes to this? Or do you find you still only have a few people touching this code? Oh, no, absolutely. The ability for our engineers, whether they're manual testers working with some developers or ops, or another test automation engineer, to ramp up a new project: they literally copy and paste this project, change the parameters for the versions of Firefox that they need, or the platforms, change the timeout value for the build duration, or change maybe some of the concurrency options in here, so that it runs pytest with, say, 10 browsers instead of five. Most of the time they don't need to change a whole lot except for the project-specific things like the variables. So it's very, very clean. And when we set up a Jenkins job, we don't really need to clone from a previous project anymore to get the default configs; that's largely in the Jenkinsfile. Even things like deleting the workspace used to be a problem for us, because sometimes we'd forget to specify that in the Jenkins job config itself. So now we've got everything standardized. We're moving to Docker as well, to add yet another needed level of abstraction, so that developers will only need to run the Dockerfile and then they get all of this for free. But the core config is here, and it's very easy to change. Excellent. I guess one other question. You were saying this is all standardized now for your team, and you're going to use a Docker container that'll hold this. If you could put more of the declarative into the shared library, that would be cool too, right? Yes. Other than some of the fixes that we took to use declarative and the pipeline model definition leading up to the 1.2 release, from Andrew Bayer and from Robert, we really haven't come back and revisited this since probably February or March. So there are a lot of new features that are either now polished or just weren't available or ready for us to consume back then. There's a lot of cleanup we can still do. Right, well, that's always the case, right? New features come out over time, and every six months or so you go back, see there's this new set of things, and iterate to come up to speed with those. Yep. So I showed you the before, before declarative and before shared libraries, and I showed you the after. What I didn't show you was the shared libraries themselves, and I don't think I'll go through that exhaustively here, just due to time. But you can see here, if you look at this... and I think Dave, probably Dave Hunt... sorry, let me back up just a bit to show you this, and I'll include this link later. Dave Hunt is on my team. He's a senior test automation engineer, and he was the one who largely spearheaded taking the pipeline code and working it into our Jenkins instances and our projects; I shadowed him and adopted some of it, but he really did the main work. That's davehunt.co.uk, and he's got a whole bunch of blog posts chronicling this, from November, actually, dealing with the IRC notification plug-in, all the way up to March, basically. So I suppose I would need to ask him, maybe, but how are you guys testing your shared libraries? I don't know if you were able to see Oleg's presentation that he just did; did you get any of what he was saying? I mean, what do you guys do?
So yeah, testing is something that... looking here, we've got an open issue for one of our biggest ones; it's called Service Book. Service Book is a website we wrote in Flask where we basically put the projects and the repos and centralize everything our test teams are doing, so that ops and developers and managers can see where the results are stored, all that kind of stuff. We actually wrote a REST API inside of our Service Book that our shared library now calls, so we need to document and test that. And just recently we replaced a whole bunch of hand-rolled HTTP GET and POST requests, for getting things from the Service Book API endpoints into our code and our tests, by using the pipeline-enabled HTTP Request plug-in; we've got versions 1.7 and 1.8 that now use that. So backing up, though: yes, we still have a huge need for testing, linting, and coverage reporting. Dave did write a few tests. One of these here is the shared pipeline test for the HTTP pipeline. It's mocked out quite a bit, but it does pull in the Groovy and then has some expected URLs and checks some of the values there. Very rudimentary, I have to say. Because it's all in code, and our developers are also now writing pipeline for their operations, if we mess up on anything we can always fix it with a pull request or a revert. So even though we've gotten to this point and we're pretty stable, the way we're moving forward is with the other teams, so we need to bring the tests back. We'll be taking a look at the other parts of the project, specifically things like testing the pipeline code locally, because we used to do that inside the Groovy script in the pipeline editor. Now we do that largely with our own Jenkins instances: committing to SCM, pulling down the Jenkinsfile, and running the test that way. Those are local Jenkins instances? Yes, local Jenkins instances. Similar to what Oleg was doing. Yeah, okay. So, you were talking about declarative 1.2, which just came out, which Andrew Bayer and others worked on. I noticed that much of your pipelines are pretty linear at this point. Are you looking at other ways to speed things up, like adding parallel stages, that kind of thing? Yeah, that's one of the big things we're looking forward to: the parallel stages. In addition to the post-build steps that we got from declarative, I would say the parallel stuff is probably going to give us the most win. Our tests are pretty quick right now, and I think having Docker and cached images will help as well, since, as I mentioned, we still clear out the whole workspace; Docker will help a little bit with some of the caching. But yeah, we have a lot of executors on hand, and we want to be more efficient in using those inside of our Jenkins instances. Cool. So, how long do your builds usually take right now? Well, here are some of the durations. We call these ad hoc, but they're basically a nice way for us to run configuration changes for remote teams, from Jenkinsfiles or from the project itself. I can sort here; if you run them by themselves, the longest-running one is 11 minutes. This is a pretty beefy instance; I'm not sure if it's an M3 medium or maybe an M4 instance in Amazon, and we run CloudBees AMIs. But yeah, we haven't really taken advantage of the parallelization yet.
So I expect this would be at least twice as fast, if not three times faster. The way we write our Selenium tests, they're atomic, so they don't depend on any other test data; everything is self-contained. Running them in parallel will not affect any of the other tests in a given test suite. So basically, as many browsers as we can throw at Sauce Labs, and as many Jenkins executors on the nodes as we can have, we'll do that. We're probably looking at a couple of minutes, maybe five minutes tops per build, max. Nice, very nice. So yeah, the shared library gives us the notifications. It gives us the post-build steps. It gives us a lot of the credentials, the environment variables, and all of the pipeline-specific options, like making sure every project has timestamps and has a build timeout, maybe with different values. It uses ANSI color for xterm; we use that because it helps us with our Python stack traces. And the shared library itself is just very, very clean. Every project submits to the same endpoints for S3 buckets, and that's all configurable for dev, stage, and production. We submit to other Mozilla-specific things like ActiveData, which basically parses the raw results and graphs our failures and passes over time. So we've got, not big data, but pretty close to it, running to show us where our failures are, in terms of the suites and inside of the tests, with specific endpoints as well. And then Treeherder is one of the first things that Dave Hunt hooked up. Treeherder is important to us because previously these were just web automation tests, and the Firefox client build team and developers didn't really care about them. But as we moved more into Marionette-backed WebDriver and started writing tests for chrome, not Google Chrome, but the browser chrome, we started needing and wanting to send results for client builds as well as web automation builds to the same place, so we can get an overall picture of the health of our test ecosystem. And yeah, Service Book is the one I mentioned that we've implemented just recently. We now literally have a Jenkinsfile for projects that use Service Book that is three lines, so we've cut things down by quite a bit. I assume that over time we'll just keep paring down and tidying the Jenkinsfiles, moving more things into the shared library where they fit, and increasing our build speeds. Our reliability has gotten a lot better too: there are no config errors that don't come out of either poor reviews or messed-up check-ins, rather than someone haphazardly hitting the wrong button in Jenkins. That's really great.
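As an illustration of the three-line Jenkinsfile pattern just mentioned, the consuming project might contain little more than a library import and a call into a global variable, with the standard options applied inside the library. All names here, `fxtest` and `fxTestPipeline`, are hypothetical stand-ins, not Mozilla's actual identifiers.

```groovy
// Jenkinsfile in a consuming project: essentially three lines
@Library('fxtest') _
fxTestPipeline(project: 'example-tests')
```

```groovy
// vars/fxTestPipeline.groovy in the shared library (sketch): applies the
// standard options (timestamps, build timeout, ANSI color) for every project
def call(Map config = [:]) {
    node {
        timestamps {
            ansiColor('xterm') {
                timeout(time: config.get('timeoutMinutes', 60), unit: 'MINUTES') {
                    checkout scm
                    sh 'tox'
                }
            }
        }
    }
}
```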
How are you creating the documentation for what's in your shared libraries? Every time we add a new part of the shared library, we just do a pull request for it. You'll see here, sometimes I or Dave will forget to do a release note as we cut a new release in the GitHub branch, so we'll come back and fix that. But basically, with every release we put into the readme not only the version history, but the dependencies on the new plugins we add. Like here, we added the HTTP Request plugin. As I mentioned, here we forgot to update the core documentation, so we did that; I believe we should have done it here. And is this where new developers at Mozilla would come to learn about what steps they can use in Jenkins, or do you have another page for that? No, right now the readme for the FX test Jenkins pipeline is the canonical truth. Sometimes we're a little slow to update it, but we're actually going to move this to Read the Docs, so it has better search capabilities and is also still in SCM. But this is the source of truth, and we make sure to add real-world examples from our own usage and encourage people to ask questions and extend it if they need to. I'm sorry, you said, what was that, Read the Docs? Read the Docs, yeah. Okay. So we have an issue open to move all of the readme for our shared library to Read the Docs, and then the library itself will be even smaller in terms of just looking at it, but we'll be able to pull all the examples in from that. Excellent. All right. So I think that's largely it. I can run a build really quickly, but yeah, that's it. Are we out of time? Is there anything? No. Yeah, our presentation's pretty short. Sorry, I will post links; I have everything here in a Google doc. I'll put it in a snazzier format and be sure to share the links back to the Meetup page, and probably also post them in a nicer format to the #jenkins IRC channel. But sure, that's good. The question I had was actually a little bit of a different thing, which was: have you looked at Groovydoc? Like the Javadoc format, Groovydoc or other things like that? Ah, like Doxygen, self-generating documentation from code? Yeah, exactly. Yes, Dave has looked at that. It comes down to a time thing; there's a lot he'd like to do, but yes, that is something we would very much like to do as we go forward. So we could actually use help with this. If you're into Selenium and Python, or even if you're not, but you find some of this abstraction helpful and would like to tackle some of the open issues, please feel free; we welcome contributors. Sorry, you cut off a bit there. Sorry, I was waiting for the build. Okay. Yep. And I should mention too, since going to declarative pipeline, some of our newer projects, which have Docker, a run file, and a Jenkinsfile pipeline, are callable by ops themselves. We use Jenkins Enterprise, so they have the ability to set a config in their Groovy to run our tasks remotely, basically take the exit code, zero for success, one for failure, and then determine in their own pipeline and Blue Ocean step whether to deploy to the staging instance, deploy to production, or take a closer look at our results. So we have both the flexibility to change our tests and our configs, and also to be integrated into the DevOps pipeline from dev and from operations, which is really the whole point of going config-as-code for us. Excellent. All right. Thank you, Steven. Thank you. Moving right along. I'll remind people that they can ask questions in the IRC channel, and several people have been doing so, so definitely do that; we're watching that channel and passing those questions along. And next up we'll have Mark Waite giving us some tips and tricks. Hi, I'm Mark. Let's go. Yeah, there you go. Do you see it? Yes, I see it now. Thank you very much. Hi everybody, I'm Mark Waite. I maintain the Git plugin, and this is some ideas and concepts around dealing with mistakes we may have made.
For instance, sometimes you discover that people are checking large binaries into your Git repository. Those large binaries live forever, and the repository gets larger and larger. Large repositories have unique problems you have to deal with; it really isn't healthy to have large repositories, but you've got to find a way to live with them. So there are some key concepts you can use to frame your efforts and decide how best to live with a large Git repository. Some of the things you can do help by reducing the load on the remote, the central repository. Some help on the Jenkins master, which does certain things for you like polling and caching repos for pipeline. Other things help on the Jenkins agents, where you've got the local repository copy and the workspace. Each of those three areas has different things that can help, and those things may apply in different ways in your environment, so it'll help to understand which things apply where. So first up, what can we do to reduce the load on that central server, the Git remote repository? The remote repository has all the history in it, but it only has to send the history that's requested. So one thing we can do is find ways to ask for less history. The repository also includes all the large files, but we can find ways to ask for only a subset of them. Some of the techniques available: we can use a reference repository, which provides a local cache that can ease the load on the central repository. We can use a narrow refspec; a refspec is a concept Git gives us to ask for less breadth. We can use a shallow clone, which is a way of asking for less depth. Or we can enable large file support. We'll discuss each of those techniques in a little more detail, to give you an orientation on how any one of the four, or all four together, can help reduce the load on your central repository. So, a reference repository is a local copy of the remote repository, and Git can reference that existing repository rather than downloading the referenced data again. Imagine, if you will, a spot on your file system where you put a copy of your repository, and everybody else on that computer points to it instead of downloading it again for themselves. Big benefit: it reduces network data transfer. Another benefit: it can reduce how much local storage you need on the machine using it. Be warned that reference copies are not automatically updated, so as your history grows, that reference copy, unless you're doing something special with it, doesn't keep up. The other warning is that because everyone creates pointers to it, if you destroy a reference repository, you have damaged everything that references it. So don't destroy reference repositories. So let me just jump in and ask you there about the reference repository: it's the underlying image, so if you have a bunch of jobs on that same machine, they'll all use that one local copy, saving you space. That's how the space is saved, right? I'm sorry, Mark, you're actually muted; we're getting some echo, so you have to unmute yourself. I see, you muted me, sorry. I'll turn down my speakers.
So, a reference repository is a local copy of the history, and each of the jobs or workspaces sitting on that disk can point to the history in that reference copy. By pointing to that history, they get the disk space savings. Did that answer your question? Yes, it did. Great, thanks. Okay, so in addition to a reference repository, you can narrow the breadth of the information you request from the remote server. A refspec is Git's way of saying which things on the remote server I want to bring to the local side. Good online documentation can tell you how to describe one and what rules govern what you can and cannot do in a refspec. If, for example, you only need one branch in your build, a narrow refspec can tell the remote server to give you the history for exactly that one branch, where the default would give you the history for all branches. You can reduce local repository storage, and you can reduce data transfer. However, there's a negative: because you asked for exactly that branch, if you're doing comparisons inside your job between one branch and another, and you didn't ask for the refspec for that other branch, it won't be there. The other challenge is that refspec patterns are limited. You can't use general-purpose wildcards; you can put a wildcard at the very end, right after a slash, but that's about it. There isn't any partial pattern matching in a refspec pattern. So refspecs let you limit the breadth of your question to the remote server, and reference repositories let you avoid bringing down data you've already got. The next way you can help is to limit not the breadth of the question but the depth of the history you retrieve. A shallow clone can limit how many entries of history you bring back from the remote server. If your job really only cares about building the current thing and does no operations with history, you can just ask for a shallow clone with depth one. It will reduce your local storage, it will reduce the data transfer, and it will keep your job running. However, there are downsides to shallow clone: for instance, you can't merge from shallow-clone work, because it may have skipped changes in bringing them down to you, so it doesn't have a perfect representation of history. Change reports can be incomplete, so if you rely on reading the changelog of a build, don't use shallow clone. Also, shallow clone is only available in command-line Git, so if you're using JGit, you can't use it. So shallow clone gives us depth control, narrow refspecs control breadth, and then there's one more tool in your arsenal to reduce the load on your remote server.
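To make those first three techniques concrete, here is a sketch of how a reference repository, a narrow refspec, and a shallow clone might be combined in a pipeline checkout step using the Git plugin. The URL and the cache path are placeholders, and the reference repository is assumed to have been seeded on the agent beforehand.

```groovy
checkout([
    $class: 'GitSCM',
    branches: [[name: 'refs/heads/master']],
    userRemoteConfigs: [[
        url: 'https://github.com/example/big-repo.git',          // placeholder
        // narrow refspec: fetch only master instead of all branches
        refspec: '+refs/heads/master:refs/remotes/origin/master'
    ]],
    extensions: [[
        $class: 'CloneOption',
        honorRefspec: true,          // apply the refspec to the initial clone too
        shallow: true, depth: 1,     // shallow clone: current commit only
        noTags: true,
        reference: '/var/cache/git/big-repo.git'  // pre-seeded reference repo
    ]]
])
```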
Some Git implementations have an extension which allows them to store large files outside the repository. That includes GitHub, that includes Bitbucket, that includes Gitea; there are many that support Git LFS as a standard extension. It's a good way to do enterprise-scale large file transport. LFS is high performance, it's very actively developed, and it dramatically reduces the local repository storage requirement. However, there are downsides: you have to install the LFS extension on every agent that will run Git LFS, it requires extra support from the hosting provider, and, at least in the earlier implementations, there's no support for SSH; it has to use HTTPS. The Git plugin itself also does not yet support submodules with LFS, and if you're working with the history of a large file, you may have to invoke separate LFS commands. Even with all that, it can dramatically reduce the data transfer, because what LFS does is transfer the large files only for the current version, instead of retaining the whole history of every large file inside the Git repository itself. Now, Mark, when you say the Git repository, you're talking about your current one; the way that large file support works is that it actually has another repository it stores those files in, right? That's correct, yeah, very good. You've understood it exactly. There's a separate area, if you will, on the remote server which hosts these large binary files, and that separate area is kept safe and backed up. GitHub's very good about it; Bitbucket, Gitea, all of them take very good care of your large files, but they're not stored right in the Git object store. And effectively, behind the scenes, what's probably going on is some kind of shallow clone of those large files. Yeah, that's a good way to think of it conceptually: give me the most recent copy. Okay. Great. Go ahead, sorry. Okay, so now let's shift our focus away from helping your central server. What can we do to help the Jenkins master? Because these large files and large Git repositories create load on the Jenkins master as well. The master typically does pipeline scans of repositories; it also does polling and acts on notifications, to check whether there are changes in the central repository. Each of those things may require a copy of the repository on the master. Therefore, if you've got large repositories, you can use reference repositories as described earlier, or you can use large file support, and they'll both help your master in addition to helping the remote. So even with the best practice in Jenkins of generally not executing anything on the master node, you can still help the master with reference repositories when dealing with these large files. Now, the agent is where you can get the most significant benefit from these techniques. The agent is the one responsible for populating the workspace by checkout, and it's the one that builds the job; it does the work. So the key things that can help there: narrow the refspec, so you ask the central repository for less and store less in your local workspace. Use shallow clone to limit how much history you get when you don't need full history. Reference repositories can be a major savings on the agent; as Liam described earlier, if you have a repository used in hundreds of jobs, you can have those hundreds of jobs pointing their history at a single copy on disk instead of keeping hundreds of copies of the exact same history. Large file support can also be a big help for agents, because you don't carry around the full history of all your large binaries. There's one additional option that is agent-specific, and that's sparse checkout. Sparse checkout gives you a way to say exactly which directories of the Git repository to check out. Let's say you've got a large tree of files in a single Git repository, and you're only working on a small subset of them under one subtree.
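A similar sketch covers the remaining two options: pulling LFS files for just the checked-out revision, together with the agent-only sparse checkout option being introduced here. The URL and paths are placeholders.

```groovy
checkout([
    $class: 'GitSCM',
    branches: [[name: '*/master']],
    userRemoteConfigs: [[url: 'https://github.com/example/big-repo.git']],  // placeholder
    extensions: [
        [$class: 'GitLFSPull'],   // fetch LFS-tracked files for this revision only
        [$class: 'SparseCheckoutPaths',
         sparseCheckoutPaths: [[path: 'src/module-a/'],   // only these subtrees
                               [path: 'docs/']]]
    ]
])
```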
You can use sparse checkout to check out exactly that subtree. Now, this doesn't help the remote, and it doesn't help the master, because it's a purely agent-local operation to reduce work on the local agent. But it can help in organizations with a broad tree where each individual job only needs to work on a narrow part of it. Any questions? Just a second, Mark, let me see if there were any. Git internals can really be hidden territory, but this is the kind of thing that definitely helps out with Git and certainly with your master, and it keeps GitHub from limiting your bandwidth, right? Correct. By reducing the data volume you're asking for, you reduce the chances that your hosting provider thinks you're using too much bandwidth or running a denial-of-service attack from your machine. Cool, excellent. Thank you very much. Thank you. And now we'll move on to Keith Zantow, who will give us a presentation on visual pipeline creation in Blue Ocean. Sure thing. So, like Liam said, what I'm going to talk about today is an overview of a semi-realistic, real-world pipeline built with the Blue Ocean Pipeline Editor. Just a spoiler: I'm cheating a little bit compared to Jenkins World, because I'm showing you what's out in beta now. But we'll go over this and see how things work. Really quickly about me: I'm a Senior Software Engineer at CloudBees and a Blue Ocean core contributor. I've been working on the Pipeline Editor for some time; you can check things out on GitHub. So, to kick things off: what do we want to do with this particular pipeline? To simulate what a real-world pipeline might look like, first, it's going to have multiple components: a Java back end and a Node-based front end. We're going to have multiple tests across a couple of popular browsers, Chrome and Firefox. We're going to have a QA step as well; in a real-world situation, some quality assurance gate gives the sign-off, okay, this is good or not, based on actual end-user testing. And then, of course, some kind of deploy step. And we're going to use the visual editor to build this whole thing. So if we go ahead and just... I just want to jump in here real quick: you don't have to rush. Yeah, yeah. Just go at your regular speed, it's cool. Yep. So first things first, we're going to start with an empty repository, and let's just use Git for the time being. I have an empty repository that exists here, and I've already set it up to use this SSH key. So once I create this, I should be prompted to go ahead and make a Jenkinsfile, and we'll see that here. Typically in the creation process, if Jenkinsfiles are found, they're automatically scanned and that sort of thing, but in this case none exists, and that's what we're going to do today. So let's click the create pipeline button. Once we're there, you're basically presented with an empty pipeline, and starting off, we don't actually have anything that's valid; if you go ahead and try to save this right now, you'll see there are some validation errors.
So we need to do some things to fix this. First things first, we've got to add a stage. We're going to have a bunch of different stages, but the first stage, I think, we're going to call server: we're going to build something on the server side. This needs some steps, so let's give it a step. And just to show a few different things here, I'm actually going to run a Maven command. That's thing number one. Now, to get the Maven command to work, what we basically need is a container that has Maven in it. So we're going to add Docker here, and we're going to take the Maven image; there's a particular one I've used before, 3.5-jdk-8-slim. So we basically get Maven 3.5 with JDK 8, in the variant that's kind of small; I think it's a hundred megabytes or something. So after we do this build, and we're just simulating a build in this particular step, I'll show some other stuff in a minute, what we want to do is stash the contents. We're going to store the war file that results, so we'll just stash the war. Okay, so that's thing number one. Keith, could I jump in here for just a second? Absolutely. If you click on the stage there, that settings part at the bottom, that's new, isn't it? Yes, like I said, I'm cheating just a little bit; this is out in beta today and will be in Blue Ocean proper shortly. Okay, cool. But again, this is what to expect. So there we go with the server. The next thing we want to do is build a client. I have a group; I'm going to call this whole group build, and I want the stages in it to run in parallel. So I've just clicked the button here and it's created another parallel branch. I've already added something that builds the server, so now I'm adding something that builds the client. Again, we're going to do a shell script, so I'm going to paste something in that I have. You'll see an npm call here; this particular script just installs React from npm. Similarly, we want to make npm available, so we're going to use a different Docker image, and this one is node:6. Okay, that's fine for now. Let's try to save this; hopefully it saves just fine, and we'll see how it looks when we run it. All right, let's see what we've got. We've got a couple of different things running in parallel: a client build and a server build. And what we see with the client build is that we actually have an error. So this is kind of an interesting thing. You'll see what happens is we have an issue: we can't access this .npm directory. The reason is that we don't have the right user. It's creating a Docker container, but it doesn't have the right user for the client build. So what we need to do is go into the settings here and give it the args to run as the root user. Many of these Docker images require that, and you'll see that again later. If we give it '-u 0', that'll give it the root user. So let's make sure that works and go ahead and run that as well.
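The Jenkinsfile the editor is building at this point would look roughly like the following sketch. The image tags match the ones mentioned, while the shell commands and stash patterns are assumptions standing in for what was pasted on screen. Note that the parallel-stages syntax here is the Declarative 1.2 feature the beta depends on.

```groovy
pipeline {
    agent none
    stages {
        stage('Build') {
            parallel {
                stage('Server') {
                    agent { docker { image 'maven:3.5-jdk-8-slim' } }
                    steps {
                        sh 'mvn -B package'                       // simulated server build
                        stash name: 'server', includes: 'target/*.war'
                    }
                }
                stage('Client') {
                    agent {
                        docker {
                            image 'node:6'
                            args '-u 0'   // run as root so npm can write its cache
                        }
                    }
                    steps {
                        sh 'npm install react'                    // simulated client build
                        stash name: 'client', includes: 'dist/**'
                    }
                }
            }
        }
    }
}
```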
And you'll see as we watch this, it actually seems to be saving all of the things we need, and if we look at the end, we actually have a successful build here. All right, so let's keep adding steps to make this a bit more realistic, because right now it's building some things but certainly not doing anything with them. Similar to the server, we want to add a stash for the client, so we can stash a few things as well. What we're going to do is save everything in the dist directory; we'll use those files later, but for the time being we'll just leave that. So after the build happens, we want to do some tests. In this case, the first test, I think we said we wanted Chrome. We've got another Docker image to set up for this one, so we're going to use the Selenium standalone Chrome image. We'll add a step; in this case we're going to skip some of the actual tests, just for brevity, but go ahead and do that. We'll add another Firefox stage; this is our testing group. And similarly, we're going to run this in a different Docker container, another Selenium one. All right, so again, let's test this, make sure things are good and we didn't mess anything up, and save and run it. So the server build passes, of course. The client build is passing, doing some npm installs. The other builds aren't really doing much, so of course they're passing as well; obviously what you'd want to do is run all of your real tests in parallel, but again, we're keeping things short for brevity's sake. All right, so after all our tests pass, this is where it gets a little bit interesting. What we want to add next is a step that's kind of a gateway. In a real-world scenario, you've often got a QA team whose responsibility is to manually test things, to make sure your automation suite didn't miss anything obvious and nothing else is broken. So we want to set this up so that we actually have a QA step, and QA is able to approve things. Let's go ahead and make a QA stage. The first thing we want to do here: back in the server and client stages, if you remember, we stashed some files, and these would certainly be used with tests as well, but we stashed the server files with the war and the client files with the dist directory. So what we want to do for QA is get all those files first. Let's go ahead and restore: we want to restore the server, and we want to restore the client as well. Next, we need to take those files and deploy them somewhere. In this particular case, I think we're going to use a Tomcat server, so we'll put a script together that puts those files somewhere Tomcat can deal with. Stepping back a moment, we'll again use a different Docker image here, and this particular one is going to be Tomcat 8. Again, we need to run everything within the Jenkins environment as the root user, because that's how this Tomcat image is set up. And the other thing we need to do is expose a port, so that we can actually see what's going on within the server.
So Tomcat's going to run under port 8080, and we're just going to expose it on this local port here, 11080. Going back to our steps: we have an application directory, and this is where Tomcat's webapps live. For those of you familiar with Tomcat, there's a special directory, ROOT, all uppercase, and that is the root context, so you don't need any context path or anything like that. So we're going to put our client files in there. We're going to take our server files; recall we've unstashed something, in this case the target server war, and we're going to put that in its own server directory, so it'll be its own web app. And we're going to make a brand-new ROOT directory that we dump our client files into. What we stashed there was the dist directory, so we'll copy everything from the dist directory directly into the Tomcat root. And last but not least, with this container, we're going to start Tomcat. So there you see we've got something going on. Let's run this and see what happens. It's going to go through the majority of the things you've seen already, but we'll have a new stage at the end, the QA stage. So let's see what we've got so far. All right, right now it looks like everything passed, and that's good: we got our scripts right, we got Tomcat started. And based on all of our settings, we forwarded things to port 11080. So let's see if we can access the server. Well, we can't, obviously, because it's not running anymore. It started Tomcat, destroyed the Docker container, and everything went back to normal. So, in the situation where you've got a QA team that needs to verify things before they move on to the next stage, you can very easily use the input step to pause things and get a yes-or-no answer. So after this shell script here, and keep in mind that everything happening within this stage is running inside this Docker container, we're going to put an input step in. This one is for the QA people, to say whether they think things are okay or not. Okay, so let's see what happens now. Again, it's going to go through a build here, and we've got a QA stage that it will get to eventually. As soon as it gets there, you'll see that this input step has done something interesting in this scenario. The other thing I'll say is that this isn't always what you'll want to do, but it's an illustration of what's possible with pipeline. So we'll see things went blue here, and we've got this wait-for-interactive-input state: there's a Go button that we added, and an abort if things are bad. And because we're paused within a stage, within a Docker container essentially, you'll see that I can go and hit this site here and test out that exact build. So what's happened now is that with this pipeline, I've done a build with some kind of a client and some kind of a server; it's passed the tests we defined on the browsers, and it's made it to a QA stage. And this QA stage has the exact binaries that were built during the build step, because we stashed them and restored them here. At this point, let's say something is wrong and I don't really want it to just say hello here, so I'm going to go ahead and abort that.
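Put together, the QA stage just demonstrated might look something like this sketch: unstash both artifacts into a Tomcat container exposed on the local port, lay the files out under webapps, start Tomcat, and pause on an input step. The directory layout and file names are assumptions based on the description above.

```groovy
stage('QA') {
    agent {
        docker {
            image 'tomcat:8'
            args '-u 0 -p 11080:8080'   // root user, Tomcat exposed locally
        }
    }
    steps {
        unstash 'server'
        unstash 'client'
        sh '''
          APP=/usr/local/tomcat/webapps
          mkdir -p $APP/ROOT
          cp target/*.war $APP/server.war   # server app under its own context
          cp -r dist/* $APP/ROOT/           # client files at the root context
          catalina.sh start                 # start Tomcat in the background
        '''
        input message: 'Does this build look good?', ok: 'Go'
    }
}
```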
You'll see, of course, I just terminated the build in that case. But we'll add one more stage here: in the case that QA says this is good, we want to deploy it to some production system. So we'll add another stage after the QA stage. Similarly, we want to restore the server and the client, and we'll add a bit of a shell script here as well. This is, again, not one that's going to deploy anywhere in particular, because this is really a demo, but I'll show you what it's deploying, what it's got. And then, just for fun, we'll add another step that gives us a message. Now, this particular stage doesn't really need to run in any particular Docker container. If we had a specific shell script we wanted to use, or we wanted to make sure it runs on Ubuntu Linux or some specific environment, we could certainly set up a Docker image, but in this case we don't care too much. So let's save this; things aren't going to be a lot different from last time. We'll see here, we don't have anything running in our QA Docker container at the moment; we've got to wait for a build to happen and for the tests to happen and that sort of thing. And while it's going on, you can see I've already pulled these Docker images, so this is going pretty quickly, but if this was the first time, it would go ahead and pull them. You'll see the shell scripts are giving us the things we expect: we've got Maven 3 in this particular Maven container, we've got node in this other container, and we had a successful install of React. Again, things have passed the tests we defined, and we're sitting at the QA stage. So going back here, we certainly still have this thing running, and a QA person can decide, hey, maybe we want to promote this to staging; that's what we can do from here, or abort it. In the case we abort, let's just do that for now: obviously things fail, and it doesn't reach the deployment stage in that case. But let's rerun it, and luckily we don't have a very long test suite. So here we are again. In this case, QA will look over here and say, okay, this is good; whatever changes were introduced, I don't see any problems, and I want to promote this. Once they click the okay button, the pipeline will proceed. Maybe in an actual real-world scenario you don't deploy immediately to production; maybe there's a staging environment where you have a copy of real data, if you're running a SaaS or something like that. But in our small example here, we've been able to build a couple of different components, run them through testing in parallel, stop at a QA stage to verify that our build is good, and let the business essentially decide that it's time to go to production. And you'll see here what we've deployed: again, we stashed the exact contents of the builds, between the client and server, in this pipeline, and we've deployed the client's JS and index HTML and the server war, which don't really have anything in them in this example, but that's what we're looking at. So that's great, Keith. So that's like a real-world example. Exactly, yep.
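The final stage he adds would be along these lines: restore both stashes, run a stand-in deploy script, and echo a message. Nothing here targets a real production system; the shell command is a placeholder.

```groovy
stage('Deploy') {
    agent any
    steps {
        unstash 'server'
        unstash 'client'
        sh 'ls -lR dist target'        // stand-in for a real deployment script
        echo 'Deployed to production!'
    }
}
```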
Yeah, so of course in the real world, you'd also be putting commit messages into each of those changes, right? Right, well, that's a good point. I'm just kidding, I'm just giving you grief. It's okay. Yeah, exactly: I'm editing the Jenkinsfile, not really the code so much, but absolutely, I should say what I'm doing there as I go. Right, exactly. And this is the beta version of Blue Ocean, like you were saying. When can we expect this to go into the main release channel? This is really great. Yeah, this is definitely going to be out in the 1.3 release, which is imminent. I can't promise any dates or anything like that, but I would say within the next few weeks you'll definitely see it in the mainstream release. Excellent, excellent, thank you. Sure thing. All right, so that's our last presentation for this online meetup. Thank you again. We'll be publishing a follow-up blog post on jenkins.io with the video and links. As a reminder, jenkins.io is a great place to catch all the latest updates from the Jenkins project, including the latest declarative pipeline 1.2, which includes support for parallel stages, which the Blue Ocean beta we were seeing today depends on and uses. For more up-to-date links and videos, follow @jenkinsci on Twitter, and also go to the #jenkins IRC channel. What else? Thanks to all our speakers who joined us today: Oleg, Michael, Thorsten, Steven, Mark, and Keith. Also, thanks to Alyssa for organizing another great Jenkins online meetup. If you're interested in joining a local Jenkins area meetup, check out meetup.com/pro/jenkins, or go to the participate page on jenkins.io; there's information on how to start your own meetup if there isn't one already in your area. All right, thanks. Thanks very much for watching.