Okay, hello to everyone who watches this video. Today we are doing a knowledge transfer about Custom WAR Packager. It's a special tool which allows you to package ready-to-fly Jenkins images as WAR files, as Docker images, or as Jenkinsfile Runner images. And we have four contributors on the call: me, Rik, Sladyn, and Christian. So just to provide some context... I forgot to share my screen, right? Okay. Just to provide some context, we have a number of Google Summer of Code projects this year. And one of the projects is related to a custom Jenkins distribution build service, so basically a way to build custom Jenkins distributions and to share these definitions. At the moment this project wants to be based on Custom WAR Packager. So this is the tool I'm presenting today, so that the project team can get more information about how it's organized, because most likely we will need some patches in this tool. Just a second, I haven't muted myself. Okay, just to provide you some history: Custom WAR Packager actually started as a test tool. We have a number of test frameworks in Jenkins, like Jenkins Test Harness, which is a unit and functional testing framework, and also Acceptance Test Harness, which is another integration testing framework, based on various technologies including Selenium and Docker. Both test frameworks take Jenkins as a WAR file: they are designed to basically start a WAR file, do some configuration, and get it running. And there was a long-standing problem for us when we were developing pluggable storage and other components. We would make a modification inside Jenkins, for example change the configuration of the system, but we still wanted to run a standard test suite. One way was to just write a lot of code, but the other way was to actually supply Jenkins WAR files as black boxes, so as to embed all the configuration and all the plugins we needed right inside the distribution.
And this is the way we have chosen, because it provides a lot of additional benefits for users. There was a blog post, which I'm currently screen-sharing, "Build Your Own Jenkins! Introducing Custom WAR (Docker) Packager". It was published in 2018, right before Jenkins World, and it basically summarizes how we build it. So Custom WAR Packager is a tool which is driven by a single configuration file. This configuration file allows you to specify the WAR file you would like, a set of plugins, and also additional pieces and properties you would like to define in Groovy hooks or JCasC, the Jenkins Configuration as Code plugin. And now there are more features. For example, for plugins you can not only use released versions; you can also take versions from incrementals repositories, for example from a pull request, or you can just build a component on your own using a commit specification, and Custom WAR Packager will produce the component for you. Same for the WAR file. It's not documented here, but actually you can also override components. Well, it's documented somewhere, maybe in the demos. Actually, let's open the repository and I will show it there. Okay, so we have the Custom WAR Packager repository, and here we have a number of demos which can also help. One of the demos, for example, is all-latest-core, which shows how you can override particular components in Jenkins. Let's take this one. Here you can see that I use a non-standard Jenkins core, where, for example, I override the Remoting library and I override Stapler with recent versions. So I can just build everything from master branches and have a kind of boilerplate for the entire Jenkins setup. And obviously I could test this boilerplate setup; that's one of the purposes of this repository. As for what I have been referring to: yeah, we have Jenkins Test Harness, we have Acceptance Test Harness, and also we have Plugin Compatibility Tester.
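To make the single-configuration-file idea concrete, here is a rough sketch of a packager config, loosely based on the demos in the repository. Field names, coordinates, and versions are illustrative, not authoritative; check the repository demos for the current schema.

```yaml
# Sketch of a Custom WAR Packager configuration (illustrative values).
bundle:
  groupId: "io.jenkins.tools.war-packager.demo"
  artifactId: "my-custom-jenkins"
  description: "Sample WAR with plugins and configuration baked in"
war:
  groupId: "org.jenkins-ci.main"
  artifactId: "jenkins-war"
  source:
    version: "2.222.1"
plugins:
  - groupId: "org.jenkins-ci.plugins"
    artifactId: "matrix-project"
    source:
      version: "1.14"          # a released version
  - groupId: "org.jenkins-ci.plugins.workflow"
    artifactId: "workflow-job"
    source:
      git: "https://github.com/jenkinsci/workflow-job-plugin.git"
      branch: "master"         # built from source by the packager
systemProperties:
  jenkins.model.Jenkins.slaveAgentPort: "50000"
groovyHooks:
  - type: "init"
    id: "initScripts"
    source:
      dir: "scripts"
casc:
  - id: "jcasc"
    source:
      dir: "casc.yml"
```

The `source` blocks are the key feature discussed here: the same structure can point at a released version, a Git branch or commit, or an incrementals build.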
So Plugin Compatibility Tester is also a tool, which basically allows you to take a set of plugins and run their tests against each other and against the Jenkins core test suite. Again, you can consume WAR files there as well. And that's why there are a lot of options for integration testing, because we actually use that in Jenkins test pipelines at the moment. So this is how it looks from the interface side. You just define a single configuration. Here is a demo for Configuration as Code. So here you have a number of plugins you install. All of these demos are a bit obsolete; it would be great to update them, but they should work. And here, for example, we inject the Configuration as Code YAML. You can inject the text directly, but here we just reference the directory by path. And here, for example, is the JCasC YAML, which basically uses the Jenkins Configuration as Code plugin to do some configuration. Here it's the authorization strategy, the security realm, the users, and other configurations. And we can add more using the JCasC engine. Same for Groovy hooks: you just provide the paths and everything gets packaged directly into the WAR file or into the Docker image, so it can be redistributed as a part of the package. Okay, any questions before we start digging into the code? No? Okay, that's good. So, one disclaimer I should make: right now there is the Custom WAR Packager 1.x stable line, and there is a 2.0 alpha where I plan to do a lot of breaking changes in order to provide better configuration, et cetera. And I do plan to release the final version of 2.0 before this summer is over. So if you need to change something in Custom WAR Packager during your GSoC project, you're not restricted by binary compatibility or by configuration compatibility, and we can apply more drastic changes. I deliberately keep that open, so if you need to contribute something, this option is available. Okay, just a second.
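The JCasC file mentioned above configures the security realm, authorization strategy, and users. A minimal sketch of such a `casc.yml` might look like this; the user id and the environment variable are made up for illustration, but the section names follow the Configuration as Code plugin's documented syntax.

```yaml
# Illustrative JCasC configuration bundled into the WAR.
jenkins:
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD}"   # resolved from the environment at startup
  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false
```

Because this file is packaged into the distribution, every instance built from it starts with the same security setup.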
So let's go to the code then, and how it's implemented. It's actually a multi-module Maven repository, and we have just a few modules here. This module is the main library; there's also the CLI, because Custom WAR Packager was initially designed to be used as a library and as a CLI tool. And it also provides a Maven plugin, so you can put it into your Maven builds. We have jenkinsfile-runner-test: it's basically a part of the Jenkinsfile Runner test framework which allows us to do additional verification of stability and to ensure that Jenkinsfile Runner packaging is still operational, because it's a lot more challenging than the standard packages due to many moving parts. Custom WAR Packager will basically rebuild the Jenkinsfile Runner repository for every custom distribution. Okay, I will open my IDE. Just a second. Why does it show Jenkinsfile Runner? Because I opened the wrong repository; here it is, under tools, custom-war-packager. So we may need some time. Is the screen big enough for you? Yes, it's big enough. Okay. The screen is a bit small here, so there might be some issues, but we can proceed. First, I'll start from the CLI module, just to show you how the interface works. In the CLI module there are basically just two files. The first file defines a lot of CLI arguments. For example, you can supply the configuration, you can supply Maven settings (for example, if you want to use a mirror, like we do on ci.jenkins.io), then various temporary directories, versions to be set, et cetera. There is also support for bill of materials; it's JEP-309, I believe, another YAML configuration format for defining plugins. There is support for the build environment as well. And then you can install artifacts if you want, and use a custom update center if you want to download custom plugins. We will talk about some of these options later, but it's just the configuration interface; most of the configuration still comes from the config file which I showed you in the beginning.
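The bill-of-materials input mentioned here is a separate YAML format for pinning the core and plugin versions. As a rough sketch, under the assumption that the format follows the JEP draft as I remember it (field names may have changed, so treat this as indicative only):

```yaml
# Sketch of a bill-of-materials (BOM) file as consumed by the packager.
apiVersion: "1"
kind: "bom"
metadata:
  labels:
    name: "my-jenkins-bundle"
spec:
  core:
    ref: "2.222.1"
  plugins:
    - groupId: "io.jenkins"
      artifactId: "configuration-as-code"
      ref: "1.39"
```

The idea is that version pins live in one machine-readable file, separate from the packager's own build settings.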
And the main file is actually pretty straightforward. It just processes all these options, creates a configuration for Custom WAR Packager, and then triggers the build. If you take a look at the Maven plugin, it's basically the same. It just provides the Maven plugin flavor; it has two goals, one for packaging and another one for build, and basically both of them just pass configurations through. So yeah, it's different code, but the logic is the same as in the CLI. Now let's go to the main module: the custom-war-packager library. This library is basically the tool itself, and it's pretty big compared to the other components. So what does it have? It has the configuration package, and there you can see that there are a lot of files. Why is it done that way? We use SnakeYAML to do the configuration mapping. Basically we define a number of these config classes and then everything gets converted automatically into our objects; that's why there is this structure. Okay, so here is our root element, for example. Here we define that we want some build settings; we want to define a package name, version, et cetera. We want to specify a WAR config, plugins, system properties, Groovy hooks, Configuration as Code — basically what you have seen at the top level. And all of that maps onto the different YAML files you can see here. There is also the essentials.yaml file, which has a somewhat different format. It is being used on ci.jenkins.io; it was started for Jenkins Evergreen testing. We still have some components which use this YAML format, but in principle it's just a bunch of configurations. So what else do we have here? I think we should start from the Builder class. The Builder class is actually the heart of the tool, and this class isn't pretty. So this class... I mean this project, sorry, I have to answer a call. Sorry again. Okay, so here you can see that the builder just takes the config. How many lines do we have?
Just 300, so it's not that big. What it does: build() is the main method. It just takes the configuration. We need a temporary directory, because some of our packaging steps create a lot of files during the build, and we use this directory as a working repository. It will create the build root and load some configuration files. There are multiple ways to supply configurations. The first way is to just provide the packager config YAML. There is also the bill of materials, and there is pom.xml. For example, that is what I do in my instances; I will open one. The second one here is my Jenkins server repository; it should be a good reference implementation. So here, if you take a look at the packager config, you can see that there are no plugin definitions. Why? Because I actually supply them through pom.xml, and pom.xml is managed by Dependabot and other tools. Here you can see that there are dependencies supplied by a Jenkins plugin BOM. So yeah, I don't really manage dependencies here, except a few. And this file feeds the configuration in my case. But the default way is to actually use the packager config. And right now, in Custom WAR Packager 1.0.x, you can actually combine more than one way of supplying plugins: you can supply them from both YAML files and pom.xml, you can do overrides, and other things. So it's something in scope for the next version. So after reading all of this stuff, and after reading the default configuration, we verify this configuration. We mostly check that we have a way to retrieve everything; let me just show it to you. So yeah, basically we check whether the JCasC plugin is included if a casc section is supplied, and we could have more configuration verification, but right now we don't check that. So the logic is quite straightforward. And as with many developer tools, if you misconfigure something, most likely your instance won't start up, with some awkward exceptions. So what do we do here: then we build the current plugins.
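Supplying plugins via pom.xml instead of the packager YAML, as described above, lets a dependency bot manage the versions. A hedged sketch of what such a pom.xml section can look like, assuming the Jenkins plugin BOM published under `io.jenkins.tools.bom` (artifact name and versions are illustrative):

```xml
<!-- Sketch: plugins supplied via pom.xml instead of the packager YAML.
     Versions come from an imported Jenkins plugin BOM, so a tool like
     Dependabot can manage updates. Coordinates are illustrative. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.jenkins.tools.bom</groupId>
      <artifactId>bom-2.222.x</artifactId>
      <version>10</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <!-- No explicit version: the imported BOM pins it -->
  <dependency>
    <groupId>io.jenkins</groupId>
    <artifactId>configuration-as-code</artifactId>
  </dependency>
</dependencies>
```

The benefit of this layout is that plugin version bumps arrive as ordinary dependency-update pull requests.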
Again, if there is no configuration, we just fail. Same here; I'm not sure why it's not in verifyConfig. And then we start preparing our environment. As I said, we can build components on demand, or we can retrieve components, for example, from paths instead of update centers. So here there is logic which basically goes over these dependencies and builds them if needed, or copies them if needed, according to the configuration. It just prepares the environment. If you're curious how it happens internally: yes, there is a lot of Maven magic; under the hood Custom WAR Packager invokes a lot of Maven machinery. Same for library patches: here we just prepare the whole build environment. And same for resources: for example, if you have Groovy scripts, if you have JCasC files, et cetera, we extract these resources; technically we can build these resources or check them out from external repositories. Not all of the source options are supported right now, but it works. And what it produces... let me find the demo. I'm just trying to find something where we have produced build artifacts. Yeah, okay. This demo also doesn't build anything. Okay, so what happens at these stages: we just collect all the needed artifacts and we cache them in the local Maven repository. After that, the next stage begins, where we actually generate a POM. It's a pre-build POM, meaning that it defines how we build the WAR file for an instance, because we need to package everything inside. And here you can see that basically it's just another Maven pom.xml, which we generate. Here, for example, we put the WAR file and we put the plugins which we want to inject. And after that, we just invoke the Maven HPI plugin, which produces a custom WAR. So it's the first stage of the build. That produces a WAR file, but this WAR file isn't ready for final consumption, because we still need to apply some patches. For example, we need to inject system properties.
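As a sketch of what this generated pre-build pom.xml boils down to, here is the shape of a maven-hpi-plugin invocation with its `custom-war` goal. The exact generated content differs from build to build; this fragment only illustrates the mechanism.

```xml
<!-- Sketch of the generated "pre-build" POM: maven-hpi-plugin's
     custom-war goal bundles the listed WAR and plugins together.
     The real generated file contains much more. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.jenkins-ci.tools</groupId>
      <artifactId>maven-hpi-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>custom-war</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

Invoking Maven on this generated POM is the "first stage" the speaker describes; the output WAR is then patched further.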
We need to inject everything into the WAR. So basically you can see what happens here: we take this pre-built WAR file and we explode it. And after that, we start replacing resources. For example, we add system properties to the configuration. How do we do that? We just modify the web.xml file inside the WAR. So here, if you go to WEB-INF, you can see that there is a web.xml file, and we just do some modifications there. You cannot see them here, because apparently we didn't do any for this demo, but if needed, we will just inject additional entries into this file so that our configuration is applied. Just a second. Yeah, there is no example right now, but it just happens through the standard system properties engine. Do we have it? Yeah. So I should have built at least one demo before the call. Okay, let's try to build it. Meanwhile, do you have any questions, or are you completely lost? So far so good, I guess. Okay, so let's try to build the demo. Here, just make clean build. So yeah, here's Custom WAR Packager in practice. It's a CLI tool with a lot of Maven underneath. Hopefully we won't need to build anything from source for this demo. So here we just started, and this packages the WAR right away. We can just go to this directory and we can follow the build there. So yeah, here it took the packager config, so it packaged everything in the pre-build stage... it still builds everything, so it will take a while. Nothing is really fast here when you run on Windows with an antivirus. When you run on a Linux system, or when you run in Docker, it's quite fast. But by default it just produces an insane amount of temporary files, and if everything needs to be checked by the antivirus, it takes quite long. Okay. So let's go back. Our file is produced. This is the WAR file which was produced for this demo; this one we use for pre-packaging, but here's our exploded WAR file.
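The web.xml modification described above amounts to appending context parameters that Jenkins maps to system properties on startup. A sketch of the kind of entry that ends up in WEB-INF/web.xml (the parameter below, the setup wizard switch, is the one shown later in the demo; the value is illustrative):

```xml
<!-- Sketch of an entry the packager appends to WEB-INF/web.xml.
     Jenkins reads such context parameters on startup and applies
     them as system properties. -->
<context-param>
  <param-name>jenkins.install.runSetupWizard</param-name>
  <param-value>false</param-value>
</context-param>
```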
And in this exploded WAR file, let's take a look at what ended up in the web.xml. Yeah. At the bottom you can see a context-param for the Jenkins install wizard. Obviously, when I was writing the tool, I didn't bother to prettify the XML, so it looks like that, but it works. We just inject parameters, and these parameters will be consumed by the Jenkins engine, so that on startup it reads them and sets the right system properties. Until recently this engine was available only for the core, but in a recent weekly release we made it generally available for plugins, so we can configure plugins in the same way now. Okay. So we built our WAR file, and we can also check what else changed there. For example, if you go to WEB-INF, you can see that we injected a jenkins.yaml.d directory right inside WEB-INF, so it's a resource within the WAR file. If we had Groovy hooks — I guess we don't have Groovy hooks for this configuration; yeah, we don't — we would have injected them into WEB-INF as well. So this basically produces our WAR file, which is already suitable for execution. And after that, we still need to process this file again if we want to package this WAR file into a bigger distribution. So let's go back to our Builder code. There is a lot of logic here, but basically we do something like that. You may also see that we also produce a BOM file. Right now it's not really needed as such. Where did we put it? Okay. So yeah, here's our BOM YAML file. Basically it's the JEP format which includes the configurations and plugins we would like to include. You can see that this BOM format is a bit messy. For example, the Plugin Installation Manager Tool has a better definition, so in the future we'd rather prefer to generate formats which are close to the Plugin Installation Manager Tool. But still, it kind of works. What else happened in this phase? Actually nothing specific. Then we build Docker if needed. Docker is also a default piece of functionality.
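For comparison, the Plugin Installation Manager Tool input format that the speaker suggests as a cleaner target looks roughly like this; the plugin ids and versions below are illustrative.

```yaml
# Sketch of the Plugin Installation Manager Tool's YAML input,
# mentioned here as a cleaner format than the generated BOM.
plugins:
  - artifactId: "configuration-as-code"
    source:
      version: "1.39"
  - artifactId: "workflow-aggregator"
    source:
      version: "2.6"
```

Generating this format instead would let the same plugin list drive both the packager and the standalone installation manager.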
And for Docker, yeah, there is a lot of magic, but basically we generate a Dockerfile. So here's the Dockerfile. Again, nothing pretty here right now, because in this Dockerfile we just need to replace the WAR file with the one we produced. So it's easy and quite straightforward. And after that you have the option to just keep these files as is; for example, if you run inside Docker, you cannot simply build Docker inside Docker. But if you want, you can actually build it locally. For example, this demo actually runs the build; if you open my build log, you can see that the script also invoked docker build and produced the final image for me. So yeah, right now there is nothing truly complicated there. If you open the code, you can see that there is a Dockerfile template where it just passes the WAR file, and that's actually it: it takes the WAR file and packs it inside. It's not that good for performance; it would be better to unpack everything. We do that for Jenkinsfile Runner, but right now we do not do it for the Jenkins WAR, because we follow the standard format. In the future we could do much better here. Okay, so let's try to launch this demo if you want. Do we have something? Okay, so basically if you take a look at the Dockerfile, it's just a pass-through, so nothing really interesting; I won't even spend time on it. But this is the idea: we can inject code directly into a WAR file so that the WAR file is redistributable, and then for Docker images we keep repackaging it, so that again you get a Docker image with everything embedded. It's a bit more complicated for Jenkinsfile Runner. For Jenkinsfile Runner it's not enough to just add components, because Jenkinsfile Runner manages extension points; it also has some caches, and it will have more caches to speed up the startup. So, long story short, the repository needs a full rebuild. We have a demo for Jenkinsfile Runner; I want to launch it.
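A sketch of what the generated Dockerfile amounts to: it only swaps the WAR in the base image for the freshly packaged one. The base image tag and file names are illustrative; the destination path is the conventional WAR location in the official jenkins/jenkins image.

```dockerfile
# Sketch of the Dockerfile generated by the packager: replace the
# stock WAR in the base image with the custom-built one.
FROM jenkins/jenkins:2.222.1
COPY target/my-custom-jenkins.war /usr/share/jenkins/jenkins.war
```

This is why the speaker calls it a pass-through: all the real work happens in the WAR build, and the Docker stage just wraps the result.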
So here you can see that we actually specify not only the Docker base; we also specify the Jenkinsfile Runner base, but again it will rebuild the repository, overriding properties, et cetera. So if you're familiar with the packaging flow, the code is approximately the same. Here it just creates settings, injects the configurations, sets additional options for the Maven build, generates the image description, and then it generates the Dockerfile, and again it builds it. Just a second, actually I can show it. No, I cannot, because the file system misbehaves here. But here you can see some examples; for example, this is the checked-out repository for that, and in the output target, yeah, there is basically Jenkinsfile Runner produced as a build. If you want, I can launch it later, but right now it's a bit out of context, let's say; it will just produce a binary or a Docker image. And this flow actually needs to be updated significantly, because right now there is ongoing work on Jenkinsfile Runner. For example, the last time I modified this flow was around one year ago, and after that we introduced a number of features, including Docker packaging, including the new packaging layout, including vanilla images and dependency management. So most likely Custom WAR Packager needs a significant update to be effective for Jenkinsfile Runner, but it's technically available. If you wish to try, you're welcome to do that. I know that Rik uses both Custom WAR Packager and Jenkinsfile Runner in his CLI, so maybe you have already hit some issues, and if you want to contribute something, you're more than welcome. I spent a significant part of the last months on recovering the Jenkinsfile Runner build flow, which was a bit abandoned. Okay, let's go back to Custom WAR Packager. Any questions, any additional information I could provide to simplify your project?
So I'm guessing that this would be included as a library right inside the custom distribution service, because we have just the YAML file and we can just do Configuration.builder and then provide the YAML file. Yeah, or it could be used as a CLI. In addition to the CLI, we also have Docker packaging for the tool itself. So here is the packager in a Docker builder: basically it packages Custom WAR Packager in Docker. It also offers some additional configuration. This image can build your WAR files from source Dockerfiles. It doesn't really build anything inside beyond that; it's just this Dockerfile. Here you can see that you basically inject Custom WAR Packager, and we have an entry point, which is its CLI. So the usage is quite straightforward: you just pass your files as a volume. Just a second, it should be in the documentation, like this. Oh, maybe not. I was pretty sure that it's actually in the documentation. Just a second. So, actually, the usage in Docker is not documented right now, but it's pretty much the same: you just need to mount volumes to the right places, and then the tool will build the image for you. So if it helps in your environment, instead of using Custom WAR Packager as a CLI tool, you could use it as a Docker image. But in principle, it's your choice; it doesn't have too many advantages on its own. So what else could I do for you? Oh, one more question I would ask: I think we raised this question in the previous meeting we had. Once I download the WAR file, I cannot make any changes to the plugin versions, and they cannot be upgraded. Could you elaborate on that; is there any workaround? I want to bring a solution to my team, and we hope that the bundled plugins can be upgraded.
Yeah, maybe we can support this as a future feature; maybe we can ship those plugins not as bundled plugins. Do you have any thoughts about this, Oleg? Yeah, so let me just show you one thing; let's just go to the repository we've shown today. So here we use Dependabot. Dependabot is used to update these dependencies. But first of all, the number of dependencies is going to shrink, because we are moving everything to bill of materials, so hopefully I won't need to manage the config too much. And another thing is that in the future we may not need Dependabot for this at all. So why use Dependabot? Because I just cheat and use an existing tool. If the Plugin Installation Manager Tool supports updates on its own, then you just won't need that: you can create your flow without Dependabot or similar tools, just by using the Plugin Installation Manager Tool, which can already show available updates. And it's quite trivial to create code which would submit patches. For example, there is already a Maven plugin for incremental versions: you can just invoke that Maven plugin and it will update the versions for you. And well, it's not rocket science; we could create something like that for YAML files too and use it efficiently. So you don't really have to use pom.xml in order to implement plugin updates; it was just a bit convenient for me. And it's useful, for example, in this particular case, because I use this pom.xml not only for Custom WAR Packager; I also use it for development, because this repository is largely configured with Groovy hooks. For example, you can see the scripts directory, et cetera. And the magic here is that the scripts directory is actually a Maven submodule. So when I open this project in IDEA, I get automatic syntax highlighting, auto-completion, et cetera, for Groovy hook development in Jenkins. And moreover, I can even debug these hooks.
So that's why in this repository I would like to stay with the pom.xml approach. But in general, it's your choice. Okay, so once we have the YAML file, we could develop a tool that just updates the YAML file with whatever plugin versions are there, as and when plugin updates come along. Okay. Yeah, there are a lot of tools which allow patching YAML files. For example, we already created such a tool for the Jenkins infrastructure. So now if you go to, for example, the Jenkins infra repositories, yeah, there are a lot of YAMLs there, and these YAMLs are automatically updated using that tool. It's by Olivier, so it's just a matter of coding. Okay, sounds good. So do you foresee any patches being made to Custom WAR Packager for this project, or will it be used as is? I don't know; it really depends on what you need. For example, if you want to add theme management, then we will likely need to support something like that in Custom WAR Packager. For example, we could check whether the Simple Theme plugin exists and, if yes, put a file into the proper locations. The same we could do for pipeline libraries: pipeline libraries could also be integrated into the bundle, so that you would get them through configuration as code. Right now I do it one way or another on my instances, but it would be cool if it were a part of Custom WAR Packager. So, for example, if in your custom Jenkins distribution service you support managing pipeline libraries, then again it could be a good option for you. Okay. Yeah, those were my questions; I'm good on my end. Rik, Christian, anything from you? Nope, neither. Yeah, so if you want to see more examples of how Custom WAR Packager manages repositories: for example, there is ci.jenkins.io-runner. It's a repository which emulates ci.jenkins.io, packaged by Custom WAR Packager. Most likely right now it needs a serious update, because I haven't touched it for a while; I plan to update it, but in principle it should work, and you can see how the tool is used in practice.
So, for example, there are pipeline libraries, there are Groovy hooks, and this instance basically tries to emulate ci.jenkins.io; it also uses pom.xml. So it's not really opinionated about plugin versions, if you ask me; it's just close to the latest versions, and this repository hasn't been migrated to bill of materials yet. I guess that's my next stop. That's a side remark, though. Okay, it's pretty much like that. So yeah, if you need to make any patches in Custom WAR Packager, again, I leave all options for completely breaking changes open. I'm not sure how much time I will have to work on Custom WAR Packager on my own, but maybe I will be doing something, mostly in the scope of Jenkinsfile Runner, because I want to improve the flow in order to improve the performance of Jenkinsfile Runner. Okay, anything else for today? Okay, so hopefully this provides you enough information. If not, I will try to periodically join the meetings, and I'm available in the chats. Then I will just stop the recording, and I will publish this video, maybe tonight. Then, Sladyn, if you took notes, you're welcome to contribute some patches to the repository's documentation; we can keep it up to date. Okay, that's it. Thank you, Oleg. Yeah, thanks everyone. Thanks everyone. Thanks. Bye. Bye.