And we are live. Just in case, I will start recording on our side as well. Okay, we are good to go. Thanks, everyone, for joining the Jenkins online meetup. As you may have noticed, we are trying out a new platform today. Instead of using Zoom and meetup.com as usual, we are trying out a community CDF resource. It's a new site that will be rolled out within the next months, and we have an online broadcast on YouTube, where most of you are watching us.

Here is how we will organize this meetup. First we will have a presentation by Katharina and Martin, who will do the presentation and the demo. If you want to ask any questions, please use the chat on YouTube. I will be collecting these questions and asking them during the presentation or afterwards. Then we will have some time for open Q&A after the presentation. As I said, we also have a backchannel, so if you want to join the discussion live, please let me know and I will send you a link. That's the plan. Thanks, Katharina and Martin, for joining us for today's meetup, and sorry for any inconvenience, since we are running this platform for the first time. I'm looking forward to your presentation.

Okay, thank you for the introduction. Let me just share my screen. Hi and welcome to our talk, "Kickstarting CI/CD practices with Jenkins and autonomous teams". My name is Martin Sänger, and today I'm joined by Katharina Sieg. We are both software engineers at Dynatrace and work in engineering productivity, where we focus mainly on internal tooling as well as CI and CD topics.

First, let's have a look at the agenda of today's talk. We want to start with the current state of Jenkins usage at Dynatrace. Then we want to talk about the build environment and all its components, about how we deploy and test our instances, and about how we do the maintenance of them.
There will also be a live demo, and at the end there will be an outlook on what is planned for the future.

So let's talk about the current state. Currently, there are around 1,200 developers at Dynatrace. In a two-week sprint, we trigger around 400,000 builds, resulting in 246 million test executions. To tackle this, we have small, independent Jenkins instances per team or per department. This helps us to scale better, as teams are able to maintain the Jenkins instances themselves. Each of the instances also gets its own dedicated Kubernetes namespace, and we try to use configuration as code instead of UI configuration. All in all, we currently have 37 Jenkins instances, with more to come in the future.

Now let's talk about what we deliver to our teams. If a team requests a Jenkins from us, they won't only get the instance, they will get the whole ecosystem for building their projects. First of all, there is the Jenkins Configuration as Code instance itself. This is just a Kubernetes pod, and with it comes a dedicated Vault engine; Vault is our secret store, and we'll talk about that later. There's also the Kubernetes namespace for spawning the executors, and we create an automatically generated Dynatrace dashboard with all the relevant metrics. If you look on the right, you can see such a dashboard: there is an HTTP monitor as well as the Jenkins queue size, the garbage collection suspension time, and some hosting metrics for our Kubernetes executors. We also provide lots of onboarding material like recordings, written documentation, and code samples for the most important configuration as code parts, and there's the option to book a Q&A session if there are any questions left unanswered.

Now let's talk about the build environment. First of all, we use Jenkins in combination with Configuration as Code.
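To give a feel for what such a Configuration as Code setup looks like, here is a minimal, illustrative jenkins.yaml sketch. Every name, namespace, and URL in it is hypothetical, not the actual Dynatrace configuration:

```yaml
# Illustrative JCasC snippet -- every value here is hypothetical
jenkins:
  systemMessage: "Team Jenkins, managed via Configuration as Code"
  numExecutors: 0                           # builds run only on Kubernetes agents
  clouds:
    - kubernetes:
        name: "kubernetes"
        namespace: "team-jenkins-workers"   # dedicated executor namespace
        jenkinsUrl: "http://team-jenkins.jenkins.svc:8080"
unclassified:
  location:
    url: "https://team-jenkins.example.com/"
```

A file like this lives in the instance's Git repository, so every change to it goes through review before it reaches the running Jenkins.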
We decided to use Configuration as Code instead of UI configuration because UI configuration is very convenient, but we found it very hard to spot faulty configurations or misconfigurations of the instance that way. Each of our Jenkins instances gets a dedicated Bitbucket repository, and in this repository devs are able to change the configs, add new jobs, add plugins, et cetera. The main benefits of Jenkins Configuration as Code for us are the audit log, the four-eyes principle, meaning each change has to be reviewed and approved by another dev, and also the easy maintenance. For the ease of maintenance, we also have another big component, the project initializer, which I'll talk about later.

Then there is Kubernetes. We use Kubernetes to host all of our Jenkins Configuration as Code instances as well as all of our Jenkins workers. The clusters are hosted in AWS and on-prem. For the on-prem clusters, we also have the option to do cloud bursting: if the load is high and we don't want to stall the pipelines, we just provision additional resources from AWS.

Next up is Vault. Vault is our secret store. It helps us to store all kinds of secrets like key-value pairs, AWS credentials, Git credentials, et cetera. It allows us to set up role-based access control for secret engines and also to grant secret access to builds via an API. This is the great benefit of Vault. We used Secret Server before, and with Secret Server we had the problem that there wasn't any API. So we always had to create secrets twice: first in Secret Server and afterwards in Jenkins as well. The problem was that sometimes it wasn't even the same dev who created the secrets in Secret Server and in Jenkins, and with that, once a secret got rotated, it was only rotated on the Secret Server side and we had some broken builds in Jenkins. With Vault, that's not the case, and we have only one single source of truth. We have also introduced Vault config as code.
Vault config as code is a feature for our devs where they can define secret engine definitions as well as policy definitions. Once these are approved by us, they are applied automatically with Terraform.

Next, there is Harbor. Harbor is our internal Docker image registry, and we use it to push the custom Docker images which we build at Dynatrace. It also has the feature to do security scans for vulnerabilities, and we can define retention policies there, which is quite nice to keep our repositories clean.

Then let's talk about the star of our show, the project initializer. The project initializer aims to make the bootstrapping of new projects as easy as possible. To achieve that, it allows users to generate new projects from templates. This is a self-developed service, and as already said, you are able to bootstrap new projects from templates. Devs are also able to create new templates themselves. If you think about a template, just think of a plain Bitbucket repository with some placeholders in it and one central configuration file. In this configuration file, you are able to define Bitbucket repository settings and Jenkins webhooks, and you can also define default reviewers and some placeholder data.

That's not the only cool feature of the initializer. One of the key features is also keeping the projects up to date. Once projects are created out of a template, they will automatically get updates when one of the template files changes. We do so with automated pull requests that are opened in Bitbucket. If for any reason you don't want to have some files updated in a project, you can just put them into the initializer-ignore file. This works just like a gitignore file, and the initializer will then ignore updates to these files. The project initializer also helped us to ease permission requests for new repositories.
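To make the template idea concrete, the central configuration file of a template could be imagined roughly like this. This is a hypothetical sketch for illustration only, not the actual internal format of the initializer:

```yaml
# Hypothetical template configuration -- the real initializer format is internal
repository:
  project: JCI                        # target Bitbucket project
  defaultReviewers: []                # optional default reviewers
  branchPermissions:
    protected: ["main", "release/*"]  # nobody may push here directly
webhooks:
  - url: "https://team-jenkins.example.com/bitbucket-hook/"
placeholders:
  - name: IMAGE_NAME
    description: "Name of the Docker image in Harbor"
```

The placeholders declared here are what a user fills in when bootstrapping a new project from the template.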
With new repositories, back then we had lots of tickets with devs just requesting new repositories, but now they can do that with the initializer, and from our side we just have to approve a pull request. Once we merge it, the repository is created.

Now that we have talked about the build environment, let's talk about the deployment and testing part. First, the instance creation process. With all the components that I've mentioned before and lots of automation in place, we managed to bring down the work required to set up a new Jenkins instance to just 1.5 hours. The work for that is the following.

First, we create a dedicated Bitbucket service user and also a Jenkins admin group. The dedicated Bitbucket service user helps us to be more stable. Think of one scenario: a build authenticates as the Bitbucket service user, but with a wrong token. With that, you get many bad login attempts, which will lock the user in Bitbucket. This is a problem, especially if you only have one user for 37 instances. In our setup, the worst that can happen is that one instance is blocked, and we immediately find out which build or which instance is the culprit.

As a second step, we create the Jenkins and Vault setup via a project initializer template. As already said, this is only about setting some placeholders; the rest is automated. Kathi will also show in the live demo later how that is done. As a third step, we create the dedicated Kubernetes namespaces, one for the instance and one for spawning the build executor pods. We plan to automate this as well in the future, and to do so, we want to use Argo CD as well as Crossplane. As a fourth step, we create the Jenkins jobs to test and deploy the instance. Once the instance is deployed, we verify that everything works as expected and hand the instance over to our devs.
We will of course also provide some onboarding for our new Jenkins admins. To scale better there, we have prerecorded some onboarding sessions, and there's also the option to do a Q&A session.

Now that the instance is created, let's talk about the deployment process. First, some general facts. We have one big Jenkins instance, our "Jenkins Jenkins", for deploying all the other instances. All the deployment logic that we'll talk about in the next slides is stored in a shared library. This helps us to not duplicate any code and to change important settings in just one place instead of in 37 places.

As a first step, we prepare the config files by merging the base and extension files. Now you may ask: why do we even need base and extension files? As already said, all of our Jenkins instances are created from our project initializer, and with that come automatic updates. These automatic updates are only done on the base files, as the initializer only has the base files in it. For these base files, the initializer is able to open automatic pull requests, and to allow it to do that without merge conflicts, devs are only able to use extension files. With the extension files they can personalize and extend their basic setup. The whole merging is powered by yq, a CLI tool for processing YAML files.

Then there is the Vault configuration and the secret injection. This is the part where we apply our Vault config as code to the config files. As already said, devs can create their own secret engines or grant permissions to them via config files. All of that is implemented with Terraform. As a second step, we also do a secret injection: secrets that are needed, for example for Azure AD usage or for internal server certificates, are dynamically injected into the Kubernetes namespaces of the instance.
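The base-plus-extension merge that yq performs here can be sketched in Python. This is only an illustration of the deep-merge semantics with made-up config values, not the actual deployment code:

```python
# Sketch of merging a base config file with a team's extension file.
# Extension values win; nested maps are merged recursively, much like
# yq's deep-merge operator does for the CasC YAML files.
def deep_merge(base: dict, extension: dict) -> dict:
    merged = dict(base)
    for key, value in extension.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into maps
        else:
            merged[key] = value  # extension overrides or adds the key
    return merged

base = {"jenkins": {"numExecutors": 0, "systemMessage": "base"}}
extension = {"jenkins": {"systemMessage": "team override"}, "unclassified": {}}
print(deep_merge(base, extension))
```

The key property is that a team's extension file only has to mention the keys it overrides; everything else falls through from the base file, which keeps the automated base-file updates conflict-free.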
This secret injection guarantees us that the secrets are always up to date and that we have a working and correct environment.

Then there is the build-and-push Docker image step. Each of our Jenkins instances gets its own custom Docker image. The base image for that is just the normal Jenkins image with the LTS tag. We add some startup scripts that are needed for cleanup and testing, and we also add the desired plugins, which were chosen by our devs, to the Docker image. Once that's done, we push everything to our internal Docker image registry, Harbor.

With all the preparation done, we're now able to update our production Jenkins instance. But before we do that, we want to test everything on a test instance. So for that reason, we spin up a test instance and do some automated testing there via curl commands and startup scripts. The testing is about checking whether all the secrets are set, whether the cloud config is right, whether the instance starts up, and of course whether all the plugins that should be installed were installed, and things like that. If all of that checks out, we downscale the production instance and set the new Docker image as well as the Kubernetes ConfigMap. Then we upscale the newly updated instance again and test whether everything works on the production instance. Of course, this whole process of testing on the production instance is automated again. It is very unlikely that something fails there, because we already did the whole testing on our test instance, but if it does, we have an automated rollback in place: if some of the tests fail, we just pick the latest working Docker image and ConfigMap configuration and apply those instead. This is one of our big goals, and we have achieved it: it is basically not possible to break an instance with a redeploy.

As the last steps, we create and update dashboards and do some cleanup as well.
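As an aside, the automated rollback just described reduces to remembering the last known-good image-and-ConfigMap pair. A minimal sketch, with entirely hypothetical data and names, could look like this:

```python
# Sketch of the rollback decision: keep a history of deployments and, if the
# post-deploy tests fail, fall back to the newest entry that passed them.
deployments = [
    {"image": "jenkins:lts-a", "configmap": "rev-41", "tests_passed": True},
    {"image": "jenkins:lts-b", "configmap": "rev-42", "tests_passed": True},
    {"image": "jenkins:lts-c", "configmap": "rev-43", "tests_passed": False},
]

def rollback_target(history):
    """Return the most recent deployment whose tests passed."""
    for entry in reversed(history):
        if entry["tests_passed"]:
            return entry
    raise RuntimeError("no known-good deployment to roll back to")

print(rollback_target(deployments)["image"])
```

Because the target is always a pair of image and ConfigMap that previously passed the same test suite, applying it again cannot leave the instance in a worse state than before the redeploy.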
First, we automatically create the dedicated Dynatrace dashboard with the instance metrics; I have already shown you that dashboard before. We also add the newly created instance to our overview dashboard. The overview dashboard is a tool for us at engineering productivity to have an overview of all our instances, have them all on one page, see the performance metrics, et cetera. As a last step, we clean up the testing space, so we tear down the whole test instance setup, first to free resources and also to have a clean state for the next instance. With that, I'm already at the end of my part of the presentation. Thank you, and I want to hand over to Katharina.

Maybe before we proceed, one quick question from the chat would be good. I forgot to turn on my video. So, do you share Kubernetes worker nodes between Jenkins instances? Or how do you manage that?

No, as I already said, each Jenkins instance gets its own Kubernetes namespace, and we don't share between them.

Thank you. And do you have any agents outside of Kubernetes clusters, or are all agents unified at the moment?

We're trying to have all of the agents in Kubernetes. We have some bare-metal nodes as well currently, but we are trying to get rid of them.

Makes sense, thank you. So if you have any questions, please ask in the chat on YouTube and I will pass them to the speakers. And now the floor is yours.

Okay, thanks a lot. Also hello from my side. Martin was just talking about how we are deploying our Jenkins instances, and I would like to continue with when we are deploying them. First of all, we are doing a weekly redeployment of our Jenkins instances, each week on Sunday night. The main reason for that is that we always want to stay up to date with the newest LTS version of the Jenkins base image we're using in our deployment.
Further, we also want to keep plugins and so on at the newest state, so that we always get the latest security fixes in and are on the safe side. We also consider the Configuration as Code repository in Git the single source of truth for all Jenkins instances, and therefore we make sure to sync with the configuration as code at least once a week. This also overwrites anything somebody may have configured via the UI, which shouldn't have been done there, so everything is synced back each week.

Of course, if you're developing a feature, it would be quite cumbersome if you had to wait until next Monday to get your changes applied. That is why we have a second job running, which does an automatic redeployment during the night if there have been changes to an instance during the day. This does not redeploy all instances, just the one, or the few, that had changes. In some cases even that may take too long to get all your changes in, so we also allow Jenkins instance admins to manually trigger a redeploy for their Jenkins instance. But we are not doing immediate redeploys automatically, because unfortunately we are not able to redeploy Jenkins without a few minutes of downtime, and we don't want to stop the productivity of our engineers during the day by redeploying the Jenkins instance if it's not really necessary.

You can see that we already put quite a lot of thought into this workflow, but still, if you think about it, it can be quite cumbersome for developers to work with. That's why we added the seed job. We experienced within the last few months that once a Jenkins instance is up and running and builds on it, most changes actually happen to jobs, and not to the configuration of the Jenkins itself. And that's where the seed job comes into place.
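A seed job of this kind is commonly built on the Jenkins Job DSL plugin: a job scans a folder of DSL scripts and applies them. A hedged sketch of one such job definition, with hypothetical repository and credentials names, might look like:

```groovy
// Job DSL sketch: a multibranch pipeline created/updated by a seed job.
// Repository URL and credentials ID are hypothetical placeholders.
multibranchPipelineJob('demo-build') {
    branchSources {
        git {
            id('demo-build')
            remote('ssh://git@bitbucket.example.com/jci/demo-build.git')
            credentialsId('bitbucket-service-user')
        }
    }
    orphanedItemStrategy {
        discardOldItems {
            numToKeep(10)   // clean up jobs for branches deleted in Git
        }
    }
}
```

Because the Job DSL plugin reconciles job definitions against the running Jenkins in place, applying a script like this needs no restart, which is what makes the zero-downtime job updates possible.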
The seed job scans a certain folder in the Git repository for new or changed job definitions and then immediately updates or creates the jobs accordingly. This can happen without any downtime of the Jenkins instance. So you can simply merge your new job definitions, run the seed job, and you're done; you can already work with your jobs.

Before developing new features, you may of course also want to test them. That's why we spawn a test instance for each branch that is created in a configuration as code repository. So if you check out a configuration as code repository, create a branch, and push it, you will immediately get a test instance for it. And of course, if you push new changes to the branch, the test instance will be redeployed and you can immediately use it. On this test instance you can click around, view the configuration, and even run the seed job to see if your jobs are loaded correctly. The only thing you cannot do is run builds, because we don't want any builds to run on test instances, only on production instances, and therefore no Kubernetes cloud definitions are available on our test instances.

That's it about deploying and testing our instances, but how do we actually maintain them? First of all, when a team requests a new Jenkins instance, they immediately have to provide us with at least one person who is responsible for administering this instance. With this, we shift the responsibility of configuring the instance from us to the team that requested it. This is very important for us, because we want our teams to be able to work autonomously and not depend too much on us.
Of course, we provide them with help and assistance along the way, but we don't want to force them to work in a specific way. We give them some defaults and some training in the beginning, but then it's up to them how they want to use their Jenkins instance. To educate these people, we also created a new community within Dynatrace called the Build Guild, and each month we offer sessions and workshops about CI- and CD-related topics to interested people. They don't have to be Jenkins admins, but lots of them are, of course. Within the Build Guild we don't only want to educate people; we also want to enable people to learn from each other, and we want to learn from them too. We want to hear their experiences, what problems they have, and so on.

Further, we always try to keep the questions people have well documented. We encourage people to ask them on Stack Overflow for Enterprise, which is our internal Stack Overflow instance, and we are very active there as a team. We watch the tags that affect our team, and usually within a few minutes or hours people get answers from a subject matter expert if they have questions about Jenkins or a related topic. The other nice thing about this is that if somebody has the same problem in the future, he or she can simply search Stack Overflow and will probably, or hopefully, find a question or article about it, because somebody already had this problem in the past.

Further, there may be some changes that are not made by the Jenkins instance admins but by us as a team. These are usually changes we want to roll out to all Jenkins instances. For example, the seed job we mentioned earlier is something like that. We implemented it a few weeks ago, tested it on our own Jenkins, and then, once we had verified everything was working in production, we wanted to roll it out to all instances.
And this is something we do via the project initializer. All our Jenkins instances have either been created from a template there, or have at least been linked to one later if they existed before the project initializer. So the initializer knows about all 37 of our configuration as code instances, and when a new commit to the template is made, it says: hey, look, I have 37 children, I will update them as well. Instead of us manually cloning each repository, creating a new branch, changing the files, and creating a pull request, the initializer does all that for us. The only work that is left is simply merging this pull request, or maybe adjusting it if a team has a special configuration. And of course, as Martin already mentioned, there is also the option to ignore full repositories or files if somebody doesn't want automated updates. But this is something we ask people to always sync with us about, because we don't want them to run into problems that are not necessary.

Once a Jenkins instance is running and configured properly, we do some alerting and build analysis. First of all, we have automated alerting set up via Dynatrace. Martin showed you the dashboard earlier, and via the HTTP monitor we have automated alerting as well. This means if a Jenkins instance becomes unhealthy, we immediately get a Slack message and can react to it. Further, we use the Statistics Gatherer plugin on each and every Jenkins instance, and with this plugin each instance reports its build and test results to our pipeline services tool. This is an internal tool we use to collect, aggregate, and store data, and it has a few features that help us work efficiently. One of these features is the automatic issue generation: once a build fails, we immediately generate a new Jira ticket and, if configured, a Slack notification about it, so the developer is informed immediately because a ticket has been created.
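The automatic issue generation can be sketched as keying open tickets by job and failure reason, so repeated identical failures don't spam anyone. All names here are hypothetical, and the real pipeline services tool is internal:

```python
# Sketch of duplicate suppression for automatic issue creation: a ticket is
# only opened if no open ticket exists for the same job + failure reason.
open_tickets = set()

def report_failure(job: str, reason: str) -> bool:
    """Return True if a new ticket would be created, False if suppressed."""
    key = (job, reason)
    if key in open_tickets:
        return False  # same build failing for the same reason: stay quiet
    open_tickets.add(key)
    return True  # here the Jira ticket / Slack notification would be created

print(report_failure("demo-build", "OutOfMemoryError"))     # first failure
print(report_failure("demo-build", "OutOfMemoryError"))     # duplicate
print(report_failure("demo-build", "pod failed to start"))  # new reason
```

A real implementation would also remove the key once the ticket is resolved, so a regression of an already-fixed failure opens a fresh ticket.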
If the same build fails again for the same reason, we of course don't generate the same issue again, because it's not our goal to annoy people; we just want to inform them and let them fix the bug as soon as possible.

Further, as you can see on the top right, we have some test dashboards. In the test dashboard you can filter for classes, configurations, branches, and a few more options, and you can see the test executions and statistics about them for the last two weeks. As you can see on the top right, for this specific test class everything has been green. There have been 46k test executions for this one, and you can also drill down into the classes. When you drill down, you will see the methods, and these methods also link back to the Jenkins builds, so there are no dead ends. We also show the most common errors in the test class. So if you open it, you will see that this or that exception has been thrown 10 times, and maybe five times the build failed because the pod couldn't start, or something like that, and it makes it very easy to analyze things like that.

On the bottom right you see our pipeline cockpit. The pipeline cockpit always shows the state of the five most important pipelines in the company. At one glance you can see whether everything is going well or whether a pipeline has a problem, as is the case in this picture. If a pipeline has a problem or a build has failed, you will see it here immediately. You will have a link to the build, but also to the issue which was generated, to check if somebody is already working on it. We have some more features within our pipeline services, but I think it would go beyond the scope of this presentation to show them all. Just to name a few: flaky test detection, build duration statistics, information about configurations, and so on. Yeah, I think we have talked a lot now.
Let me show you a quick live demo of how all of these components come together and work together. What we will do is something very simple, something probably lots of you have done in your daily work already: we will create and build a new Docker image and then use this image in a custom Kubernetes agent in a Jenkins build. So let's quickly switch to Jenkins.

This is one of our Jenkins instances. It is the Innovation Day Jenkins, the instance we use for new projects and for test projects where we want to try something out. You can see I already created this folder here, which is called Jenkins Meetup Demo. It doesn't look too healthy; you will see why in a second. Let's open it. We have one multibranch pipeline in here called demo build, and this build has one branch, the main one, with one execution. You can see that it failed, and you will see in a second that it failed because the verify Java home stage failed. I will show you the code for this one as well, which is in here. This build consists of just this very small Jenkinsfile, which has one stage inside: the verify Java home stage. Basically, it just prints the JAVA_HOME environment variable, and if it doesn't contain JDK 17, it fails, because it requires JDK 17 in this case. Then we have a pod definition, a Kubernetes agent definition, up here, and the build uses this pod definition. The most important thing here, the only important thing for this presentation, is the image we are using. It is hosted in our internal Harbor, and it is an Ubuntu 20 AMD64 image, just a default image for builds.

What we will do now is create a new image we can use here. To do that, let's head back to the project initializer and go to the Jenkins executor Docker image template. I already opened this before, and here we can create a new repository for such an image.
We can select the base image. The one that I just showed you is pre-filled, but we will use the JDK 17 latest image as the base image here. I also opened this one already, and in the Dockerfile you can see that not much is done: it uses the same base image as I showed you before, and then it installs Java 17 and sets the environment variable JAVA_HOME_17. So let's use this one as the base image.

Now we have to choose a Harbor project for the image to be created in. This will be EP in our case, because that's the engineering productivity Harbor project, and we have to choose an image name. I will go ahead with "Jenkins meetup executor image" here. Now it gets a little more interesting: we have to provide a Harbor robot path, because we need to push the image somewhere, namely to Harbor, and for that we have to create a robot account. I already did that before, and to not bore you with that, I simply stored the credentials in our Vault instance. They are in the Innovation Day Jenkins Vault engine; as Martin told you, each Jenkins comes with a dedicated Vault engine. The path is harbor credentials slash Jenkins meetup; I created dedicated credentials just for this meetup. So let's copy this path, go back to the initializer, remove this placeholder, and it should be good. Lastly, we need to add the Jenkins URL where the image will be built, to make sure that the webhook is configured correctly. So let's copy the Innovation Day Jenkins URL, paste it here, and click next.

We could configure some default reviewers for the repository we are about to create, but we don't need them for the demo, so I will simply skip this. Now we can either choose to download the project as a zip archive, or we can create a new Bitbucket repository, and this is exactly what we're going to do now. Let's copy the image name from over here and paste it as the repository name as well.
As the ticket number, I will simply enter "no issue", because I don't have an issue for this demo. Then I click create repository. First a warning appears that the ticket number isn't a valid issue, but we don't care about that now and simply go ahead and create the repository.

One comment from the chat: the screen resolution isn't big enough, especially for some windows and the IDE. So if you could zoom in a bit, it would be much appreciated.

Yes, of course, thanks for letting me know. Is it better now?

I think so. Okay, thanks a lot.

Now you can see that the repository hasn't been created immediately; instead, the creation has been requested. This means a pull request has been created, which is linked down here. When we click on it, we can see that somebody, in this case me, is requesting an approval. When we check the diff, we can see that I want to create a new repository from the Jenkins executor Docker image template and that I entered the placeholders we just discussed. Further, you can see that I want to create my project in the Bitbucket project JCI and call it Jenkins Meetup Executor image; my ticket number is "no issue" and I haven't set any reviewers. Luckily, I'm an admin for this JCI project, so I can just go ahead and approve this pull request myself. Usually, of course, it doesn't work like that. And now I can merge it. As soon as I merge it, the initializer is informed and creates the repository.

So let's head over to Bitbucket, to the project JCI repositories, and enter the repo name again. I will make this a little bigger again. Here you can see the newly created repository. What we have in here is a very simple readme, which should of course be extended if you're using this in production, but we don't care about that for now. We have a Jenkinsfile, a Dockerfile, and an initializer-ignore file where we can ignore the updates of certain files.
In this case, that is the Dockerfile, because once we have adjusted it, we don't want it to be overwritten anymore. And there are a gitignore file and a CI folder. So let's clone this repository and check it out in IntelliJ. I will check how to make it bigger in a second. New project from version control; let's trust it, because we just created it, and move it to the current screen. Now I'm opening the Jenkinsfile we have here, increasing the font size a little bit. You can see that this file was automatically generated; we add this header to lots of files just to make people aware that we are using the project initializer. Down here, you can see that we already have a build for publishing the Docker image in place. We use the Vault secret we just defined here, and we tag and push the image down here. There's nothing more, nothing really special, happening in here.

Then we have the Dockerfile, which is based on this JDK 17 image I just showed you. We will just add something very simple here: we will set the JAVA_HOME variable to the value of JAVA_HOME_17. That's it already. Let's commit this with a message like "adjust Docker image". And the push will probably fail in a second; as you can see now, it was rejected. The reason for this is that the initializer applies some repository settings by default. If we go to Bitbucket and check the settings, you will see that we have some branch permissions set up here, so that nobody can push to main and release branches. We also have some merge strategies and merge checks configured. All of this can be configured in the template, defining how the created repositories should look. Let's go back to the branch permissions and do something we would never do in production, but which will be okay for the demo now: I will simply add myself and allow myself to push to the main branch.
Let's also go to the repository permissions, because as Martin said previously, each Jenkins instance has a dedicated Git user it uses for its Jenkins builds. And we need to grant this Git user permissions to the repository so that it can check out the repo. We only need read permissions because it simply checks out and does nothing else. And now we're ready to go ahead. So let's go back to our Jenkins Meetup executor image and try to push it again. And now it should... yeah, now it worked. And now we somehow need to get a build for that one. I showed you before we have this folder for the Jenkins Meetup demo today, and there's the demo build in here. And I also created a pull request to the, let's see where it is, here it is. I also added a pull request to the Innovation Day Jenkins repository, so the configuration as code for this instance. And in this one, I simply added a simple multi-branch pipeline which is called build image. And it will pick up exactly this repository and build it. Martin luckily already approved it before, so I can now go ahead and merge it. And as soon as this is done, I can go back to the Innovation Day Jenkins instance, click on the seed job, which reloads our job definitions, and simply build it. And within a few seconds, the build will start and it will reload our job definitions. It should be rather fast, actually, let's watch it. Yes, you can see the build has been successful. So let's go back to the dashboard. Oh, and sorry, this is small again. And now in the Jenkins Meetup demo, you can see that we have a second build which is called build image. This one is already running as well, which is very nice for us. And if we open the Blue Ocean view, you can see that the build is now starting and it will build the Docker image and push it to Harbor.
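The multi-branch pipeline added through the config-as-code pull request could be sketched with the Job DSL plugin roughly like this. The folder name and the Bitbucket URL here are assumptions for illustration; the seed job mentioned above is what processes such definitions and reloads the jobs.

```groovy
// Hypothetical Job DSL sketch of the "build image" multi-branch pipeline;
// the folder layout and repository URL are invented for illustration.
multibranchPipelineJob('jenkins-meetup-demo/build-image') {
    branchSources {
        git {
            id('build-image')
            // Checked out via the instance's dedicated Git user (read permission only)
            remote('ssh://git@bitbucket.example.com/jci/jenkins-meetup-executor-image.git')
        }
    }
    orphanedItemStrategy {
        discardOldItems {
            numToKeep(5)
        }
    }
}
```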
In the meantime, we can already go back to our initial repository, which was the demo build, where I showed you the Jenkinsfile where we are just doing this verify-Java-home thing. And what we will do here is change the pod definition of the custom agent, because we don't want to use this base image anymore, but we want to use the newly created image we have. So let me change the project to EP and hopefully... no, of course, I don't have it in my... Sorry for interrupting, the audio quality drops sometimes, the video quality. If you disable your camera, maybe it will be better for the stream. Yes, of course, I'm really sorry for that. No worries. It's actually more or less fine on the Bevy side, but when I check YouTube, the resolution goes down. Okay, is it better now? Let's see. Okay, let's continue and see if it improves. Yeah, so what I just did is I added the new image in the pod definition. And another thing I will do, just to make sure, is change the label here, so that we are really picking up a new pod and not an old one that may be floating around somewhere. So let's check if our Jenkins build has succeeded in the meantime. Yes, you can see it is done. It pushed our image. And we can now go ahead and use it by pushing to this repository again. Let's say "use a 17 pod". Let's push this and trigger the other build I showed you again, and now hopefully everything should be green. And yeah, that's how we are utilizing our tools to work as efficiently as possible with Jenkins. Yeah, the build has been scheduled. And once it is here, it should be rather fast to complete because there's not much it does. Yes, it is just creating the pod, I guess. And we should be done in a few seconds. Yes, you can see now when it's printing Java home, it is printing the JDK 17 path and there's no error thrown, because that's exactly the behavior we want. I will switch back to the slides now because that was the live demo.
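The change described here, pointing the custom agent's pod definition at the newly built image and bumping the label so a fresh pod gets scheduled, might look roughly like this in a declarative pipeline. The image path and label value are hypothetical, not the actual ones used in the demo.

```groovy
// Hypothetical sketch of the adjusted custom agent definition;
// image path and label are invented for illustration.
pipeline {
    agent {
        kubernetes {
            // New label so Jenkins schedules a fresh pod instead of reusing an old one
            label 'jdk17-demo-pod'
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: build
    image: harbor.example.com/ep/jenkins-meetup-executor-image:latest
    command: ["sleep"]
    args: ["infinity"]
'''
        }
    }
    stages {
        stage('Verify Java home') {
            steps {
                container('build') {
                    // Should now print the JDK 17 path set via ENV in the Dockerfile
                    sh 'echo "$JAVA_HOME"'
                }
            }
        }
    }
}
```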
And you can see we already put quite a lot of work into our tooling and how everything comes together. But we are never done with this one. We always want to improve the way we are working with our tools and how developers can utilize them, and we always want them to be able to work even more productively. And that's why we have some outlook here, with two steps we have on the very near roadmap. The first one is that we want to improve the deployment of our Jenkins instances by utilizing Argo CD and Crossplane. This is something we are currently already working on, so it will be done very soon. And this will enable us to automate even more things during instance creation. As Martin told you, we need about 90 minutes for creating one Jenkins instance, and we want to drive down this time by a lot to work more productively. Yeah, that's it from our side. Thank you for listening so far. If you have any questions, we are more than happy to answer them. Thanks a lot for the presentation and demo. Yes, we got quite a lot of questions, so I'll start from the top. One of the questions is about architecture. Sebastian says that he guesses the project initializer is doing all the Jenkins controller setup for you. Did you also consider the Jenkins Operator or the other solutions available in the Jenkins community? Okay, I will take this one. So sadly, neither of us two was there for the research on the initial approach, but we created the first template about two years ago, so it's quite a long time ago. And I had a look at the Jenkins Operator when doing the meetup preparation. I had already heard about it before as well. And it looks really cool, but yeah, we now have our own setup with the project initializer. But it's a very cool project. And one thing to add here: we are not only using the project initializer for Jenkins, but also for lots of other things in the company, like microservices and so on.
And we want to give our developers a very uniform tool they can just go to when they want to create a new project and then see: what are my possibilities, or can I maybe request or add a new template? Yeah, thank you. So another question is: do you have any plans to offer this project initializer to the audience, as open source or as a paid version? Yes, we definitely have plans for this one. Unfortunately, we don't have a timeline for it, but we have discussed it a few times in the company. And we definitely want to offer it as open source, but unfortunately, we don't know when it will happen. Thank you. For what it's worth, I noticed Dynatrace contributes back to Jenkins in a few areas. For example, there were some patches submitted to the OpenTelemetry plugin and also to the Plugin Installation Manager. So hopefully there will be more patches soon. Yeah, there was a question about dynamically responding to new Jenkins agents and how you configure them. I guess we went through that thanks to the project initializer, et cetera. Everything is being done as code for them, too. And what other questions do we have? So Tim commented that Backstage actually has similar functionality as the project initializer. So maybe one of the options for next use cases would be to actually use Backstage for management. Yeah, thanks for that hint. We are also aware of that. And we already have a team in the company taking a look at it. And we are also thinking about possibilities for how to use these two together, maybe. So yeah, nothing of our work is set in stone. We are always trying to improve our workflow and also use the newest tools, of course. Thank you. So another question: do you build artifacts such as binaries inside of your pipelines? And how does that happen? So I guess the big question is whether you actually do continuous delivery for binaries of your products within the pipelines. Yeah, so we build in Jenkins, but currently there's nothing in the project initializer for it.
So basically you have pipelines that already have predefined destinations for deployment, et cetera? Or how do you deploy them? Yes, exactly, currently it's like this. Yeah, it's just a static Jenkinsfile, let's call it like this, and everything is defined there. Yeah, actually I had a question about Jenkins pipeline libraries, et cetera. So do you also use pipeline libraries to unify standard operations for these jobs, or how do you actually manage them? Yes, exactly. So for our whole deployment process, as I already said, we use a pipeline shared library, and for the job configurations, we also have some shared code there. We also have our knowledge base where we share some code snippets, and there are also some plans in the future to automate this even more and to use some job definitions more generally. We started to do it already, but we are in the works there to get it out to the people more. Thank you. Yeah, Tim commented that he noticed that you're not using the Dynatrace integration of the OpenTelemetry plugin for the observability backend. So have you seen this integration before? Yes, and we already have some stories to use it, actually. That's true. Yeah, and another question from Tim: do you plan to open source your Dynatrace dashboard, maybe for someone who also uses Jenkins and Dynatrace? Yes, for this one it is the same as the initializer. We have been talking about it. We decided that we want to do it, but we don't know when it will happen, unfortunately. But it's definitely on our roadmap. Thank you. We usually post such components and projects in the Dynatrace open source repository. So if you're interested in various integrations or configurations, you can find some samples there, but there are not that many things for Jenkins at the moment. So I think that we need to push for that where possible. Okay, so I think we ran out of questions. So if you want to ask something, please comment in the chat and ask away.
And meanwhile, I have a generic question I keep asking everyone participating. So basically, what's your experience with Jenkins? What are the key obstacles you experienced when setting it up for your teams? Because of course it's very important to know what's not working. I will take this one. So since I've started working as a full-time software engineer, I think I have always worked with Jenkins on the EP side. And I think the most important part there, and the hardest part actually, is to do the knowledge sharing, and also to find the knowledge gaps, because working in engineering productivity, you know about all the stuff, you know about the configurations, et cetera. And the hard part is just to bring it out to the people. We have done a lot in the past with recorded onboardings and things like this, and in general we try to bring the knowledge out to the people. Yeah. Thank you. Katarina, what are your experiences with Jenkins and all this configuration management? Yeah, it matches very well with the things Martin said. For me, I haven't been working with Jenkins that long yet; I'm a little bit more on the tooling side, like with the project initializer and so on. But yes, I would say that is a very big challenge. We also see it in the questions we get from people, but we're investing a lot of work in trying to get everything documented and so on. Yeah, there is a question from Sebastian about Jenkins challenges: have you actually considered any other Kubernetes-native alternatives? Sorry, I didn't quite get that. So I guess the question is: when you run Jenkins on Kubernetes, one of the first questions people ask is whether you actually want to use Jenkins, or maybe you want to use something else like Argo CD or Keptn for some use cases. And Sebastian asked what is actually your approach, whether you considered other tools?
We are in talks with Keptn currently, and other than that, as far as I know, at Dynatrace we only use Jenkins for that. Mm-hmm. At my team, at least. I cannot talk for the whole company, but for us, it was Jenkins. Mm-hmm. But at the same time, as you mentioned during the presentation, for deploying Jenkins you actually consider Argo CD in the future. Did I capture this correctly? Yes, exactly. Mm-hmm. Why not? Yeah, what we're achieving, or want to achieve with that, is we really want to separate the CI and the CD part of the repositories. And yeah, that's why Jenkins and Argo CD seem like a very good match for us. Yes, it makes total sense. So, yeah, thanks for these details. And since there are no other questions, we can say thanks to Martin and Katarina for this presentation. And thanks a lot to everyone who provided feedback about the video quality, et cetera. We are just testing the platform, so sometimes things do not work, and thanks a lot for your patience. So hopefully the new platform will be completely rolled out in the Continuous Delivery Foundation community within a few months. And then all projects, including Jenkins, Tekton, Spinnaker, et cetera, will be using this backend. So hopefully it works next time. Okay, thanks again. Also a big thank you, sorry, I just wanted to say a big thank you from our side as well for having us today. It was a great opportunity. Yeah, I'm really glad that we could share some knowledge and just be here. Yeah, thank you for the opportunity. Thank you too. So I will publish the video soon. I will also start a thread on community.jenkins.io. So we have a Discourse channel and I will put it there. So everyone is welcome to join the discussion and share feedback. So thanks again, Martin and Katarina, and everyone who is working on this system. So see you at the next Jenkins online meetup. We don't know when, but hopefully soon, because we have four Google Summer of Code projects started.
So soon there will be more topics to share. And don't forget that in one week we will also have cdCon, and there will be a lot of talks about Jenkins there as well. So if you're interested, please join and meet the community there. There will also be a Jenkins community day, but I'm not sure whether it will be fully broadcast; we will post the news as soon as we know. So thanks everyone, and I'll stop the broadcast. Thank you, bye.