Hello, welcome to the Jenkins online meetup. Today is August 6th, and we have a presentation by Olivier about how the Jenkins project builds and delivers Jenkins in the cloud. If it's your first time at the Jenkins online meetup: basically, it's an event organized by Jenkins contributors, where we focus on presenting different things about what happens in the project and in the community, such as new features, developer tools, ongoing development, and previews of things you may see soon as users. The main goal is to actually show something, talk, and we encourage you to participate in the discussion, so you can use the Zoom Q&A. After the call we will also have a chat, so we invite everyone to stay. And again, the scope is anything about Jenkins. We also run a series of meetups about Jenkins and Kubernetes, and actually today's meetup is also about Jenkins and Kubernetes, though it's a bit specific because it's our own project's case study: the release infrastructure Olivier will present today runs on Kubernetes, and there are a lot of things to learn from it, so we're glad to present such a case study. In general, we're interested in any talk about integrations between Jenkins and Kubernetes: plugins, features, Helm charts, tools like the operator, or integrations with other CNCF ecosystem projects. If you want to present, let us know, because we're always looking for speakers. Regarding our current meetup schedule: we have a meetup today, and the next confirmed meetup is August 17th; it will be about GitHub App authentication and Checks API integration. We also plan to have GSoC project demos at the end of August. We may organize additional meetups in between, because we usually announce meetups one week in advance, so there might be more meetups in August. If you're interested in presenting, just let us know; there is a page which describes the process. We have an open online CFP process, so you can just prepare a Google Doc, submit it to the mailing list, and basically that's it. Speaking of today's meetup: again, we encourage you to stay for the discussion and ask questions. If you want to ask a question, please use the Zoom Q&A; it's a button in your control panel. If you ask a question, we will either answer it asynchronously or the speaker will answer after the main part of the presentation. And again, you're welcome to stay until then, and we will have an open discussion: we will stop the recording and can discuss any topics related to the presentation or to Jenkins after that. If you want to ask questions after the presentation, for example if you're watching the video on YouTube, then please use the chat, the mailing lists, or the feedback form which we will be sharing in the chat soon. OK, that's it from me. Thanks to everyone for participating in the meetup, and I'll hand over to Olivier for the main part of the presentation. Thanks, Alex. I'll share my screen now. I guess I can... Zoom is doing something interesting here. Interestingly enough, it doesn't allow me to share a window; it only allows me to share a screen. So you're going to share your screen, right? I can share the full screen, but I cannot share one specific window; it's unsupported in Zoom. OK, then let's do it in a different way. Sorry for that, I'm just reorganizing my windows so I can share the right one. Can you see my screen? Yes, we can. So if there is anything wrong with the slides, feel free to tell me.
So hi everybody, thanks for attending this session about how the Jenkins project is releasing Jenkins. I'm the Jenkins infrastructure officer for the Jenkins project, and I have been driving this project. The presentation is split into six parts. In the first part I'm going to talk about what the Jenkins infrastructure project is. Then I'll clarify a few things, and then we'll go into more detail about what the infrastructure looks like for the release environment, how we configure the release environment, what the process is to trigger a release, and then we'll conclude the presentation. So first, it's important to explain, if you're not familiar with it, what the Jenkins infrastructure project is. Basically, we are a Jenkins sub-project. The Jenkins project, like any organization, relies on infrastructure and provides services like the main website, the ticketing system, and a lot of different things, and in this case we are also working on the release. It's an open infrastructure project, so we really invite different contributors to participate. As you can see on this slide, people are spread across multiple time zones and locations, which forces us to put some practices in place, like asynchronous communication and relying on testing. There are no hard rules about what belongs in the Jenkins infrastructure project, but there is an organization on GitHub called jenkins-infra where you can find a lot of different repositories that describe multiple things: the main jenkins.io website, the Puppet code that we use to manage infrastructure, the specific configuration for the Kubernetes clusters that we manage, and so on. If you are looking for something in particular, that's usually the best place to go; otherwise, feel free to just ask on IRC. You should find all the information on the jenkins.io website. Before we continue, I would like to clarify a few things. These are not big revelations, just making sure we talk about the same thing. The first one is that during the presentation I'll be using the term Jenkins master. I know that we are changing this and will use a different name, but the results of that renaming are not settled yet, so I've been wondering whether I should use a different term or the one under discussion; I just prefer for today to stick to the Jenkins master name so we all understand what it means. Another thing is that during the presentation I'm going to talk about different Jenkins instances; the Jenkins project manages a few of them, for multiple reasons. If we go from the left, we have trusted.ci. That Jenkins instance runs in a private network, only available to a few maintainers, and we use it to build the main Jenkins website, the update centers, or any other job where we want to be sure that only a small group of people can access it. This instance runs on a virtual machine. Then we have cert.ci. This instance is used by the security team for security work; I won't say too much about that one. We then have infra.ci, which I will talk about during this presentation; it's mainly used to manage the Kubernetes clusters and to run some infrastructure tasks on a Kubernetes cluster. In the middle, we have ci.jenkins.io.
ci.jenkins.io is how we usually refer to it on the mailing lists. This one is the major and biggest instance that we have. We use it to test the core, plugins, libraries, whatever. This is probably the one you already saw if you contributed somehow to the Jenkins project. And finally, the one that I will mainly talk about today is release.ci.jenkins.io. This is the Jenkins instance used for the releases. It runs inside a VPN, so if you have access to the VPN you can see the job results, but only a few people can trigger a job there. Both release.ci and infra.ci are running and configured on a Kubernetes cluster. Another thing that I would like to clarify here is the previous state of the release process. Before I go into the different components, I just want to say that you may have seen that we now manage all the releases from the new environment. Previously, we needed Kohsuke, the founder of the project, to trigger a new release, which obviously put strong dependencies on him, his machines, and his house. It was quite hard to contribute: even though the scripts used for the release were public, we had no access to the machine, so we couldn't help him debug a release or really understand what was happening on his machine. At least, I mean, one good thing is it was security by obscurity: because nobody had access to the machine, we assumed that it was working correctly. So we created a JIRA ticket on our ticketing system, INFRA-910, and the idea was to do all the work needed to move from a machine running in a basement to cloud services, where the Jenkins community controls the release environment and can trigger a new release whenever needed. That's basically how the work started, and it involved not only configuring Jenkins but also providing the infrastructure, the services, and the process to do the full release. This brings benefits: everybody can audit it, because we try to be as open as possible, and everybody can contribute to all the parts, while we still make sure that only a few people can trigger a release. But obviously, because we moved from a machine running in a basement to the cloud, it was also harder to secure. This slide is a really big overview of the different components, and I will now go into the details for each of them. So now, let's talk a little bit about the infrastructure. Throughout this presentation you are going to find links like this one, where I specify the GitHub organization and the Git repository. In this case, everything related to the infrastructure is located in the jenkins-infra/azure repository. Basically, we use Terraform to describe resources running on Azure, and everything was designed with Kubernetes as the backend. For our infrastructure we need a few resources, and as I said, everything is defined as Terraform code: a Kubernetes cluster with Windows auto-scaling nodes and Linux auto-scaling nodes, network configuration, DNS. Most of these things are managed in Azure. We also rely on Azure Key Vault to store the code signing certificate that we need in the release process, and we use it to store the keys that we use to decrypt our secrets, and so on.
So there's no real rocket science there: we need a resource, we describe it. Once we commit the code to the Git repository, it's tested on ci.jenkins.io, so we have lint and validation checks. And then what we used to do is, once the PR was validated, we merged it and it was deployed from trusted.ci. The reality today is that, for cost reasons, we had to stop building the infrastructure for each PR. Basically, each time we created a new PR, we were deploying everything from scratch, so we could test that we were able to upgrade from a point A to a point B, and that we were able to provision everything from scratch. Obviously, that was costly. And then we also had to disable that job on trusted.ci because we lost faith in our code, which, I mean, I'm sad to say, but that's the current state. I learned a few things working on that part. First, testing Terraform is quite difficult. It requires you to constantly test your Terraform code: Terraform relies on third-party APIs that evolve, and we deploy services that evolve on cloud vendors, so we have to constantly test to be sure that everything still works. The best way to test your Terraform code is to provision resources, but at the same time that's quite expensive. And finally, automating terraform apply can be risky, because if you have state in your resources, you don't want to risk accidentally deleting a resource; you want to review what is changing every time. That's one of the things I learned working on this: some resources can be automated, and others not. Finally, there are tools that were created to solve these issues. If I had to work on this again, and if I have the time, there are a few interesting tools you should look at: Conftest, Terratest, or tfsec. I only saw some demos; I haven't had the time to really deep dive into those tools, but this is definitely something we are missing in the way we manage our infrastructure. Another thing, more specifically about the Kubernetes cluster, the AKS cluster that we are using: something I learned the hard way is that a lot of features need to be configured at creation time, so you really have to know in advance what you're going to need; otherwise, you just end up recreating the cluster again and again. At least the good thing is that you end up testing your Terraform code that way, but on the other hand, it's not the best use of your time. Otherwise, again, as I said, it's not rocket science, but if you are interested in going deeper into the Terraform code we are using, everything is located in that GitHub repository. The usual loop to contribute, or to understand how to participate in the Jenkins infra project, starts with discussing; we use Jira and the mailing lists, mainly. The first step is to identify whether there is a need, and if there is one, you're probably the best person to implement the change. Once you did that implementation, you test it on ci.jenkins.io. If the tests are green and someone can review the PR, then we merge the PR, we deploy, we validate it, and then we improve, and the loop continues again and again. The good thing about working on infrastructure is that you usually have to do the work once, and then you can just reuse it.
Then, once you have your infrastructure in place, you have to configure and deploy the services running on it. Either you rely on cloud services or you maintain them yourself; in our case we have both, because we are using Kubernetes and we rely on it to deploy our applications, and obviously we have Jenkins and a few other services. For the release environment, one of the first things we had to deploy was a VPN, because the Jenkins project didn't have one, and if we wanted to move away from a machine running in a basement, we had to be sure that only a group of people were able to access our trusted services. We also use LDAP to define groups and manage access. We have the Jenkins instance called release.ci. We publish our artifacts to our Maven repository, repo.jenkins-ci.org. And finally, all packages can be downloaded from pkg.jenkins.io. All of those are public services except release.ci. There are three main Git repositories involved here. The first one is about OpenVPN: it allows us to build the OpenVPN Docker image, but also to configure it and to specify who has VPN access. We use both LDAP authentication and certificates for the VPN, so someone needs to sign your certificate before you can connect to the VPN. We have the jenkins-infra/charts repository, where we define all the stuff related to our Kubernetes cluster: how we configure Jenkins, how we configure Nexus as a local Maven repository cache, and so on; we have a bunch of services running on the Kubernetes cluster. And we also use Puppet to manage the VPN machine. The main reason is that the VPN machine needs access to different private networks, and I wanted to be sure that we had network interfaces in different locations; it was just easier to do that on a virtual machine. Because, at least in my case, I'm more interested in sharing how we manage the Kubernetes side, I'll just highlight the different tools used to deploy and configure release.ci.jenkins.io. We mainly use Helm as the package manager for Kubernetes. If you're not familiar with it: Kubernetes is a way to describe resources, so you just say which resources you need, provide YAML files, and Kubernetes will convert that into real resources, creating the volumes, the network interfaces, everything your application needs. A Helm package is a way to describe all the resources for a specific application; for example, there is a Helm chart for Jenkins. Then we have Helmfile. Helmfile is a tool that allows you to deploy and manage your Helm releases on your cluster: you run helmfile apply, and it ensures that the right configuration is used, with the right secrets and so on. We use SOPS; SOPS is a small tool to encrypt and decrypt secrets, and I'll talk a little more about it in a few slides. And then we use another tool that I wrote, updatecli, a CLI to automatically update files based on some strategies.
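To make the Helmfile part concrete, here is a minimal sketch of what a Helmfile entry for a Jenkins deployment can look like; the release name, chart, and file paths are illustrative, not our actual configuration:

```yaml
# Illustrative Helmfile sketch, not the actual jenkins-infra configuration.
repositories:
  - name: stable
    url: https://charts.helm.sh/stable

releases:
  - name: release-jenkins            # hypothetical release name
    namespace: release
    chart: stable/jenkins            # reuse the upstream chart
    values:
      - values/jenkins.yaml          # plain configuration overrides
    secrets:
      - secrets/jenkins.yaml         # SOPS-encrypted, decrypted on the fly
```

Running helmfile apply against a file like this reconciles the cluster with whatever is declared here, decrypting the secrets files as part of the process.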
I will not show you every specific configuration, because if you are interested in that level of detail, it's just better to go to that Git repository and learn the way we are doing it, since there is a lot of code; but let me give you some guidelines about what to find where. The charts directory contains our custom charts. The helmfile directory contains the Helmfile configuration files: basically we have one per application, and that's where we define that we need this application deployed in that namespace, with that configuration, using those secrets. The clusters directory contains the definition of our main cluster. At the moment we have one; we may deploy more clusters, it depends on whether we have the time to work on that, but for now we have one. We define configuration secrets. We have a Jenkinsfile that runs on Kubernetes, where we define the different stages that we apply when we want to deploy our applications. We have the pod templates; I'll talk a little bit about those in a few slides as well, but to give you a big overview, the pod templates define the environment where your Jenkinsfile is going to run: if you need a container with Helm, that's the place where you define that. And finally, updatecli is the directory where we specify all the strategies that we want to apply to update files; I'll cover that in a moment. If we go back to the secrets: we are using SOPS. SOPS is a tool from Mozilla that allows you to store your secrets encrypted in a Git repository, which makes it quite easy to audit. You can easily roll back a secret to a previous version. You don't have to maintain a third-party service. You can have different keys per file, which I find really useful in the case of the Jenkins project, where different people have access to different services: I can easily delegate the management of, let's say, the release environment to a subset of people, while other maintainers have access to other applications. And finally, we can combine multiple ways to decrypt and re-encrypt the secrets. In the case of the release project, we use Azure Key Vault to decrypt those files, and that Key Vault is only accessible from a specific private network. We also use the GPG keys of our contributors, so if you are granted access to that Git repository, we basically need your GPG key. At least in the case of the secrets, it has been working quite well. We try not to put everything in the Git repository: on purpose, we decided to keep the code signing certificate elsewhere, and the same for the private GPG key. Those are stored in Azure Key Vault with limited access, so access to the Git repository alone is not enough to get them; the idea is just not to put everything in one location. The other tool we are using is updatecli, which applies update strategies. We define a source and a rule for how to update a file. In one case, we track a Docker digest: we query the Docker registry for the jenkins/jenkins image (a JDK 11 tag) and fetch its digest, and then we compare it with the target, a file where we check the key jenkins.master.imageTag. If the digest does not match the one we retrieved, we change the file, commit the change, and open a pull request, so someone can review the change and we move forward. We support different kinds of sources: we can fetch a Docker digest, we can fetch versions from GitHub releases; it's just a small CLI, so you can define different conditions. The second example is for the weekly releases: we fetch a version from the Maven repository, we test that a Docker image exists with that specific version, and if that's the case, we update the target. That's what we use to continuously update the Jenkins version, the pod templates, the images used by the pod templates, or any other YAML configuration.
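As a rough illustration, an updatecli manifest in that spirit could look like the sketch below; the field names follow my reading of the updatecli schema and the paths are invented, so treat it as indicative and check the updatecli documentation for the exact syntax:

```yaml
# Indicative updatecli manifest sketch; field names are approximate,
# consult the updatecli documentation for the exact schema.
sources:
  jenkinsVersion:
    kind: maven                      # fetch the latest weekly version from Maven
    spec:
      url: repo.jenkins-ci.org
      repository: releases
      groupID: org.jenkins-ci.main
      artifactID: jenkins-war
conditions:
  imageExists:
    kind: dockerImage                # only update if a matching Docker image exists
    spec:
      image: jenkins/jenkins
targets:
  updateChartValues:
    kind: yaml                       # rewrite one key inside a YAML file
    spec:
      file: charts/jenkins/values.yaml   # hypothetical path
      key: jenkins.master.imageTag
```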
This is what the job looks like when we deploy and configure the Kubernetes cluster. It's triggered every 30 minutes. We retrieve the secrets, we check whether we can update any files, and if we can, we update them, do the Git work, and run some tests on the files. Then we run helmfile apply, and if needed, Helmfile ensures that the Kubernetes cluster runs the latest version. This runs every 30 minutes, and it has been working quite well for us. Something specific to our environment is the way we manage the Jenkins master. It involves three things: the Helm chart, the plugins, and the JCasC (Configuration as Code) configuration. For the Helm chart, we made a custom Jenkins chart, but it has a strong dependency on the upstream one: we try to reuse as much as possible from the stable/jenkins Helm chart, and we just override the default values for our environment and add some specific Kubernetes resources in our case. We have a third Helm chart dedicated to the Jenkins release environment, but it's getting less and less important over time, because the way to configure Jenkins on Kubernetes evolves, and most of the configuration is now done inside JCasC; that's why that chart is becoming less important. The two most important plugins in our configuration are the Kubernetes plugin, because it allows us to run our workload on Kubernetes, and Configuration as Code, which is a central piece of our infrastructure and really allows more contributors to understand what we are doing and how we are doing it: if someone identifies something that can be improved, they just open a PR, and it will be applied the next time our job runs. Inside the JCasC configuration you can find the cloud configuration for Kubernetes. In our case, we specifically override the default JNLP pod, because we have Linux and Windows containers and we want to be sure that those run on specific nodes, so we specify the node affinity that needs to be used for every container in our cluster. We have the configuration and the credentials for the Jenkins master. And then we define the jobs using Job DSL; on the release environment we mainly have the core and the remoting components. I'll give you a demo at the end of the presentation if I have the time.
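As a rough sketch of what that JCasC cloud configuration can look like (the names, namespace, and selector below are invented for the example, not our real values):

```yaml
# Hypothetical JCasC excerpt: a Kubernetes cloud whose default agent pod is
# pinned to Linux nodes. Values are illustrative only.
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        namespace: "release"
        templates:
          - name: "default"
            nodeSelector: "kubernetes.io/os=linux"    # keep this pod on Linux nodes
            containers:
              - name: "jnlp"
                image: "jenkins/inbound-agent:latest" # agent-to-master connector
```

Because this file lives in the Git repository, anyone can propose a change to the cloud configuration through a pull request, which is exactly the contribution model described above.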
Regarding the way we manage the Jenkins agents: something important to understand when you use Jenkins agents on Kubernetes is that the environment is defined using a pod template. A pod template on Kubernetes is a resource where you say "my pod template contains those containers"; you can have one or several containers. The way Jenkins agents work, you always have the JNLP container, which establishes the connection between the agent and the master, and then you add as many containers as you need, depending on what you want to use in your Jenkinsfile. There are different ways to specify that pod template. It can be defined inside the global Jenkins configuration; personally, I prefer not to put any configuration there except for the JNLP container, because if for some reason we have to configure the JNLP connector in a specific way for the release environment, like the node affinity, it's defined in the global configuration. Otherwise, everything is defined either in a shared library or, in the case of the release project, close to the Jenkinsfile. Close to the Jenkinsfile, you can either inject snippets of your YAML pod templates, or you can reference a pod template file next to the Jenkinsfile, which is definitely the option I prefer personally, because it allows you to run lint tests on your pod templates and to use any tool that manipulates YAML. That's why, in the Git repositories that we run on Kubernetes, you always see a Jenkinsfile and a pod-templates.yaml, or multiple pod templates if that makes sense. Regarding containers, something important to keep in mind is that most of the time, if you use containers that you haven't built yourself, they will probably not use the user ID 1000; it's usually better to use the same UID as the Jenkins agent. It's also usually a good idea to override the default command, because we want to be sure that the container is not stopped after a few seconds: if you use a container just for a CLI inside it, you will probably have to override the command so that it sleeps for, let's say, 99 days or whatever you need. And I really like to use persistent volumes when it makes sense, because that's something that's easy to manage on Kubernetes. Again, it really depends on the situation, but in the case of the release environment, we have a specific website that allows you to download files, and that website uses the same volume that is also mounted in the CI environment, so we just have to copy or move a file into the right directory and it's immediately available to all the people that rely on that website. And finally, one of the most important things to keep in mind when you work with Jenkins agents is to use pod limits. Pod limits allow you to have an auto-scaling cluster, but on the other side they can be really frustrating: if the limit is too low, you are going to face really weird issues, and if the limit is too high, your pod will never be scheduled on any node, or you just keep provisioning new nodes again and again. The sooner you start using pod limits, the better, because you're going to face interesting issues there. On a global basis, if I can let Kubernetes manage a resource instead of Jenkins, I just try to let Kubernetes manage it.
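To illustrate the points above, here is a minimal, hypothetical pod-templates.yaml; the tool image, UID, and limits are made up for the example:

```yaml
# Hypothetical pod-templates.yaml next to a Jenkinsfile. The jnlp container
# connects the agent to the master; the second container carries the tooling.
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: jnlp
      image: jenkins/inbound-agent:latest
    - name: helm
      image: alpine/helm:latest       # tool image, made up for the example
      command: ["sleep"]              # override the entrypoint so the container
      args: ["99d"]                   # stays alive for the whole build
      securityContext:
        runAsUser: 1000               # match the Jenkins agent UID
      resources:
        requests:
          cpu: 500m
          memory: 256Mi
        limits:
          cpu: "1"
          memory: 512Mi
```

Keeping this file next to the Jenkinsfile means it can be linted and reviewed like any other YAML in the repository.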
So, what I wish I knew before working on this part. First, keep in mind that persistent volumes, if you let Kubernetes manage them for you, may not really be persistent: if the Kubernetes cluster dies, you may just lose the volume. So if you really need persistence, it's better to manage the volume outside Kubernetes, let's say using Terraform. Another really great tool that I'm a big fan of is cert-manager, because cert-manager allows you to generate SSL certificates automatically, and when you work with a lot of private services that do not have public access, it's just a game changer: you don't have to connect to a service and manually accept an SSL certificate. Another important point is about storing your secrets in Git: just be ready to rotate them. It happens, and pushing your secrets in an unencrypted way is really the best way to lose a lot of time, and it can be quite easy to do. I mean, I heard stories about people who just accidentally pushed their secrets unencrypted. So be ready, if you put your secrets in a Git repository, to change them rapidly, because yeah, it happens. Personally, I remember one time I pushed all the secrets, and when I reviewed the PR, I realized that my files were not encrypted. While it took me a few minutes to change the secrets and verify that they could no longer be used, it took me days to verify that nobody had been able to abuse them. What I also found interesting is that it took just a few minutes from the moment I pushed the secrets publicly for third-party vendors to detect it: I received a lot of emails from companies saying "you should use our tool, you shouldn't push GitHub tokens publicly". So I put a few things in place to mitigate that. First, secrets are now not pushed to public repositories; there is a private repository where only a few people have access. And I always try to double check that I don't have any secrets leaking somewhere. But yeah, still, it happened; in my case, the good thing is nobody was able to abuse it, but I lost days on that. The Kubernetes auto-scaling feature is really powerful, but again, because it needs pod limits, you're going to face really weird issues, and the sooner you start to work with pod limits and auto-scaling, the better. I remember in my case, I first worked without pod limits, and when I enabled them I just lost days understanding why my job was not running anymore, and what was wrong with that file that could not be written, and so on. Sometimes it's really weird. Finally, to debug pod limit issues, have some monitoring in place that allows you to really understand how you are using containers and what's happening inside them, on the Jenkins master and on the Jenkins agents at the same time, and to easily correlate that information; it makes it really easy to realize that you have a pod limit issue and not something else. In my case, Prometheus and Grafana were really easy to deploy, so that was a good surprise during this process.
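Coming back to the cert-manager point, a minimal Certificate resource looks roughly like this; the names, domain, and issuer below are invented for the example:

```yaml
# Hypothetical cert-manager Certificate: cert-manager issues and renews a TLS
# certificate and stores it in the named Kubernetes secret.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: release-ci-tls               # illustrative name
  namespace: release
spec:
  secretName: release-ci-tls         # secret that will hold the key pair
  dnsNames:
    - release.ci.example.org         # illustrative domain
  issuerRef:
    name: letsencrypt-prod           # hypothetical ClusterIssuer
    kind: ClusterIssuer
```

Once applied, cert-manager keeps the certificate renewed without anyone having to touch the private services directly.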
As for how to contribute to this part: again, you have a few links across the slides, the jenkins-infra/azure repository for the infrastructure, the jenkins-infra/charts repository for the Kubernetes code, the OpenVPN repository, and you should be able to find all the information by yourself if you are more interested in this specifically. Now, I don't have a lot of time left, so I'll try to speed up a little bit and quickly look at what the release process looks like today. Here we involve a few more tools: Bash, Maven, Python, Makefiles. Before we continue, I would like to clarify that the Jenkins project has multiple release types. We have the weekly releases: those are generated from the master branch of the Jenkins core repository, every week, triggered by a cron job. We have the LTS releases: the version is more carefully selected, and we tend to freeze the code but also the scripts used to generate the release; that happens once a month-ish, but there are no strict rules there. And then we have the security weekly and security LTS releases. The main difference compared to the regular releases is that they involve security work, and most of the work is done in private until the last moment, when we publish everything; obviously, this increases the complexity of the release environment, because we need a promotion mechanism. The release process is split into two different jobs. The first one is the Jenkins release, where we fetch the code and publish the result to the Maven repository. So we start here: the release code is defined in jenkins-infra/release, where we define the Jenkinsfile with the different stages, the pod templates describing the environment that runs the Jenkinsfile, and a few scripts. We fetch the code from jenkinsci/jenkins, and we publish the artifacts to the GitHub repository and to Maven. Then we trigger a second job, the packaging, also defined in jenkins-infra/release: we have a specific Jenkinsfile for the packaging, specific pod templates for Linux and Windows packaging, and a bunch of scripts; we fetch the packaging scripts from jenkinsci/packaging, and we publish the result on pkg.jenkins.io. The scripting involved here is varied, and it evolved over the time I worked on it. Obviously, I started working in the Jenkinsfile, then realized that I had a lot of complexity there and it was not easy to test. So I started using Makefiles. Make is a tool that I really love, because it helps you abstract common complexity: instead of having a really long docker build command with a lot of parameters, you just run make build, which is easy to read, and it's easy to understand that in your case make build means building a Docker image. But the same thing that happened with the Jenkinsfile happened after a while with the Makefile: it accumulated quite a lot of complexity and became hard to read. That was the moment we started working on the Bash scripts, and those Bash scripts became the central piece of the release environment: I could use the same script from my Jenkinsfile but also from my machine, so I could test exactly the same thing. The Bash scripts hold most of the complexity, obviously, but they also became the place where I define all the environment variables and the default values, mainly because it can be easy to accidentally override the value of an environment variable.
So I prefer to define everything there, and there is a convention to not define variables elsewhere unless it's really needed. Then I turned to specific scripts: I wrote Python scripts when I had to manipulate YAML, JSON, XML, or APIs, but I prefer to keep the Python scripts as small as possible and always call them from the Bash scripts. And obviously, we use the Maven release plugin for this. The release workflow looks like this: we check out the Git repository that we want to build; we plan, meaning we visualize what we are going to do, like which Git repository we will push the artifacts to; we retrieve the code signing certificate from Azure Key Vault; we do the same for the GPG key; then we run mvn release:prepare; we commit from the Jenkinsfile (we don't let the Maven release plugin commit, because we want to be able to use a different Git repository than the one defined in the pom.xml, for those who know what the Maven release plugin does); we stage the release; and then we verify the GPG signature and the code signing certificate. Again, everything is available in the Git repositories. The packaging workflow is similar: we fetch the GPG key and the code signing certificate, we download the jenkins.war from the Maven repository where the previous job published it, and then we build and publish packages for each distribution. If needed, we do a promotion, and then we force a synchronization of the mirrors. What I wish I knew before working on this part: first, the Maven release plugin seemed like black magic to me in the past, and it is still black magic to me. Second, I try to always use the right tool. In my case, I try to not overcomplicate things: I like to use Bash, I like to use Python, I like to use the Jenkinsfile in some cases, and I always try to evaluate whether I'm still doing the right thing. That's why I put complexity in the Jenkinsfile, then moved that complexity to the Makefile, then realized that it was better to maintain it in a Bash script, and so on. Try to always define an environment variable location strategy: define your environment variables in one location. In my case, I decided that the source of truth would be the Bash script. Otherwise, it's easy to override the value in multiple locations, and it can be quite hard to review what you're doing; it's really important to decide where you put those environment variables, because it will affect all the scripts you use elsewhere. And finally, try to keep in mind what your environment looks like when you work on scripting: it doesn't make sense to write a Python script if your environment will not have Python, and on the other side, it may make sense to add Python to your environment; again, it really depends on the situation. What I propose to do now is to share a few things, so let me show you what the release environment looks like. We have two directories, one for the components, one for the core. When we go into the core, I hope you can read my screen. So yep, I'm connected; I went into the core directory. There we have two directories, for the stable and for the weekly releases, and otherwise we have the core packaging: a generic job for the core packaging, and the same for the release. If we go to the weekly, that job builds from the master branch; basically, what it does is trigger the generic release job
and the generic packaging job. And if we look here, we'll see that we triggered a build of the branch, build number 28. From here, let's open Blue Ocean, because I prefer the visualization there; you can see the different stages. In the case of a security release, the idea is much the same, except that this time, for a security or LTS release, we create a branch where we freeze the parameters. So for the LTS, we can just go there and trigger the job, and then we are shown a bunch of parameters: in this case, we want to build a stable release, we can provide a few parameters, and once we push the button, the release starts. I'll just go back to my presentation now. So, to conclude this presentation: this is only the beginning. Now that the community has control of the release process, it's our role to make sure that it's maintainable and that we can regularly update our environments. We are still looking for people who can take ownership of the different components: we have to maintain the infrastructure and the Terraform code used for the infrastructure; we have to be sure that the way we configure our services is reliable and that people can contribute to it; and finally, we have to be sure that we control the process and how we do the release. Thank you. Thank you, Olivier. Again, if you have any questions, please ask them in the Zoom Q&A. One question we received is: for scalability, is it possible to have Jenkins running on premises and also run build agents in the cloud? So it's not directly related to the talk, but yes. So, if you run Jenkins on premises... Yeah, and run the agents in the cloud. And Mark Waite here: I do it all the time. I run on premises and use cloud agents because I don't want to host all the infrastructure in my local environment. So yeah, it works great. Yeah. So actually, it would be interesting to take a look at the delivery pipelines from the code side. Could you briefly present what the release infrastructure pipelines and ecosystem look like from the GitHub point of view? I think it would be a great addition. I'm not sure I understand what you mean by the delivery: how we publish, or which delivery pipeline? Basically the same pipelines as you presented, and how they are delivered to the instance. So, just to be sure that I understand correctly, let's go, for example, to jenkins-infra/release, to the packaging job we have here. You are not on screen sharing, by the way. Oh, sorry. Can you see my screen? So basically, for example, in the case of the packaging, we define in the Jenkinsfile that we want to use the Kubernetes cluster. Obviously, the Kubernetes cluster can be running wherever you want; it's just that by default it's easier if your Jenkins is running on the same Kubernetes cluster, because by default it has access, but otherwise you can provide whatever configuration you want. And then you specify a pod template; in this case, a YAML file with the pod template definition. And inside your Jenkinsfile you can say, for example... so by default here we are on Linux, but we have a job where we need to run on Windows, and in this case we just say: use the pod template definition provided in package-windows.
And if we look at what that pod template looks like, here is another directory, pod templates, and if you look at package-linux, we just define which image we want to use, what the limits are, and so on. Does that answer the question? Thank you. Okay. Since we don't have more questions in the queue, I suggest that we stop the recording and continue with the live discussion. And thanks again for the presentation. We shared the feedback link in the chat, and we will appreciate your feedback so that we can make our next meetups better and provide more relevant information. The next meetup is likely to happen in two weeks, so see you there, and I'll stop the recording now.