My name is Vibhav. I work at Red Hat on the OpenShift team. In OpenShift, my teammates and I maintain the Jenkins ecosystem of plugins and also the Jenkins image for OpenShift. This is something we give to our customers so that they can use Jenkins alongside OpenShift. So that's me. Let's get started with the topic at hand. Today's topic is the Jenkins operator on OpenShift, and this talk is, in a way, a sequel to the previous talk on the Jenkins operator given by Thomas, in which he covered the Jenkins operator in depth: how it works, the CRDs, and everything. This talk will cover what comes after that, what new things are on the roadmap we have discussed, and also how this operator integrates with OpenShift and why it is helpful in the OpenShift environment. Let's start with a little bit of a story first. I think everyone here knows what Jenkins is. In Jenkins we have pipelines, and we can create a lot of pipelines. Let me go into present mode. So we can spin up a lot of pipelines, and once they are spun up, we can configure them however we want: run them periodically, or trigger them whenever we want. Alongside all the stuff Jenkins does, we also need to do a lot of housekeeping with it. We need to see to it that backup and restore are happening, and that certain day-2-level jobs get done. Usually these jobs are done by administrators who are proficient in Jenkins, who know Jenkins in and out; they are the ones who usually do these tasks. Now, if the admin is not there, or there is a requirement for these day-2-level tasks to be done on a continuous basis, what do we do then? Let's just understand who this admin is. This admin is someone who is operating Jenkins and knows how to operate Jenkins.
And that is basically what the Jenkins operator for Kubernetes is. In the Kubernetes environment, where everything is based on pods and containers and Jenkins runs in pods, it is possible for us to create something that can manage those Jenkins instances themselves and do the hand-holding for backup and recovery and configuration, doing these jobs periodically by itself. That is the idea of the operator. Now, how does this operator achieve this? The operator is basically a piece of code which patches all these things together and makes the entire workflow happen. For example, if I need backup and restore, there needs to be code written that does backup and restore for Jenkins in a specific way: for example, it needs to back up all the plugins that are in use under var/lib/jenkins/plugins; it will take all that out and handle it by itself. That is just the Jenkins operator. If you want a more in-depth understanding of what operators are, Avni Sharma has done a brilliant Operators 101 talk, which goes through the basics of operators and covers them. But in this talk, we will cover where we have come since last time and what we as OpenShift engineers are trying to bring to the table. So let's understand how the Jenkins operator on OpenShift came along. The Jenkins operator on OpenShift is not an entirely new concept, because Jenkins, the open source version of it, has been provided to OpenShift users for some time. Here you can see a template which was created for Jenkins. It creates certain resources which are necessary for the configuration of Jenkins, including the image, and these are set up so that OpenShift-specific processes can be done through Jenkins. Now what do I mean by that?
So, as you saw with the template over there: why do we even have Jenkins in OpenShift? Basically, when we need to run entire build pipelines in Kubernetes itself, Jenkins is one tool which has a level of extensibility that no other tool has. Jenkins has a Kubernetes plugin, which we have used to create the OpenShift Sync and OpenShift Client plugins. These plugins integrate Jenkins and OpenShift and help to create pipelines. I'll show you an example of a BuildConfig and a Build in a while, when I'm showing the operator. But just to get an understanding: the way Jenkins integrates with OpenShift is through Builds and BuildConfigs, where a BuildConfig is basically the configuration of a Jenkins pipeline when the BuildConfig is configured with the JenkinsPipeline strategy. We also provide some extra images which are used in pod templates for the Kubernetes plugin. These images are for Node.js and Maven, and they can be used as a base for doing the builds; they basically act as agents. Users can go ahead and create their own agents based on the base images that we give in the Jenkins repo. The Jenkins repo is situated over here, at github.com/openshift/jenkins. Users can go here and see the agent images that we provide; these are the ones used in the pod templates. After that, let's talk about why we are moving to the Jenkins operator. Based on what I've said, you might have gotten an understanding that Jenkins is already there in OpenShift, so why the operator? Well, Tekton coming through and the deprecation of Jenkins on OpenShift doesn't mean that people will stop using Jenkins.
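To make the Build/BuildConfig integration concrete, here is a sketch of a BuildConfig using the JenkinsPipeline strategy, which is the shape OpenShift creates when it finds a Jenkinsfile. The name and Git URI are illustrative, not taken from the demo:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: nodejs-sample-pipeline   # illustrative name
spec:
  source:
    type: Git
    git:
      uri: https://github.com/openshift/nodejs-ex.git   # example repo containing a Jenkinsfile
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile   # path of the Jenkinsfile inside the repo
```

Each time this BuildConfig is triggered, OpenShift creates a Build, and the OpenShift Sync plugin mirrors it into a Jenkins pipeline run.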
A lot of people will continue using Jenkins because they have a lot of pipelines that they've had for a long time, and they would like to either migrate to a new Jenkins solution or just keep things as they are. So for that, we are helping out with the Jenkins operator, working together with Thomas, the maintainer of the Jenkins operator, to create a more enterprise-ready Jenkins operator. The deprecation and the need for a new solution have brought us to the Jenkins operator on OpenShift. Apart from that, we also need to manage Jenkins better and do things we couldn't do before, like having immutable environments which can be replicated, so that the user doesn't see changes all the time when we update things. That is how the vision of OpenShift and Jenkins has turned towards the Jenkins operator. In the demo, I will go through all of this again; this is just to get an idea right now. Let's talk about the operator itself, what it does, and look at some examples. In the last talk on the Jenkins operator given by Thomas, he went through the entire ecosystem of the Operator Framework, custom resource definitions, and custom resources. To get a basic idea of what they are: custom resource definitions are how you define your own Kubernetes resources. If you had to create a new kind of Kubernetes resource, the custom resource definition would allow you to define one, and a custom resource is an instance of that custom resource definition. That resource is backed by your operator, which is basically your Kubernetes controller.
And the logic that you put in the operator, or controller, defines how the operator reacts when it sees an instance of this resource. As a simple example of a CRD, let's look at the Jenkins CRD itself. Can you see the terminal on the right? Should I increase the font? Yeah, it would be great to make it a bit bigger. Okay, cool. So let's get a glimpse of what a CRD looks like. A CRD is basically an API endpoint that we define beforehand, so that the operator can latch onto it and listen for requests on that API endpoint; that's one way you can see it. We can see that these ones have already been created by us. To create this, go to the jenkinsci/kubernetes-operator repo, which is our upstream. Go into deploy, then into crds, and choose one of these CRDs; this is the one. When we do a kubectl create -f on this, we basically turn this YAML into a resource. That is what is created over here; this is what jenkins.jenkins.io is. So let's go back. That is the CRD, and to have an example of a CR, let me do something similar. A CR is basically an instance of the CRD. Considering we are on OpenShift right now, we will look at an OpenShift-specific CR, because OpenShift, if you might not know, is like a distro of Kubernetes by Red Hat, and it has some increased security policies and such, because of which there are certain configurations that need to be kept in mind. The image that will run with this configuration, which you see here, is quay.io/openshift/origin-jenkins:latest, and this is readily available on Quay; this is not something anyone has to pay for.
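The create-and-verify step shown in the terminal looks roughly like the following. The exact file path inside the upstream repo has moved between releases, so treat it as illustrative:

```shell
# Register the Jenkins CRD from the upstream jenkinsci/kubernetes-operator repo
# (the deploy/crds path is illustrative; check the repo layout for your release)
kubectl create -f deploy/crds/jenkins_crd.yaml

# Confirm the new API endpoint is registered; the CRD is named jenkins.jenkins.io
kubectl get crd jenkins.jenkins.io
```

Once the CRD exists, the cluster accepts `kind: Jenkins` objects, and the operator reconciles each one it sees.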
But this is readily available, and this is the image that we're using. It is built from the openshift/jenkins repo, from here, from the 2 directory. So basically this is the image that we're using here, and this is the environment configuration that we give for Jenkins, which is necessary for it to run on OpenShift. Why do we do this, and how did we manage this before? This CR allows us to give the configuration for Jenkins. Before, we used to give it in the form of a template. If you know what a Helm chart is, an OpenShift template is basically what we had before Helm charts were cool. It is something in which you can define all the resources you want, the DeploymentConfig, Service, Build, BuildConfig, ConfigMaps, Secrets, PVCs, PVs, anything you would want to create, plus the environment variables and parameters that you pass when processing the template, which get substituted in; it's almost like a Helm chart. That is what we used to give before. But now that OpenShift is moving towards a more Kubernetes-centric way of doing things, we are moving from the template to the operator, and in the operator we are basically replicating this as a CR. Now that that is done, let's see what the operator looks like and how it actually spins up Jenkins. Considering you've already had a look at the CR, let's go over here. I'm in the OpenShift console right now; I've already logged in. If you notice over here, there is a watch command running on oc get pods, which is similar to kubectl get pods, and you can see that these two pods are already running. This pod is for the Jenkins operator, which I started earlier.
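A minimal sketch of the kind of OpenShift-specific Jenkins CR being shown, following the jenkins.io/v1alpha2 schema of the upstream operator. The resource values and the OAUTH environment variable are illustrative of the OpenShift image's configuration knobs, not copied from the demo cluster:

```yaml
apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
  name: example
spec:
  master:
    containers:
      - name: jenkins-master
        image: quay.io/openshift/origin-jenkins:latest
        env:
          # OpenShift image settings; illustrative values
          - name: OPENSHIFT_ENABLE_OAUTH
            value: "false"
        resources:
          requests:
            cpu: "1"
            memory: 500Mi
          limits:
            cpu: "2"
            memory: 2Gi
```

Applying this CR is what replaces processing the old OpenShift template: the operator reads the spec and creates the Jenkins pod, config, and supporting resources from it.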
And the jenkins-example pod was also started earlier, because of this presentation. The best thing about this is that it's very easy to deploy an operator on OpenShift. All you need to do is go to the OpenShift console, go to the Operators drop-down in the sidebar, go to OperatorHub, and in OperatorHub just search for Jenkins. Sorry for the dogs. So go to OperatorHub, search for Jenkins, and you can see that this says install in the project jenkins-operator-test; project means namespace in OpenShift. Go here, and you can see that it's already installed; I'm not going to install it again because it takes some time. You can see all the information for the operator given here. Currently, you can see that the operator is at 0.4.1-rc1, but upstream we have actually only released 0.4.0. What we are doing here is we keep re-creating release candidates based on the latest code base until we reach the release. Why do we do this? Because at times there are a few things, like recently we didn't have routes working. To get routes working, we had to do a few PRs, and once those got merged, we thought it made sense to do an rc1. But we'll only do a final release of this when upstream does a release; we are upstream-first at any cost. Once upstream does the 0.4.1 release, we will do a 0.4.1 release, and only then; until then, it will always be an rc. So this is it. I go here, install the operator, everything's installed; you can see that the operator is installed, and you can also see that the Jenkins example is installed. If I go into Installed Operators and go over here, I can actually just create a Jenkins instance by clicking create instance here. But considering it's already running, I'll just go ahead and play around with this.
So for this jenkins-example that was created, there's a Route created for it as well. A Route is basically an endpoint for a Service; it's like an Ingress. If I open this over here, which I've already done, let me just refresh. This is the Jenkins instance which has been created by the operator. Now let me spin up a pipeline which uses the OpenShift DSL. For this, I will be using the pipeline over here in the Node.js example, and this is the Jenkinsfile. This Jenkinsfile uses openshift.withCluster, openshift.withProject, and all the OpenShift-related commands, which come from the DSL, which in turn comes from the OpenShift Client plugin for Jenkins. This can be seen over here; this is basically the plugin that makes this happen. So let me just start the pipeline. When I create this, OpenShift sees that, okay, there is a Jenkinsfile, and it creates a BuildConfig with the JenkinsPipeline strategy. Based on the configuration, once a BuildConfig is created, it creates an instance of that BuildConfig; it creates a Build, basically. So this is the Build that is created, a Jenkins pipeline, and this syncs with the example instance. You can see that this pod is created, and this pod is nothing but the Jenkins agent. This is the agent pod created through the pod template in the Kubernetes plugin; that is the mechanism behind it. Once this is created, let's go over here and see what's up. We can see that something's happening. Magic; most probably not tragic. You can see that these builds are running. And I guess it failed, but basically, this build got started, it got synced, and it was started over here. The Build and BuildConfig were able to line up with the Jenkins instance created by the Jenkins operator, and the OpenShift workflow for Jenkins using the operator is working properly. Now let me go back; we are done with this one.
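The shape of the Jenkinsfile being run looks roughly like this. It uses the real OpenShift Client plugin DSL (openshift.withCluster, openshift.withProject, openshift.selector), but the agent label and stage contents are illustrative, not the exact Node.js sample:

```groovy
// Declarative pipeline using the OpenShift Client plugin DSL.
pipeline {
    agent { label 'nodejs' }   // agent pod provided by a Kubernetes plugin pod template
    stages {
        stage('inspect builds') {
            steps {
                script {
                    openshift.withCluster() {          // connect to the cluster Jenkins runs in
                        openshift.withProject() {      // scope to the current project/namespace
                            // oc-style verbs are exposed through the DSL
                            echo "Builds in project: ${openshift.selector('builds').names()}"
                        }
                    }
                }
            }
        }
    }
}
```

Because the Sync plugin watches BuildConfigs, starting a Build of this pipeline from the OpenShift side and starting the job from the Jenkins side end up in the same place.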
So that was the demo. Now let me talk about the architecture and the roadmap. What happened just now, basically, was: we created a Jenkins CR, which was this one, already created. When the Jenkins CR is created, the operator does a base reconciliation to set up all the necessities for the Jenkins instance, such as the pod, the restrictions on the pod, the container spec, and everything; it's almost like pod-spec-level stuff happening. Then, in the user reconciliation, it installs all the Groovy scripts and CasC scripts, and does the configuration at that level for Jenkins. After that, it reconciles whenever there is any change in the Jenkins CR. Now, this architecture is linear in a way, and is known to cause issues because of the different nuanced things it does. So currently, there are a few things on the roadmap that we are looking at to make this better. The first problem is that, if you notice, this is a pod; we didn't really create a Deployment. There is a pod created, but not a Deployment, for Jenkins. This is the first hurdle that we have to get through. It is important because then things like Istio, which do injection through admission webhooks and mutating webhooks, can be used with Jenkins; recently we had an issue which was around this. Then, after that, the other things we are planning for are around air-gapped environments. The air-gapped environment work includes a Jenkins image controller; I'll explain that in a moment. In an air-gapped environment, basically, the customer or user shouldn't have to have access to the internet, but they should be able to use their Jenkins as is.
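The base-then-user reconciliation flow described above can be sketched as a toy loop. This is not the operator's actual code (which is written in Go in jenkinsci/kubernetes-operator); all names here are illustrative, and the point is only the ordering: the base phase must converge before user-level configuration is applied, and any CR change re-enters the loop.

```python
# Toy sketch of a two-phase reconcile loop, mirroring the base/user split
# described in the talk. All structures and names are illustrative.

def base_reconcile(cr, cluster):
    """Ensure low-level resources (the Jenkins pod spec) match the CR."""
    desired_pod = {"image": cr["image"], "name": f"jenkins-{cr['name']}"}
    if cluster.get("pod") != desired_pod:
        cluster["pod"] = desired_pod   # create or patch the Jenkins pod
        return True                    # something changed; base not yet settled
    return False

def user_reconcile(cr, cluster):
    """Apply user-level config: Groovy scripts, CasC, backup settings."""
    changed = False
    for script in cr.get("groovy_scripts", []):
        if script not in cluster.setdefault("applied_scripts", []):
            cluster["applied_scripts"].append(script)
            changed = True
    return changed

def reconcile(cr, cluster):
    # Base first; user configuration only once the base has converged.
    if base_reconcile(cr, cluster):
        return "requeue"               # re-run until the base is stable
    user_reconcile(cr, cluster)
    return "done"

cluster = {}
cr = {"name": "example",
      "image": "quay.io/openshift/origin-jenkins:latest",
      "groovy_scripts": ["init.groovy"]}
print(reconcile(cr, cluster))  # first pass creates the pod
print(reconcile(cr, cluster))  # second pass applies user config
```

The "requeue" result is the same idea as a Kubernetes controller returning a non-empty reconcile result: the work queue calls it again until the observed state matches the desired state.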
There should be no need to install any plugins, or to connect to the Jenkins plugins repo and try to download them, because at times the repo isn't even online and there are problems. To overcome this, the first solution, which Akram on our team is working on, is the Jenkins image controller, which helps to create immutable Jenkins images with plugins pre-installed, so that the user doesn't have to wait for the plugins to be downloaded every time a new CR is created and Jenkins starts up. That is the issue happening right now. The next one is to have a local update center, which would be nice to have, but we still have to figure out how we would go about it. After this, the other main thing we are focusing on is the current user reconciliation. As I said, the user reconciliation is the stuff that happens after the base reconciliation. Base reconciliation is where the operator basically sets a base for Jenkins to be spun up on. What do I mean by base? I mean all the ConfigMaps that need to be created, the pod spec that needs to be instantiated, and all that stuff. The user reconciliation is where the Groovy scripts, the CasC, and the restore and backup happen. Our main improvement here is basically to modularize the entire system; otherwise it acts a lot like a monolith, so we are working on that. Another thing, as you just saw, is OpenShift support, which we are working on. Currently, the Jenkins image controller, the refactor of the Jenkins CR reconciliation, and the Deployment instead of a pod are actively being worked on. The other stuff we will get to, but this is what is on our roadmap for this year.
I'm hoping that we are able to come up with a good design and a good solution for the end users, who would mostly be enterprise-level customers who don't want to move away from Jenkins and want to keep using it. It is nice to have some kind of overarching operator of this sort to manage things, because in the new Kubernetes and Kubernetes-like infrastructure age, it has become almost a necessity to have a Jenkins operator. The last thing, multibranch pipeline support, is something we will tackle once the basic stuff is done. So that was it for my talk. For any questions or feedback, you can follow these links and come to the Slack, where we can answer any questions you have about the operator. We love the suggestions; keep them coming. Every Thursday we have a community call. If you want to join the community call or get more information on it, you can go over here to the jenkinsci/kubernetes-operator repo and scroll down to the very end, where you can see a section saying that every Thursday we have a community call at 1630 CET on Google Meet. Check it out; there's also a link to join us on Slack. Good to have you guys here, and thank you. Thank you for the presentation. We've got a few questions, and if you want to ask more questions, please use the Q&A; let's start from there. The first question was about the OpenShift version: what is the version of OpenShift you're running for the demo? I'm running 4.4 for the demo. Last Thursday we had the GA for 4.5. The thing is, OperatorHub will install the operator on any OpenShift version that is supported.
So once the operator gets updated in OperatorHub, you just have to click install, and whatever version of OpenShift you're running, it will install there. But the question is more about version-specific OpenShift images for the Jenkins operator. For that, we are actually trying to figure out a process to have version-specific Jenkins operator images. What we are doing right now is just pushing images which we run against the latest version, because currently we are not even in tech preview; we are in developer preview. Once we are in tech preview, we'll start working on that, and that would most probably be by the end of the year, or in three months or so. Thanks for the answer. Let's move on to the next question. This question is rather about Jenkins itself: how do you update Jenkins and its plugins without any downtime? So recently, I'll just share my screen, Akram from our team has been working on the Jenkins image controller, which should help you do that. It's actually the latest pull request on top over here. What this image controller does is create an image. So if you have an image that you want to update to: say you have image one and image two. You are at image one right now with plugin set one, and you want to go to image two, which has a newer Jenkins version and plugin set two, an updated plugin set. All you would have to do is use this image controller in the operator; this is a new feature that will roll out soon. You should be able to just say something like image build or image deploy or something.
And the existing deployment will be replaced by the new deployment of the Jenkins pod without any downtime. To fix this issue, that's why, as I said earlier, we need to move from the pod to the Deployment way of doing things. The first items I mentioned on the roadmap need to be completed to be able to achieve that through the operator. I guess that answers your question. I think so. Still, updating Jenkins without any downtime also presumes somewhat architectural changes to support full high availability. Well, it's possible to some extent; there are restrictions in Jenkins, but full high availability is also a subject for discussion in the Cloud Native SIG, because we would really like to have that, but it's quite a long story to get it implemented. So the approach used in the operator, and in OpenShift, of basically provisioning multiple masters from pods: I would say that it's high availability in a real sense for the cluster, not for a single master instance. Well, I had a question as well, if that's okay. Vibhav, on this slide you mentioned air-gapped environments. I think air-gapped Jenkins is already a challenge; I can't imagine the challenge of doing air-gapped Kubernetes. Are there any insights you could share with the audience about what your experience has been learning how to do air-gapped environment support, or what would you tell people who are developing for Kubernetes to think about, as in: hey, consider this in case someone is trying to be air-gapped with your deployment? So I, for one, haven't especially worked on an explicit air-gapped environment of sorts. But in OpenShift, what we do is get all the resources needed for a cluster and keep them localized to that cluster; we download all the images beforehand.
So in the cluster of the customer or the user, we keep it all installed over there, and the next time there is an update to OpenShift, all of that stuff is updated at once, when they connect for the update. So it's just that one point where the update happens with all the resources. Now, with the Jenkins operator itself, the problem with air-gapping is mostly the plugins, I think. I'm not the best person to talk about this, Akram would know much better, but the main problem is the plugins, because right now, whenever a new Jenkins image spins up, it tries to install all the plugins by downloading them from the internet. In a case where someone cannot download those plugins, they would need the plugins baked into their image beforehand. This is one way of air-gapping it, because then there is no need for Jenkins to install anything else at that point in time. Correct. And the other way to air-gap it, in case there is a need for plugins, is to have a local update center. If there is a local update center, the Jenkins instances can just connect to it and update from there. Yeah, a local update center is what we generally recommend, though the update center has its own limitations. For example, there are tool installer plugins which can install, say, Maven or Gradle from the internet, and we don't generate local update centers for those tools. So it's totally possible, but right now such update centers still reference and download the sources from the internet; if you use plugins like that, you need additional steps to implement it. I think there needs to be more discussion on this for sure, because this is just an idea that we sketched, without really knowing the implementation-level details completely. I think this might be a great conversation to have during one of the community calls.
Thank you. Thanks very much. Thank you. We've got one more question in the chat: are you using the Operator SDK framework for development? Yeah, the Operator SDK is the framework for development; that's what we're using right now. Then I had another one: can you control the resource consumption that Jenkins is allowed to use, so that one of your users does not inadvertently allocate hundreds of agents? Or is that not a common problem you have to confront? How do you deal with resource consumption allocation using the operator? That's a very interesting question. We were thinking of something like profiles, security profiles, performance profiles or something; something we had sketched, I forget what it's called. I'm going to write this down; this is very interesting, and we honestly haven't thought about it. So you're saying, what if a user spins up hundreds of agents; there should be some kind of cap on how much they can consume? That was what I was wondering: if I define a declarative pipeline that tries to do 50 things in parallel and suddenly overtaxes my Kubernetes cluster or my OpenShift cluster, is there a defense for that? Or rather, are the OpenShift safety measures that are already there enough to protect it? So there are measures on the Kubernetes side for the maximum amount of requests a certain user can make in terms of CPU and memory, or rather how much a pod can request. This definitely needs to be looked into further, but Kubernetes has it already. How that would translate to a user is something we need to figure out, because the operator would use the operator's service account. And is there some kind of impersonation that we are talking about that would happen in the middle?
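The Kubernetes-side guardrails mentioned here are expressed as namespace-level quota objects. A sketch of a ResourceQuota that would cap the total footprint of agent pods in the Jenkins namespace, regardless of how many parallel stages a pipeline requests (the numbers are illustrative):

```yaml
# Namespace-level cap: agent pods spawned by Jenkins cannot collectively
# exceed these totals, no matter how many parallel branches a user defines.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: jenkins-agents-quota
spec:
  hard:
    pods: "20"              # at most 20 concurrent pods in the namespace
    requests.cpu: "8"       # total CPU requested across all pods
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```

Because the Kubernetes plugin creates agent pods in a namespace, this caps the whole Jenkins instance; per-user limits inside that budget would still need the impersonation question above answered.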
This is something maybe we need to figure out, because when the user uses a Jenkins instance, the service account would basically be impersonating the user and then trying to execute everything. So I need to look that up; thank you for that question. So I think what you're saying is that Kubernetes already has a very good safeguard against the Jenkins service over-allocating resources, right? But there's not an immediately obvious way of saying, I want to limit a specific user who's using a piece of that service. Did I understand your description correctly? Yeah. Okay, thank you. The thing is, we could have a chat offline, because this is very interesting to me: the user will allocate, but it will be through the Jenkins service account, not the user. So let's take it offline, I think; it's very interesting to talk over. Is that it for the questions? Well, there was one more that just arrived in the Q&A, asking whether anyone is using HA Jenkins. So I assume that means high availability, trying to implement high-availability Jenkins using the operator. Their concern is that when they upgrade Jenkins or a plugin, they have to restart Jenkins. So the problem with the restart of Jenkins is a different thing; I think the HA part is separate, and I'm not sure what HA would mean in this context. Jenkins itself does require a restart in order to upgrade its plugins, right? That's sort of the nature of Jenkins. Oleg, do you have some insights to offer there, and on what the question may in fact be asking? Well, I'm also not sure what it would mean, because the operator and other similar tools allow you to achieve high availability by provisioning multiple masters.
So basically, you have CRDs, and on demand, for example when you modify the instance, you can provision a new Jenkins with the new configuration while the old one is still around. And if you have SSO and proper routing, then from the user's perspective it will be like HA, because your service is not just a single Jenkins server but multiple Jenkins servers connected by whatever additional subsystem. So that is what HA is here: it's Jenkins plus an operator. But yeah, I'm not sure whether anybody really does it at the moment. I actually have nothing on the HA part because I haven't used it, but I think they might be talking about the restart that happens when you install a new plugin in Jenkins. Usually it's about that, or about the instance going down. So, for example, what you were presenting a few years ago at the Jenkins Contributor Summits: multi-tenant Jenkins, where a Jenkins master basically consists of multiple instances which share context, and when one instance goes down, users don't notice it. In this case, I think it's rather related to upgrades, but again, upgrading a Jenkins master in the current situation requires a restart. So you can achieve HA on a system level, but not at the level of a particular master. Now, there was one more question that was asked; I'm not sure if it may have been asked directly to me rather than to all the panelists. It asks about the best practice to transfer a regular Jenkinsfile to a Jenkinsfile used with the operator. And for me, this is also a question: if I'm using a Jenkinsfile outside of Kubernetes, or outside of that environment, are there some things that I should do to make sure that my Jenkinsfile is portable to work inside an OpenShift or Kubernetes environment? So the way Jenkins works with Kubernetes at all is through the Kubernetes plugin.
And that is true even if you're outside Kubernetes, running Jenkins on a server with Kubernetes as a secondary entity somewhere, where the Kubernetes plugin has been configured to connect to the Kubernetes server and run everything on top of it; or if you run Jenkins inside a pod in Kubernetes, where by default it picks up all the configuration needed to do things inside Kubernetes. I'm not sure if there is a best practice; the only difference would be between the Kubernetes plugin configurations. Okay, so the hint for me, then, is that whether I'm running Jenkins inside Kubernetes or outside it, if I've got the Kubernetes plugin installed and I'm using Kubernetes to allocate agents, I should see similar behavior in both cases. Yeah. Thank you. In addition to that, we got one more question about the best practice for transferring a regular Jenkinsfile to a Jenkinsfile that uses the operator; we haven't covered it yet, right? This is what we just talked about, I suppose. Yeah, at least that was what I was trying to address; maybe I just phrased the question badly, sorry. No worries. Okay, it looks like there are no more questions. Again, we will stop the recording, and after that we will grant permissions to everyone who's on the call so we can have more discussion off the record. Thanks again, Vibhav, for the presentation. It's much appreciated, and we are looking forward to seeing how the Jenkins operator evolves and what new features we'll get there soon. Thank you all for this opportunity to give the talk. It was very nice. Thank you. Thank you too. So thanks all.
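The "same Jenkinsfile either way" point can be made concrete with the Kubernetes plugin's pipeline steps, which work identically whether the controller runs inside or outside the cluster. A sketch in scripted-pipeline form; the label and container image are illustrative:

```groovy
// Kubernetes plugin: define an agent pod inline and run a step inside it.
// This works the same whether Jenkins itself runs in a pod or on an
// external server configured to talk to the cluster.
podTemplate(label: 'nodejs-agent', containers: [
    containerTemplate(name: 'nodejs',
                      image: 'node:18',   // illustrative agent image
                      ttyEnabled: true,
                      command: 'cat')
]) {
    node('nodejs-agent') {        // schedules a pod matching the template
        container('nodejs') {     // runs the step in the named container
            sh 'node --version'
        }
    }
}
```

The only part that differs between the two setups is the plugin's cloud configuration (cluster URL, credentials, namespace); the Jenkinsfile itself stays portable.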