OK, it's been recorded now. OK, welcome everyone. My name is Martin d'Anjou, and I'm a Jenkins Google Summer of Code org admin for this year. Right now we are at the coding phase 2 evaluation of the program, and today we have three presentations. Before we start, I would like to remind everyone that they can use the Gitter GSoC channel for any questions they have. The URL should be posted in the invitation and on the Gitter channel — it's under gitter.im/jenkinsci/gsoc-sig, all with no spaces. I'll just share my screen for a second. OK. So yeah, this is our Gitter channel, jenkinsci/gsoc-sig. Feel free to join it. If you ask any questions here, we will make sure to pass them to the speakers or just discuss them after the presentations. OK, thanks Oleg.

So at this point, our students have worked really hard to prepare the code for phase 2, and they have prepared presentations to show their progress on the program. We have three presentations today. The first one is multibranch pipeline support for GitLab. The second one is the Role Strategy plugin performance improvements. And the third one is going to be Remoting over Apache Kafka with Kubernetes features. So I would like to invite the first presenter, Parichay, to present his coding phase 2 presentation. The lead mentors for this plugin are Marky and Justin. OK, Parichay, you have the floor.

Thanks. Welcome to the presentation of multibranch pipeline support for GitLab, phase 2. This is a project which is trying to improve the integration of Jenkins and GitLab. Hi, everyone. I'm Parichay from India. I'm an open source contributor and a DevOps enthusiast. My mentors are Marky Jackson, Justin Haringa, and Joseph Petersen. So in the first phase, I was working on the GitLab Server Configuration plugin, so I'll just quickly recap the features that were covered in that plugin. It's a lightweight plugin, and it supports Jenkins Configuration as Code.
It has a separate GitLab API plugin, requires a newer Jenkins version, and provides some UI improvements. You can also add multiple GitLab server entries with the same server URL, and you can create a GitLab access token inside Jenkins, which is required to authenticate the GitLab API calls made by the plugin.

So in this phase, I developed the GitLab Branch Source plugin. The main goal of this project was to add these two features: multibranch pipeline jobs and folder organization. This completes the usual convention of three plugins for an SCM in Jenkins: the GitLab API plugin, the GitLab plugin, and the GitLab Branch Source plugin. These are the functionalities that each of the plugins provides.

The other features include: you can check out over an SSH or HTTPS remote by giving your SSH private key or your username and password. Although it's not strictly required, in some cases where you have a private project, you can provide these external credentials. We have groups and subgroups support: you can choose projects from your groups and subgroups by giving the path to that subgroup. We have webhook support. This basically allows Jenkins to create a webhook in your GitLab server; Jenkins listens on a specific URL and triggers the corresponding job based on the event sent by the GitLab server. Currently this is a work in progress — the implementation is mostly done, but there is a bug and it fails to trigger the build. Next is pipeline status notification. This basically allows Jenkins to notify the GitLab server about the status of your pipeline job. And we have new SCM trait APIs that have been implemented; I'll talk about them in the next slide. We have also worked on reducing the API calls, so the indexing is faster, and other functionality as well.

OK, so here's a list of SCM trait APIs. You can see, for example, skip notification, which prevents your pipeline status from being reported to the GitLab server.
And similarly, webhook mode, checkout over SSH, and tag discovery. For further extensibility, we can also add additional traits by extending the GitLabSCMSourceContext class.

OK, so it's demo time. I have a GitLab Jenkins instance running here. I will go ahead and create a job. I'll choose Multibranch Pipeline. In the branch sources, I'll select GitLab Project. The server configuration part that I was talking about in phase one was this: I am using Jenkins Configuration as Code, and I'm specifying the GitLab access token here and the necessary fields that are required to configure your GitLab server in Jenkins. So it's configured here, and I'm choosing one of the GitLab servers. You need to provide the owner; based on the owner, the list box will be populated with the projects, and you can choose one of your projects and go ahead and build it. Let's say I choose this one, and I'll just go ahead and save it.

Right now, Jenkins will be indexing the branches. Here, the master branch has been indexed, and test is not, because the merge request has been opened from the test branch, so it just skips it and goes directly to the merge request. All the magic stuff is done by the SCM API plugin, and we have implemented the interfacing in this plugin. OK, so indexing has been done, and I'll go to the status. You can see that on the master branch the pipeline failed, and here, the merge request passed. We'll just go ahead and see the result — we can look at the indexed one. Basically, it builds the pipeline, checks it out, and gives the output. So this passed.

And we can see that this is the project for webhooks integration. Initially, there was no webhook; I'll just refresh the screen. Now we have a webhook here that has been created by the plugin, and Jenkins listens on this particular URL.
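The trait mechanism Parichay describes — extending a context class so each trait can opt behavior in or out — can be sketched in plain Java. This is an illustrative model only, not the actual `jenkins.scm.api` types; the class and field names below are assumptions for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java model of the SCM trait idea (illustrative names, not the real
// SCM API types): each trait decorates a shared "context" that the branch
// source later consults while indexing a GitLab project.
public class TraitModel {
    static class GitLabSCMSourceContext {
        boolean notificationsDisabled;
        boolean wantTags;
        final List<String> eventsToDiscover = new ArrayList<>();
    }

    interface SCMSourceTrait {
        void decorateContext(GitLabSCMSourceContext ctx);
    }

    // Analogue of the "skip notification" trait: pipeline results are not
    // reported back to the GitLab server.
    static class SkipNotificationTrait implements SCMSourceTrait {
        public void decorateContext(GitLabSCMSourceContext ctx) {
            ctx.notificationsDisabled = true;
        }
    }

    // Analogue of the "tag discovery" trait: tags are indexed as build heads.
    static class TagDiscoveryTrait implements SCMSourceTrait {
        public void decorateContext(GitLabSCMSourceContext ctx) {
            ctx.wantTags = true;
            ctx.eventsToDiscover.add("tag");
        }
    }

    public static void main(String[] args) {
        GitLabSCMSourceContext ctx = new GitLabSCMSourceContext();
        for (SCMSourceTrait t : List.of(new SkipNotificationTrait(), new TagDiscoveryTrait())) {
            t.decorateContext(ctx);
        }
        System.out.println(ctx.notificationsDisabled + " " + ctx.wantTags);
    }
}
```

Adding a new trait is then just another `SCMSourceTrait` implementation, which matches the extensibility point mentioned in the talk.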
Based on that — for example, here's my Jenkins console, and I'll just send a test push event. So this is the push event test. So basically, this is it: this is multibranch pipeline job support.

We have another type, which is folder organization. Let's say test group. What you can do here is build multiple projects under a particular path — say your username, or your group, or subgroup. For instance, I'll show one with a subgroup. I'll provide the path here; there's form validation here, and it says it's a valid subgroup. You go ahead and save it, and it will basically index all the projects there. I have done it already here: you get the list of your projects, and you get the results from there.

Yeah, these are some of the features, but not everything could be covered in one demo. You can go ahead and try it from our plugin repository and build it from source — we have documentation about it. We have also made an alpha release, so you can install it from the experimental update center using the new plugin management tool, which is also part of a GSoC project. You can just download the binary from the repository and run this particular command, and it gets installed in your Jenkins instance, and you can test it out. After testing it out, if you want to share some feedback, or if you find an issue, you can drop a mail on the developer mailing list, file an issue in the Jenkins JIRA with the GitLab Branch Source plugin component, or just come and say hello on our Gitter channel.

We have some plans for the next sprint. I'll be working on improving the performance of the plugin, which will revolve around optimizing the data structures. There are also some known bugs, and more will be discovered with testing, so I'll be working on fixing those. And there is also some room for UI improvement.
So I'll be working on that as well, and implementing some new features, like tag build strategies, et cetera. We have also planned a GitLab Blue Ocean plugin, which would allow GitLab pipeline support in Blue Ocean. OK, thank you. You may ask me questions if you have any.

Yeah, thank you for the presentation. Martin is trying to say something, but he is muted. Yeah, I'm unmuted now. So thank you, Parichay, for this presentation. I was going to invite people to ask questions — at this time, it's time to ask questions. Yeah, I have a question. Oh, sorry, Marky, did you want to go first? No, please go ahead.

Yeah, I wanted to ask about Blue Ocean and GitLab. Do you have anything in place for implementing this plugin? And what is the current discussion status? So currently, as far as the discussion that happened, Blue Ocean is being shifted towards a different, maybe more improved product. Presently, we are planning to implement a plugin for that, but there was a discussion that maybe we can directly work on the new interface, which is an alternative to Blue Ocean. That's the furthest the discussion has gone so far. Yeah, thank you. I was just wondering — if you add it to the plan, please make sure that it's not a commitment, because there might be a lot of obstacles there. Yeah, it would be really great if we have it, but it might be challenging. OK, so GitHub and Bitbucket have support for that. I looked into the code base, and I feel it would be possible to implement a GitLab Blue Ocean plugin. The question is just whether it is worth it or not, so we need to discuss that. Thank you.

And another question, about GitHub organizations and multibranch pipelines. If you use GitHub, you get a lot of different options to define how builds work — for example, how to discover branches, how to filter them, how to apply permissions. I was wondering whether you have a similar engine for GitLab.
Yeah, these are the functions that are provided by the SCM trait APIs, so we have support for that. I'll just go ahead and share my screen. Yeah — you're not sharing right now? Yeah, I'm just about to. Now you are sharing? Yeah.

So basically, you see that all the existing SCM API things — for example, permission management for Jenkinsfiles, et cetera — are supported out of the box by the new plugin. If you asked about the branches and merge requests, et cetera, these configurations are available. Can you just repeat what you asked about the permissions?

Yeah, so in GitHub, by default, a GitHub organization doesn't allow you to modify the Jenkinsfile unless you have write permissions to the repository. The use case is pretty simple: if you have a continuous delivery pipeline and somebody submits a pull request, you want to make sure that somebody doesn't release your product by bypassing your pipeline and implementing his own one. By default in the GitHub branch source, it's enabled. I was wondering whether we have similar thinking for GitLab.

Yeah, so I do not think that we have that particular support for trusted contributors, but we have the SCM trait API that is provided by default. Here, we have "discover merge requests from forks", and what you can do is just select "nobody". So this is one way to do it, but if you are talking about choosing some trusted contributors, then no — that implementation, which is done in the GitHub branch source, we don't have in this plugin as of now. Okay, I'll check it, and I will make sure to create the JIRA ticket if it's missing, because it would be really important for a release. Thanks.

Any other questions? Really quick, before we move on, I'd like to just say: Parichay, very, very good job. I'm super proud of you. This is a really good job. Excellent work. Thanks, Marky. Okay.
Actually, thanks to the entire Jenkins community for the motivation and support. Yeah, thank you too. Yeah, thank you.

All right. So our next presentation is by Abhyudaya, and it is the presentation on Remoting over Apache Kafka with — no, sorry, sorry — Role Strategy Plugin Performance Improvements. The lead mentor is Oleg here. So Abhyudaya, you have the floor and you can start presenting.

Hi, thanks. I'll just share my screen. Hi everyone. I'm Abhyudaya, and I've been working to improve the performance of the Role Strategy plugin as a part of my Google Summer of Code program this year. What we'll discuss today is the performance improvements that have been made to the Role Strategy plugin and the new Folder Authorization plugin.

Now, there have been a lot of performance issues reported for the Role Strategy plugin in the Jenkins JIRA, so the goal of this project was to improve its performance. We've done a couple of things to improve the performance. The first one: whenever the Role Strategy plugin got a request to check permissions, it matched all the regular expressions. What we do to increase the speed is cache the collection of roles that match a given regular expression. The cache gets invalidated whenever a new role is added, and this has provided significant improvements in the performance of the plugin. Improving the performance in this way is a trade-off between CPU time and memory, because we're storing all the pre-calculated data in memory at runtime. To measure the performance improvements, we were using JMH benchmarks, and from this change, we've seen improvements of up to 3,300%.

Now, the other thing we've done is to cache the implied permissions. The Jenkins permission model follows a tree-like structure, and a permission can be implied by other permissions. For example, the permission to read a job is implicitly implied by the administrator permission.
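The regex caching idea Abhyudaya describes — compute the set of roles matching an item name once, reuse it on every later permission check, and throw the cache away when roles change — can be sketched in plain Java. This is an illustrative model, not the actual Role Strategy plugin code; the class and method names are assumptions.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Sketch of the caching trade-off described above: CPU time is saved by
// keeping pre-computed match results in memory at runtime.
public class RoleMatchCache {
    private final Map<String, Pattern> roles = new ConcurrentHashMap<>();
    private final Map<String, Set<String>> matchCache = new ConcurrentHashMap<>();

    public void addRole(String name, String regex) {
        roles.put(name, Pattern.compile(regex));
        matchCache.clear(); // invalidate: cached matches may no longer be complete
    }

    /** Roles whose pattern matches the item, computed once per item name. */
    public Set<String> matchingRoles(String itemName) {
        return matchCache.computeIfAbsent(itemName, item ->
            roles.entrySet().stream()
                 .filter(e -> e.getValue().matcher(item).matches())
                 .map(Map.Entry::getKey)
                 .collect(Collectors.toSet()));
    }

    public static void main(String[] args) {
        RoleMatchCache cache = new RoleMatchCache();
        cache.addRole("frontend-dev", "frontend/.*");
        cache.addRole("all-readers", ".*");
        System.out.println(cache.matchingRoles("frontend/app")); // matches both roles
        System.out.println(cache.matchingRoles("backend/api"));  // matches only all-readers
    }
}
```

Repeated checks on the same item name then hit the map instead of re-running every regex, which is where the large speedups come from.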
So every administrator has access to read each and every job in the system. What the Role Strategy plugin did was calculate each of these implied permissions for every permission check. Now, we calculate the permissions when the plugin is loaded — which means when the Java class is loaded — and we store them for quick access whenever we need the calculated values.

A challenge we faced here was handling dangerous permissions. In Jenkins, some permissions are considered dangerous. For example, consider the Run Scripts permission. This permission allowed non-administrator users to run Groovy scripts which had access to Jenkins internals and would allow them to configure anything they like. The Role Strategy plugin supports them as an option for backward compatibility, and this mode, for safety, is disabled by default. Now, this mode can be changed using Groovy scripts at runtime, so we needed to take care of that: we added a hook for whenever the mode changes, and we invalidate the cache entries for these dangerous permissions. The permissions are then recalculated.

Now, to measure the performance, like I said before, we use JMH benchmarks. We use the microbenchmarking framework that was created inside the Jenkins Test Harness during the previous phase of GSoC — you can see the blog post on jenkins.io to learn more about it, and you can implement those benchmarks in your plugins too. These benchmark reports were generated during the pull request builds, and we compared our results to the previous ones using JMH Visualizer. These results were also verified by running a compute-heavy configuration on my machine and profiling with VisualVM. The improvements that we have observed after these changes are up to 10,000% in the overall performance of the plugin. These are the best-case scenarios, but it has been verified that these performance improvements have actually been made.
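The implied-permission cache described above — walk the "implied by" chain once at load time instead of on every check — can also be modeled in plain Java. This is a hedged sketch, not the plugin's actual code; the permission names mirror Jenkins' naming style but the classes here are invented for illustration.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Model of the implied-permission cache: Jenkins permissions form a tree
// (e.g. Overall/Administer implies Overall/Read implies Job/Read), and the
// transitive closure for each permission is precomputed once up front.
public class ImpliedPermissions {
    // child permission -> the permission that implies it (null at the root)
    private final Map<String, String> impliedBy = new HashMap<>();
    // permission -> all permissions that grant it (itself + ancestors)
    private final Map<String, Set<String>> granters = new HashMap<>();

    public void define(String permission, String impliedByPermission) {
        impliedBy.put(permission, impliedByPermission);
    }

    /** Precompute the closure once, e.g. when the plugin class is loaded. */
    public void precompute() {
        for (String p : impliedBy.keySet()) {
            Set<String> chain = new HashSet<>();
            for (String cur = p; cur != null; cur = impliedBy.get(cur)) {
                chain.add(cur); // walk up the implication chain
            }
            granters.put(p, chain);
        }
    }

    /** Constant-time lookup: does holding 'held' grant 'wanted'? */
    public boolean grants(String held, String wanted) {
        Set<String> chain = granters.get(wanted);
        return chain != null && chain.contains(held);
    }

    public static void main(String[] args) {
        ImpliedPermissions perms = new ImpliedPermissions();
        perms.define("Job/Read", "Overall/Read");
        perms.define("Overall/Read", "Overall/Administer");
        perms.define("Overall/Administer", null);
        perms.precompute();
        System.out.println(perms.grants("Overall/Administer", "Job/Read")); // true
    }
}
```

The dangerous-permission hook from the talk would correspond to clearing the affected `granters` entries and calling `precompute()` again when the mode flips at runtime.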
Now, I would like to introduce the new Folder Authorization plugin. This plugin was created during this phase to avoid the performance issues which the Role Strategy plugin had. Since this is a brand new plugin, it saves us from having to keep backward compatibility with the Role Strategy plugin — the Role Strategy plugin is more than 10 years old, so there's a lot there — and, as a new feature, or whatever you would call it, dangerous permissions are not supported anymore. This simplifies the design of the plugin. Just like the Role Strategy plugin, it supports global roles and agent roles. The roles do not work on regular expressions, and we use folder roles for organizing permissions to projects.

Now, let's discuss some features of this plugin. Global roles, just like in the Role Strategy plugin, are applicable everywhere inside Jenkins. The folder roles work on folders from the CloudBees Folders plugin, and they can be assigned to multiple users and multiple folders at the same time. The major feature of folder roles is that the permissions granted through a folder role are inherited by all the folder's children. Agent roles, on the other hand, allow configuring permissions for multiple agents connected to Jenkins, and they work on the full names of the agents. We also have REST APIs for adding and assigning roles, and they have full Jenkins Configuration as Code support. This is what a sample configuration looks like: you can just specify the agent roles, folder roles and global roles here.

Now, I'll move on to a quick demo of the plugin. This is what the UI looks like right now. We have global roles, folder roles and agent roles. Let me just show you what's going on here. We have an admin role which has permissions for everything. Next, we have a read role which applies to all authenticated users. This authenticated user group is provided to us by Jenkins core.
There's another one just like this: the anonymous SID, which is applicable to all users whether they're logged in or not.

Now, I'll just show you how this works — how to add a folder role. The first thing we have is three folders: root one, root two and root three, and we give the read permission to user one using this role. We have another child folder inside it, and we give the user permissions for configuring the job. I'll add another role here; I'll call it root two read, and this would apply to root two, and we'll give the permissions to read the job and to delete it, and we just click Add Role, and the role was added. Now we have the role here, and we can just assign it to, let's say, user two, and as you can see, the role was assigned. Also, note here that user one has the permission to delete an agent that's connected to the master — that's test one, but not the other two.

So now let's log in as user one. The only folder he has permission to is root one, and permissions accumulate as you go down into the folders. Let's say he has the permission to configure the project and the folder here, but he does not have the permission to configure job one. In the same way, let's just go to the home page and see the two nodes which are connected; as you can see, the user has the permission to delete the agent test one, but he does not have the permission to delete test two. Now, let's just go to user two — he has permissions for root two.

Now, let's go back to the admin user, and I'd like to finally show you Configuration as Code working here. This is what a permission looks like; this is what a configuration looks like. These are some screenshots from the plugin. Now let's compare it to the latest release — let's compare the performance of this plugin with that of the Role Strategy plugin with all the performance improvements.
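The folder-role inheritance shown in the demo — a permission granted on a folder applies to everything underneath it — amounts to walking up an item's path until a grant is found. Here is a plain-Java sketch of that idea (illustrative names, not the plugin's actual classes).

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Model of folder-role inheritance: a permission granted to a user on a
// folder applies to every item underneath it, so a check walks up the path.
public class FolderRoles {
    // folder full name -> (user -> permissions granted there)
    private final Map<String, Map<String, Set<String>>> grants = new HashMap<>();

    public void grant(String folder, String user, String permission) {
        grants.computeIfAbsent(folder, f -> new HashMap<>())
              .computeIfAbsent(user, u -> new HashSet<>())
              .add(permission);
    }

    /** True if the user has the permission on the item or any ancestor folder. */
    public boolean hasPermission(String user, String itemFullName, String permission) {
        String path = itemFullName;
        while (true) {
            Set<String> perms = grants.getOrDefault(path, Map.of()).get(user);
            if (perms != null && perms.contains(permission)) return true;
            int slash = path.lastIndexOf('/');
            if (slash < 0) return false;
            path = path.substring(0, slash); // move up to the parent folder
        }
    }

    public static void main(String[] args) {
        FolderRoles roles = new FolderRoles();
        roles.grant("root1", "user1", "Job/Read");           // like the demo's read role
        roles.grant("root1/child", "user1", "Job/Configure");
        System.out.println(roles.hasPermission("user1", "root1/child/job1", "Job/Read"));
        System.out.println(roles.hasPermission("user1", "root1/job2", "Job/Configure"));
    }
}
```

The first check succeeds because `Job/Read` is inherited from `root1`; the second fails because `Job/Configure` was only granted on `root1/child`, matching the behavior shown for user one in the demo.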
So let's first talk about global roles. What we've observed over 500 global roles is that the global roles in the new plugin are up to 934 times faster than the global roles in the Role Strategy plugin. This is for a data set of 500 global roles. For folder roles, the new folder roles are almost 15 times faster than an equivalent regular expression implementation in the Role Strategy plugin. You can see this pull request for the benchmark results.

Now, some challenges that I faced during this phase. The first one was to have efficient permission checks; we had a lot of discussion with my mentors to produce an efficient solution for the permission checks. As a benefit, the global role checks now happen in constant time, that is, O(1). Another thing: prototype.js, bundled in Jenkins core, changes JSON serialization, and JSON.stringify was giving wrong outputs whenever I called it on an array. The next thing was Configuration as Code. Configuration as Code uses the DataBoundConfigurator for data-bound constructors, which supported neither import nor export of sets. So I created two pull requests to the Configuration as Code plugin, and both have been released in Configuration as Code 1.24.

Now, the next steps would be more performance improvements in the Role Strategy plugin, improving the UI of the Folder Authorization plugin, writing user and developer documentation for it, and finally, a 1.0 release of the Folder Authorization plugin. Last but not least, I would like to thank my great mentors, Oleg, Runze, and Supun. And thank you, everyone — thank you to the entire Jenkins community. Please do share your feedback through either the Role Strategy plugin Gitter chat or through the Jenkins developer mailing list. Thank you. These are some useful links. Thank you.

Wow, this is a great presentation.
I would like to open the floor for questions. I think there was a question posted on Gitter. Yeah, it was me. So: is it possible to set permissions for everything under the folder, but not the folder itself? That was the question. No, that's not supported right now. If the permission is for the folder, it is applicable to the folder itself and all its children. Yeah, thank you. Is that related to the plugin itself, or is that a core permission limitation? It's both. For example, to have the read permission on a nested folder, you need the permissions for its parent also. And so we take that a step further here.

So, why I thought about this use case: sometimes it's really useful for bigger organizations, because on the folder level, you can define pipeline libraries and many other folder properties. And sometimes as an admin, you don't want your users to modify these properties. You want to give people a sandbox with everything preconfigured, so they can create jobs and do whatever they want with those jobs in this sandbox, but you still don't want them to modify folders. So yeah, it might be a good use case — but again, it's rather a JIRA ticket or whatever, to think about for the future. I'll take a note of that. Yeah, thank you.

Are there any other questions? Marky, do you have questions? Or other students may have questions. I do not have any questions. Abhyudaya, I have a question. Okay. So the role-based strategy plugin and the folder plugin — both are different strategies for authentication, right? Not authentication, authorization. Yeah, okay, sorry, authorization. So what are the fundamental differences between these two plugins? The main difference is that the Role Strategy plugin works on regular expressions, which, when repeatedly evaluated, carry a big performance penalty, and these permission checks are very frequent.
So in the Folder Authorization plugin, we just straight up do not support regular expressions at all. Just correct me if I'm wrong — is the functionality of both the plugins the same? They both allow users to have different permissions, but the Role Strategy plugin can be a bit more flexible. This one is designed to give better performance at the cost of reduced flexibility. Okay, thanks. It's an awesome project. Thank you. Thank you.

Okay. I think we can move on to the... Are there more questions? Sorry, Oleg. I just want to say a few words, but basically I will repeat what Marky said about Parichay. The project is going really well. Basically, we completed the entire GSoC project except a few bits like releases, but the Role Strategy plugin already got great performance improvements. We contributed a generic performance testing framework which is available for all plugin developers in the Jenkins community, and there are some ongoing adoptions already, which is great. Moreover, there were patches to other projects like Jenkins Configuration as Code. If you take a look at the last release of JCasC, you may discover that most of the patches have actually been done by Abhyudaya, and these patches are useful not only for this plugin but for the entire community, because the DataBoundConfigurator is the core of the Configuration as Code plugin and is used almost everywhere. And I believe that the other projects we are presenting today also use this configurator for setting up the instance. So yeah, thanks a lot, Abhyudaya. I think the project goes well. It was really nice to see these metrics where we have 1,000%, 10,000% improvements — yeah, sometimes in artificial use cases, but even for user-provided cases, we have performance improvements of something like 10 times and more, which is really great. So I think the goal of the project is fully completed, and we go forward by providing new functionality. For me, this is great. Thanks a lot for all your work.
Thank you. Thanks a lot for mentoring me. And I believe there's a question in the Gitter chat. I would love to move some of the stuff — especially the global roles stuff that we've made — over for folder roles. But the thing is, if we move things from Folder Authorization to Role Strategy, that would mean breaking a lot of backward compatibility. So I think they're meant to live separately right now. Basically, we had some discussions about making the Role Strategy plugin pluggable — having additional selection strategies, not only regexes but, for example, a folder selector and something else as an extension point. It might be one of the natural evolutions for the project. We could potentially combine these plugins, but yeah, right now it's not available.

Okay, are we ready to move on to the next presentation? Well, thank you very much. This is great. So I think we're ready to move on to presentation number three. Presentation number three is Remoting over Apache Kafka with Kubernetes features, and our student is Long, and the lead mentor is Andrey Falko. But today on the call we have Vu Tuan, who is helping with mentoring this year. Vu Tuan is a former GSoC student who's now mentoring. So Long, you have the floor. And I would just like to say, Oleg, if you can take over for a couple of minutes, I'll be right back. Okay.

Thank you everyone. Okay, thank you everyone. So today I'd like to present phase two of my project, Remoting over Apache Kafka with Kubernetes features. Before we begin, I'd like to reintroduce myself. I'm Long, from a university in Vietnam, and for this project my mentors are Andrey, Kit, Vu Tuan, and Supun, and Oleg is also a technical advisor. About the project: the current version of Remoting Kafka requires a user to manually configure the entire system, which has many moving parts, such as Zookeeper and Kafka. And it also does not support dynamic agent provisioning, so it's hard to scale.
For this project, we aim to solve two problems. The first one is an out-of-the-box solution to provision an Apache Kafka cluster — this is what I did in phase one. The second is dynamic agent provisioning in Kubernetes — this is what I have been doing in phase two. Okay, let's move on.

Let's take a look at the last phase's work. For phase one, I implemented a Kubernetes factory adapter class. This class can be used to connect to a Kubernetes cluster with different kinds of credentials, like secret text or secret file — maybe five or six types of credentials. The second feature is launching Apache Kafka and Zookeeper in Kubernetes with one button click, and also a Helm chart to bootstrap the whole system. At the beginning of phase two, I also released the 2.0 alpha and published a blog post. This is a screenshot of the launch-Kafka-in-Kubernetes feature: the user can enter the Kubernetes cluster connection information here, then just click the "Start Kafka on Kubernetes" button and wait a few moments, and everything will be set up and done. They don't have to figure out how to configure Kafka.

Okay, let's move on to phase two. What I have done in phase two is: I implemented a Cloud API for automatic agent provisioning in Kubernetes, with Jenkins Configuration as Code support; I integrated the feature with the Helm chart; and I released the beta version, so everyone can check out the new plugin in the experimental update center.

This is a diagram of the cloud implementation. When a user needs to start a job, if no node with a corresponding label is present, Jenkins will ask the cloud to provision a new node. The KafkaKubernetesCloud will provision a new node, which is a KafkaCloudSlave, and the KafkaCloudSlave will launch the KafkaComputerLauncher with an instance of a KafkaCloudComputer.
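The provisioning flow Long describes — no node serves the label, so the cloud is asked to provision an agent, which in turn starts a Kubernetes pod — can be sketched as a plain-Java model. This is illustrative only; the real plugin implements Jenkins' `Cloud` extension point, and the names below are assumptions.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Plain-Java model of dynamic agent provisioning: when a queued job's label
// has no matching node, the cloud provisions one, which both registers an
// agent and stands in for launching a pod via the Kubernetes API.
public class KafkaCloudModel {
    private final Set<String> onlineAgentLabels = new HashSet<>();
    private final List<String> startedPods = new ArrayList<>();
    private int counter = 0;

    public boolean canProvision(String label) {
        return true; // the demo cloud serves every label
    }

    /** Analogue of the cloud's provision step: create an agent and its pod. */
    public String provision(String label) {
        String agentName = "kafka-cloud-" + (++counter); // cloud name + suffix
        onlineAgentLabels.add(label);
        startedPods.add(agentName); // stands in for the Kubernetes API call
        return agentName;
    }

    /** Scheduling a build: provision only when no node serves the label. */
    public String schedule(String label) {
        if (!onlineAgentLabels.contains(label) && canProvision(label)) {
            return "provisioned " + provision(label);
        }
        return "reused existing agent";
    }

    public static void main(String[] args) {
        KafkaCloudModel cloud = new KafkaCloudModel();
        System.out.println(cloud.schedule("cloud")); // first build: provisions a pod
        System.out.println(cloud.schedule("cloud")); // second build: reuses the agent
    }
}
```

The second `schedule` call reusing the agent mirrors the demo, where the second run takes only a few seconds because the agent and its caches already exist; the retention strategy planned for phase three would remove agents from `onlineAgentLabels` again after a timeout.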
The KafkaComputerLauncher is the launcher from version one, but I have extended it so that if the computer it receives is an instance of KafkaCloudComputer, besides launching a slave, we also use the Kubernetes API to launch a pod on the Kubernetes cluster.

Okay, let's move on to the demo. Here I have a script from phase one, the demo script. For this demo, I will use a Helm chart to launch the whole system by running helm install on the chart. Then we wait for Jenkins, and after Jenkins is up, we copy the password to the clipboard — so I don't have to find it in the Kubernetes secret — and then open the link in the browser. This Helm chart is based on the Jenkins Helm chart and the Kafka Helm chart. We can run kubectl to check the status: with just helm install we have Kafka, Zookeeper and Jenkins pods running, and Jenkins may take about three minutes to download the plugins and metadata and initialize. My connection is unstable, so I have a script to automatically ping Google whenever my connection is slow. Okay, we are still waiting for the Jenkins pod; Jenkins is initializing — let's see the log.

My Helm chart is based on the Jenkins chart in the stable repository and the Kafka chart in the incubator repository. I also have some Helm chart values for the demo: the Jenkins version, using the experimental update center to install the beta version of the plugin, and it also installs a job — using JCasC to initialize a job. Okay, Jenkins is up and I will log into Jenkins. In the Helm chart values file — the values file is like a parameter configuration file for the Helm chart — I define a JCasC config. Basically this JCasC config sets up Jenkins with the Remoting Kafka plugin and also configures a job to test the agent. Let's see. By using the Helm chart with JCasC, I have pre-configured everything before launching Jenkins: the Zookeeper URL, the Kafka URL. And this is the cloud, and this is the Kubernetes connection info.
We have the Kubernetes API server URL and namespace. Okay — success — and the Jenkins URL. This is basically the agent information: description, working directory and label. The label I set here is "cloud", and this is the demo job. The job is a simple one: it runs on the "cloud" label. The label "cloud" is served by no nodes and one cloud. This means that when running the job, it will trigger the Cloud API, and it's a simple job that echoes "hello world".

Let's try this job. When we start this job, it will look for a node with the label "cloud". If there is no node present, it will look for a cloud — which is our cloud — and it will ask the cloud to provision one agent. And this is it: it provisions one agent. The agent name is based on the cloud name plus a random string. When the computer launcher's launch function is called, it will also use the Kubernetes API to launch an agent pod inside the Kubernetes cluster. This is the agent pod; it has the same name as the agent on the master side. Let's check the pod description. Yeah, this is the argument — it's the same as here — and it uses the official image, jenkins/remoting-kafka-agent.

Okay, so the pod — the agent — is online, and it will automatically run the job. When we run a job for the first time on a new agent, it takes a while, because the Jenkins master needs to transfer all the job files and classes to the agent. Okay, it runs a shell script; it takes a while to finalize the job, but this is just for the first run. After the first run, it will be fast, because all the job caches and the classes are present on the agent machine. One thing I don't like about the Kafka library is that it spams all the logs here — that's something I would like to improve in the next phase. Okay, so the job succeeded. And we already have a slave here; we can try to run it again. The second run only takes three seconds.
And with the cloud API, we can scale the jobs. Whenever you don't have enough nodes, the cloud API will create an agent and then create a pod in Kubernetes. And then we can scale as much as the Kubernetes cluster can handle, which is the nice thing about this cloud technology. And after an agent is created, it will stay there forever, and you will have to either delete it manually or... Sorry, we have audio issues. I think he lost the connection. Can you hear me now? Yes. Hello? Okay. Yeah, I just lost the connection. Okay, I will say it again. So the agent will stay there forever, and you will have to either delete it manually, or — this is what I will implement in phase three — Jenkins can automatically terminate the agent after running the job. For now we have to do it manually. And when we terminate the agent, the pod will also get terminated from the cluster. The audio issues are still ongoing. Okay, let's see the next slide. Try to continue, please. Okay. So one challenge, one blocker I faced, is that the SLF4J version in the plugin POM file is 1.7.25. So I couldn't update the versions of some dependencies in my plugin, like Kafka 2.3 or the new Kubernetes Java client library, because the new dependencies use SLF4J 1.7.26. And I don't know how to solve this, because it is said that we cannot just update the plugin POM to 1.7.26. Could it be a conflict with the old Jenkins core? Yeah, we can actually solve it; it's not a big deal to solve it in the plugin POM. The problem is for your plugin, because you need a recent Jenkins core version to pick up the fix. We have released a patch in Jenkins core in the latest weekly release. So basically, if you wanted to include this patch for Remoting Kafka, you would have had to bump the dependency. Yeah, okay. Thank you. It's possible, but yeah. So regarding the plugin POM, let's take it offline. There are ways to work around it.
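One such workaround could be a POM fragment like the following sketch; it assumes the Jenkins plugin parent POM exposes the SLF4J version as an `slf4j.version` property that child POMs may override:

```xml
<!-- Hypothetical plugin pom.xml fragment: override the SLF4J version
     property inherited from the plugin parent POM so that newer
     dependencies (Kafka 2.3, the Kubernetes Java client) resolve
     against a consistent SLF4J version. -->
<properties>
  <slf4j.version>1.7.26</slf4j.version>
</properties>
```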
For example, the simplest way is to expose the version of SLF4J as a property, so that you can override this property in your plugin. And that's it, it will work. Yeah, that's the correct solution. Okay, so that's something that we may try to do in phase three. And the plan for the next phase, phase three, is: okay, so now we have the 2.0 beta version and most features are implemented, and we just need some more testing to officially release the 2.0 version. The second thing is the retention strategy. It's something I said earlier, that the agent will stay forever after it is created. With a retention strategy, the user can choose so that the agent terminates immediately after running a job, or waits for a timeout before terminating. So the number of agents can scale up and scale down automatically, and that is something I'd like to do. Also, for now there are many new things and there isn't any documentation, so I want to write more documentation and a quick start guide so that new users can get used to our plugin. And the next thing is integration testing with Kafka and Kubernetes. The plugin now has many moving parts, and it would be nice to have an integration test with all the modules, like Kafka and Kubernetes. And the links, and finally Q&A. Okay. So I think there are some questions. Yes, thank you. I'll let you answer them. Okay. So: is there any type of Kubernetes RBAC associated with Kafka? Is that part of the Helm chart? So yeah, that's right, the Kubernetes RBAC is part of the Helm chart. I will share the screen again. Okay. So the Helm chart for the Remoting Kafka agent is a very simple one. It mostly uses the power of the Jenkins Helm chart and the Kafka Helm chart, and reconfigures some small values. Let's look at the Jenkins Helm chart. So this is the official Jenkins Helm chart, and it already has all the RBAC: a service account for the Jenkins master, a role and a role binding.
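A minimal sketch of such a role, limited to the pod verbs described in the answer; the role name is illustrative:

```yaml
# Hypothetical RBAC Role matching the permissions described:
# the Jenkins master only needs get, list and create on pods
# in order to provision and inspect agent pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-agent-manager
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "create"]
```

A RoleBinding would then attach this Role to the Jenkins master's service account, which is what the chart's bundled RBAC templates do.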
So the RBAC role for the Jenkins Helm chart only involves pods: get, list and create. So that's it. The answer is: RBAC is integrated in the dependency Helm charts. Thank you. Okay. And now the second question, from Markey, is: I noticed your config is defining a nodePort in the node URL. Is this configurable with an ingress controller? Okay. So we can also configure this by overriding the Jenkins Helm chart configuration. So, okay, let's look at the default values again. By default, the Jenkins Helm chart uses a LoadBalancer to expose the service, but I use minikube and I don't have a load balancer, so I have to change the service type to NodePort. Okay, that makes total sense. That answers my question. That's it. And a question from Justin: are you incorporating your changes into the main Jenkins Helm chart, or would it be a separate Helm chart? Okay. So I think this will be a separate Helm chart. This is just a use case of the official chart. I mostly use the Jenkins Helm chart and the Kafka Helm chart with lots of small overrides. Here you can see it's pretty long, but that's because this is for demo purposes; in the actual Helm chart we will remove most of this, keeping just the minimal override settings to install the plugin and configure it so users can try it out. So the answer would be: it will be a separate Helm chart. Do you have any more questions? Actually, I have a question. The code you're showing there in the YAML — it's not related to your plugin, but these YAML files that you are showing, I'm not familiar with what they are. Like, can you explain? Okay. So these are the actual Helm chart values, and a Helm chart value is another word for Helm chart configuration. Basically, in one Helm chart — okay, here is the official Helm chart, and it defines many configurations here. And now I am a user of this Helm chart and I can override any configuration here. And this first part, jenkins, is for the dependency.
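A minimal sketch of such an override, assuming the key layout of the stable/jenkins and incubator/kafka charts of that era; the plugin version and exact keys are illustrative and should be checked against the actual charts:

```yaml
# Hypothetical values.yaml in the demo chart, overriding its dependencies.
jenkins:
  master:
    serviceType: NodePort            # instead of LoadBalancer, for minikube
    installPlugins:
      - remoting-kafka:2.0.0-beta    # version is illustrative
    JCasC:
      enabled: true                  # JCasC scripts configure the cloud and the test job
kafka:
  replicas: 1                        # keep the demo footprint small
```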
And this is the configuration I override. So I override the Jenkins version, I override the plugins to install, and the JCasC. That's it. So this is the Helm chart configuration. The sound is cutting off. Okay, we cannot hear you, Long. A Helm chart value is a Helm chart configuration, and we can override settings in the parent Helm chart in this values file for my Helm chart. So for the Kafka Helm chart, I just override a small setting so that it only runs one instance, sorry. The quality of the sound is... it's cutting off. But I think at the high level I understand your answer. So, yes, you've answered my question. Thank you. Okay, thank you very much. Do we have... are there other questions or comments? Yeah, I had a question about the ChartMuseum metadata for deployments. So basically, once you have the Helm chart in a repository, it's relatively easy to deploy it to local ChartMuseum instances. But yeah, I was wondering whether you plan to have some automation for that. Oh, that's really cool. I don't know about ChartMuseum. So, okay. Yeah, I may write a helper script so users can install the Helm chart into ChartMuseum. For now, I only plan to host the Helm chart in the GitHub repository. But it would be nice if users could copy the Helm chart into their own repository. Well, basically there is a lot of machinery around that. So once you get integrated into the charts master branch, everything becomes relatively simple. So yeah, there might be a few scripts. Okay, probably a subject for discussion. Okay. Thank you. Oh, thank you. Thanks everyone for listening. If you have any questions, please don't hesitate to ask. Very well. Thank you very much, Long, for this. And did you want to say something? Yeah, okay. So I think that overall Long did a good job for this second phase. We managed to deliver our target features on time in this phase. And I think we did two releases and five betas during this second phase.
And Long also fixed some bugs of the existing plugin, so I very much appreciate that. So yeah, we're looking forward to working with Long in the last phase on the improvements, and we hope that we can make the 2.0 release. Yeah, thank you. Thank you so much. Okay, this is great. So I would like to congratulate all the students for the work that you have done so far, and wish you the best for coding phase three. And I want to thank all the mentors for supporting our students. So thank you very much. And I believe this concludes our presentations today. So thank you for watching, and see you online, everybody. Yeah, just to add some bits: tomorrow we have a second part of the presentations. We will have four presentations by five students. So tomorrow it will be not only the Jenkins GSoC; we will have people from Outreachy joining, since we have another project going on right now. And yeah, there will also be one student joining from another GSoC organization to present her work about Jenkins. So we have the same time for the presentations. If you have any questions meanwhile, just comment in the Gitter chat. And we can ask all students to add links to their presentations in the abstract document we have shared in the meeting invites. Oleg, I just wanted to add something, is that okay? Yeah. It's related to my project. So you asked me about the Blue Ocean plugin. Okay, so the present plan for Jenkins is — there is no concrete plan as of now, but there is a plan to modernize Jenkins, so Blue Ocean would be improved. And my plan was to implement the GitLab plugin in Blue Ocean so that I learn about the architecture of Blue Ocean and can contribute to the modernization of the Blue Ocean plugin. So that was my plan right now. Yeah. So I believe this topic will be heavily discussed at DevOps World | Jenkins World. There will be a Contributor Summit on August 12th.
And most likely web UIs and web UI modernization will be on the agenda for these discussions. So let's stay tuned to see what the final outcome of these discussions will be. And yeah, thanks a lot for your interest in this area. Yeah. Thanks. Okay. I think we're good, Oleg. Yeah, I think so too. Thanks again to everybody who was watching the presentations. Yeah, the video will be published immediately once we stop the broadcast. Again, if you have any questions, just join our Gitter chat. Okay, thanks everyone. Thanks. Great job, everybody. Plus one. Okay, I'll stop the broadcast. See you. See you.