Welcome, everyone who is watching these demos. Today is June 25th, and we have the first part of the Google Summer of Code demos by Jenkins students. We have three presentations today: first Parichay will talk about multibranch support for GitLab SCM, then Abhyudaya will talk about Role Strategy performance improvements and a performance test framework for Jenkins, and then Prasthik will talk about ongoing progress on pipeline support in the Promoted Builds plugin. Just to make sure: do you see my screen? Okay, I guess so. This meeting is being broadcast on YouTube; if you're interested, there is a link. If you want to ask any questions, please ask them in the chat. The Jenkins project has a special chat for Google Summer of Code, the GSoC special interest group (SIG) channel, for these demos. If you're watching on YouTube, just join that chat and ask your questions there, and we will make sure to relay them to the presenters. And that's it for the introduction. As I said, today is only the first part of the demos; tomorrow we will have the second part, with another three presentations. And this is just the beginning, the first phase, so stay tuned for more demos and more presentations in the next few months, because there will be a lot of things to show as part of Google Summer of Code. Okay, Parichay, are you ready?

Okay. I'll just turn off my camera to save bandwidth. Can you guys see my screen?

Yes, we can see your screen, and you are marked as presenter, so everything will be recorded correctly in the tool.

All right. Welcome to the presentation of Multibranch Pipeline Support for GitLab, a GSoC 2019 project under the Jenkins organization. A little about me: I'm Parichay, from Odisha, India. I like to contribute to open source projects, and I'm also a big football fan. These are my mentors; they help me with code reviews, give suggestions, and also contribute code.
So, the goals of the project. We are implementing a GitLab API plugin that wraps the GitLab4J Java APIs. Then a GitLab plugin that is lightweight and delegates all its GitLab API calls to the GitLab API plugin. And a GitLab Branch Source plugin for Multibranch Pipeline jobs and Folder Organization. Multibranch Pipeline jobs are for repositories with multiple branches, whether those branches are for creating new features or for merge requests; Folder Organization is a collection of these Multibranch Pipeline jobs for a user or a group. (I'm having some background noise.) And a GitLab Blue Ocean plugin for pipeline jobs based on GitLab: presently Blue Ocean has support for GitHub and Bitbucket, so this will add GitLab support. We are also aiming for a clear and efficient design of the plugins, trying to make a clean and bug-free UI, and writing modular code so the plugins can be extended in the future.

This is the first release of our project: the GitLab API plugin. It basically wraps the GitLab4J APIs. So why do we need an API plugin? There are two reasons. The first is why we need an API plugin in Jenkins at all: to have centralized control of API releases, which reduces the maintenance burden of the child plugins that depend on the APIs. We can cut releases from the one GitLab API plugin, and each child plugin can decide for itself whether it wants to upgrade or not. And the reason for creating this specific GitLab API plugin is that developers want the API usage to be stateless: they just want to make API calls to an endpoint without being concerned with the API implementation. This is a practice used by the GitHub plugins, so we are trying to do the same here.
There are multiple reasons why we chose the GitLab4J APIs; they can be seen on the wiki page, and I'll leave a link below.

Next, the GitLab Branch Source plugin. I'll give a quick description of the features and then go directly to the demo. First, we have implemented GitLab server configuration, which configures a GitLab server in Jenkins. There are some new features, such as creating a GitLab personal access token inside Jenkins itself (you do not need to go to the server to create an access token, you can do it directly in Jenkins) and allowing duplicate server entries by giving each a unique ID, so you can have multiple entries for the same GitLab server. We are presently targeting GitLab server versions 11.0 and above, but since our API plugin supports API calls for servers 9.0 and above, if users request it there is only a minor code change needed to support lower versions as well. We are also adding JCasC (Jenkins Configuration as Code) support, which lets the user configure Jenkins without actually going to the UI, just by using a YAML file; that is quite convenient for users, especially with continuous integration.

Then, how this helps developers. There is an existing GitLab plugin, but what we are trying to do here is write cleaner, more modular code that supports the API plugin features such as the token creator, and so on. This plugin is quite inspired by the other plugins, like the Bitbucket, GitHub, and GitLab plugins. We have support for incrementals tooling, which creates incremental releases that PRs can use to check whether the downstream plugins (the child plugins that depend on this plugin) still work properly while a new release is in progress. And Maven Checkstyle, which enforces good coding style, and Java 8 compatibility as well.
We are trying to write as much Java 8 style code as possible using the Streams API, which was introduced in Java 8 and makes the code more modular and readable. We are also using one of the later versions of Jenkins, 2.150. The main reason is that it introduced PersistentDescriptor, which automatically calls load(), meaning it loads the XML data from disk when the object is instantiated, so the UI reflects the state of the object at runtime. That was added in this version of Jenkins, so we depend on it, and we are currently discussing whether it is possible to upgrade to the latest version of Jenkins in the next release.

Okay, so this is the code organization. We are using the io.jenkins.plugins package, and there is a package for the GitLab server configuration, which contains the code related to server configuration in our plugin. The credentials part implements a personal access token credential specific to GitLab, so the credentials list box gets populated with our GitLab personal access tokens. Then the server configuration: GitLabServer is a describable that represents one GitLab server entry, with all its fields and so on, and GitLabServers extends GlobalConfiguration and also implements the PersistentDescriptor I described, which keeps the object state available at runtime. And there is an extension, the GitLab personal access token creator, which helps us create a token inside Jenkins.

So this is our UI, the basic configuration of a GitLab server. There is a generated unique name by which the GitLab servers are filtered, and you provide the GitLab server URL, which is populated by default with the link to the GitLab Community Edition server. And here is the credentials list box that you can populate by selecting from the drop-down.
And if you want to manage the GitLab webhooks from your Jenkins server, you can check this option. Right now this is a work in progress, because it is correlated with branch support, so we will implement it after implementing the branches. This is the GitLab token creator: you provide your GitLab server URL here, and you have two options. Either you use a persistent username/password credential that is stored in your Jenkins, or you provide your credentials directly in the UI, and then you can create a token. It will give you the ID of that token.

Okay, so I'll go to the demo of how it works. This is my local Jenkins instance. Go to Manage Jenkins and then Configure System. I have a GitLab server configured here; I'll delete it, because for the initial setup you get this empty state, and you have to add your GitLab server here. This is the unique name, the server URL, and the credential. There is a previous token here, but maybe you want something else, so we can create a credential directly from here: we click Advanced and convert a login/password into a token. This is the server that I am providing; I do not have a credential, so I will use my login and password. It communicates with the server to create a token for you. The credential is created with this particular ID, the one with D29C, so I choose D29C. We have the webhook option; that is a work in progress, so I am just ignoring it right now. And you can test the connection here: the credentials are verified for the user. And if you look at this one, it is an invalid token, so it says it is not valid.

That was how you configure it using the UI. As I was saying about Jenkins Configuration as Code, this is the YAML file that you can use; this is a sample of it. It contains two parts. The first is the credentials: you can define the scope, give an ID, and define a token here.
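A sketch of the kind of JCasC YAML being described here: the overall shape (a credentials section plus a plugin section under unclassified) follows standard Configuration as Code conventions, but the exact key names for the GitLab entries below are illustrative, reconstructed from the description rather than copied from the released plugin.

```yaml
credentials:
  system:
    domainCredentials:
      - credentials:
          - gitlabPersonalAccessToken:   # credential type contributed by the plugin (name assumed)
              scope: SYSTEM
              id: "gitlab-token"
              token: "${GITLAB_TOKEN}"   # resolved from an environment variable, as mentioned in the demo

unclassified:
  gitLabServers:                         # plugin section name assumed
    servers:
      - name: "gitlab-1"                 # the generated unique name
        serverUrl: "https://gitlab.com"
        credentialsId: "gitlab-token"    # refers to the credential defined above
        manageHooks: false               # optional, as described
```

Applying a file like this through the Configuration as Code page reproduces what the UI walkthrough above does by hand.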
This is of course an invalid token, but you can either give your token directly here or use an environment variable for it; Jenkins has good documentation about handling secrets in a repository. And the second part deals with our plugin: it creates a server entry, with a credential ID referring to the credential defined above, whether you want to manage hooks, the name (this ID is optional), and the server URL.

So I'll just show how it works. This is my Jenkins YAML file. You go to this path under Configuration as Code, I paste the path, and I apply the configuration. It says the configuration was applied, and I'll just refresh this page. So here: we supplied the server URL, gitlab.dom.org, and this has created an entry for it, with the server name as well, the credential ID selected, and manage hooks.

Okay, so that is all the work I did in this first coding phase. These are the resources: the source code, the wiki, the issues that we've solved, and the open issues as well. And here is my blog link; if you want to know more about the progress and the work I did, you can refer to my blog, where I write weekly posts. This was me at the beginning of the coding phase, but slowly I learned from other plugins' code bases, the Jenkins documentation, and my mentors' code reviews. In this phase we solved a lot of bugs and it was fun. Thank you, Jenkins and GSoC, for giving me this opportunity to work on a real-world project. And thank you all.

So the presentation just stopped, I guess. Martin, or anybody else, would you like to add something?

I would like to add: Parichay, this is really, really good, and I am really proud of you. You did a lot of hard work and it shows. So thank you very, very much.

Thanks, Marky. Thank you.

Are there any questions? It looks like no. In the GSoC chat there is some discussion about which dark theme we use for Jenkins.
But yeah, not that much specific to the project.

Okay. So what was the question, Oleg?

Which dark theme do you use for Jenkins? Because on your slides you had a dark theme for the UI.

I'm not able to hear you, sorry.

Is my audio good enough?

Yeah, I can hear you now.

So I was just asking about the dark theme you were using on your slides somewhere. Is it an existing theme for Jenkins, or how did you get that?

Yeah, it's Google Slides. They have an option for a dark theme.

Oh, so it's just Google Slides. Okay. And for the Jenkins UI, you mean the screenshots?

They directly have an option to invert the colors. So nothing specific to Jenkins.

Okay, thank you. If there are no other questions, we can proceed to the second demo. Abhyudaya, over to you.

Yeah, I'll share my screen. Okay. Hi everyone, I'm Abhyudaya, and today I'll be showing you a presentation on running JMH benchmarks for Jenkins plugins. Let's start with what JMH is. JMH stands for Java Microbenchmark Harness, which is an OpenJDK project. What JMH basically does is run the benchmark code multiple times. First it runs warm-up iterations, to warm up the Java virtual machine and give the JIT compiler a chance to optimize the code. After that, once multiple benchmark iterations have been run, it can calculate various measurements: the average time, the throughput (that is, the number of operations per unit of time), et cetera. And finally, to improve the accuracy of the measurements, JMH runs the benchmarks in multiple forks of the JVM, which reduces the error in the measurements.

Now, a brief overview of how this framework works. The framework is available through the Jenkins Test Harness, and you can easily use your Jenkins instance inside the benchmarks. We have a Maven profile in the plugin POM for running benchmarks. And you can even run your benchmarks directly on ci.jenkins.io; there is a build pipeline step for that.
And you can also use Configuration as Code for setting up the benchmarks. To run a benchmark, you basically run it directly from a JUnit test. All the benchmark classes that are annotated as JMH benchmarks are automatically found, and a temporary Jenkins instance is started for each individual fork of the JMH benchmark.

So let's take a look at what a benchmark looks like. Basically, you have to create a state class. These state classes are automatically instantiated by JMH, and they are the only way to pass information into a benchmark. We have two methods here for setting up and cleaning up your resources: the first one is the setup method, and we have a teardown method here. Then your benchmark methods are annotated with @Benchmark, you take the state as a parameter to the benchmark function, and your benchmark code goes here.

Now, to configure the Jenkins instance that is started for the benchmark, you can either use Java code, the methods that you have in your plugin, or you can use YAML files with Configuration as Code. We do not yet support using local data, that is, the config.xml files from a Jenkins home, but that is a future improvement. Now, to configure a benchmark using Configuration as Code: just like the previous one, but instead of the plain JMH benchmark state we use the Configuration-as-Code JMH benchmark state, and we need to override two methods. The first one gives the path to your YAML file, and the other one gives the enclosing class, the class that contains the benchmark. In this case it's the sample benchmark, and you can continue like you did in the previous benchmark.

For viewing the benchmark reports: the reports are available through a Java object inside the JUnit test that you run the benchmark from, and by analyzing it you can even fail the builds if they fail to reach a given threshold.
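The shape described above (a state object with setup and teardown, a benchmark method, and warm-up iterations followed by measured ones) is exactly what JMH automates with its annotations. As a rough, standard-library-only sketch of that idea, with all names invented for illustration (this is not the JMH or Jenkins Test Harness API):

```java
import java.util.function.Consumer;

// A toy stand-in for a JMH @State class: it holds the resources a benchmark needs.
class BenchState {
    StringBuilder sb;
    void setup()    { sb = new StringBuilder(); } // plays the role of a @Setup method
    void tearDown() { sb = null; }                // plays the role of a @TearDown method
}

public class MiniHarness {
    /** Runs warm-up iterations (discarded), then measured ones, and returns average ns/op. */
    public static double run(Consumer<BenchState> benchmark, int warmups, int measured) {
        BenchState state = new BenchState();
        state.setup();
        // Warm-up phase: results are thrown away while the JIT optimizes the hot path.
        for (int i = 0; i < warmups; i++) benchmark.accept(state);
        long start = System.nanoTime();
        for (int i = 0; i < measured; i++) benchmark.accept(state);
        long elapsed = System.nanoTime() - start;
        state.tearDown();
        return (double) elapsed / measured; // average time per operation
    }

    public static void main(String[] args) {
        double avg = run(s -> s.sb.append('x'), 1000, 10000);
        System.out.println("avg ns/op: " + avg);
    }
}
```

The real framework additionally repeats this whole cycle in several separate JVM forks and applies proper statistics to the samples, which is why you would use JMH rather than anything hand-rolled like this.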
The reports are written to disk as JSON after the benchmark completes, and these reports can include multiple measurements, like I said before: you can measure the average time, the single-shot time (that is, the time without any warm-up), and many more things. For easy visualization of the reports, there's a JMH report plugin for Jenkins, and there's the JMH Visualizer website, to which you can pass the JSON reports. This is a sample report that was generated using JMH Visualizer. It comes from an actual pull request to the Role Strategy plugin, and you can see how big a difference a small change can make here.

Now let's take a look at how it all fits together. It all starts from the Jenkins Test Harness, which contains the benchmarking framework. The Jenkins Test Harness is made available through the plugin POM, version 3.46, the latest release. The Configuration as Code plugin is used on top of that to configure the benchmarks via JCasC. Then we have the pipeline library, which has a run-benchmarks step that runs the benchmarks on ci.jenkins.io as a part of your pull request build. And finally we have the Role Strategy plugin, which uses all of them; it contains all of the benchmarks that were created during this coding period. The benchmarks now run as a part of the pipeline, and in the next phases they will help us improve the actual performance and measure the performance impact of a particular change.

Some bugs and challenges that I faced. The first is that the TestCrumbIssuer from the Jenkins Test Harness refused to be marshalled; that is something related to Configuration as Code and JEP-200. The other is that the Jetty server on which Jenkins runs was leaving stray threads, which causes the JMH benchmark to wait for 30 seconds for each fork and then forcefully kill the JVM.
Another issue I faced was that I originally used the Reflections library for auto-detection of benchmarks, but that caused version conflicts with the Guava version in Jenkins core; that has now been fixed.

So the next steps are: more benchmarks for the Role Strategy plugin, and then I'll start working on improving the performance. We'll use the benchmarks to measure the actual performance improvements, and address feedback from adopters.

Next, I'll move on to running a simple benchmark from the Role Strategy plugin. We have one benchmark here that uses Configuration as Code. This is the benchmark method: it measures the time to create an ACL object, which is something specific to the Role Strategy plugin, and we have a benchmark runner here. Inside your IDE you can just run it like any other JUnit test. Let's wait for it to compile. This is Configuration as Code setting up the instance. Now, as you can see, the first warm-up iteration has completed, and it took 0.84 microseconds per operation. Let's wait for the benchmark to complete and I'll show you the results. As you can see here, JMH is running multiple iterations of the same benchmark; the same process will then be carried out in another fork of the JVM. In the meantime, I'll show you a report that was created previously. This is what a report looks like. This report can be fed directly into JMH Visualizer: we take the report, we just put it here, and we can see how fast it runs.

I would like to ask you to please contribute and help improve this framework. You can contact me through the Gitter chat or through the Jenkins developer mailing list. Last but not least, I would like to thank my mentors, Oleg Nenashev, Runze Xia, and Supun, for their help and support throughout the coding phase. And thanks, everyone.

Thank you for the presentation. Any questions? No questions in the chat either.
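For reference, the JSON reports discussed in this demo follow JMH's standard result format. A trimmed example, with an invented benchmark name and invented values, looks roughly like this:

```json
[
  {
    "benchmark": "io.jenkins.sample.SampleBenchmark.benchmark",
    "mode": "avgt",
    "forks": 2,
    "warmupIterations": 5,
    "measurementIterations": 5,
    "primaryMetric": {
      "score": 0.84,
      "scoreError": 0.05,
      "scoreUnit": "us/op",
      "rawData": [[0.83, 0.86, 0.84, 0.85, 0.82]]
    }
  }
]
```

A file of this shape is what you drop into JMH Visualizer or the JMH report plugin to get the charts shown in the demo.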
But then it's not the first time we have had this presentation: it has been presented at platform SIG meetings and at project meetings before, so maybe many people already asked their questions there. In any case, we will get the video posted.

As I mentioned, I would like to say that Abhyudaya did a great job. For the first coding phase we didn't really expect the framework to be fully productized. We started from an implementation inside the Role Strategy plugin, and that was completed maybe in the first week of coding or so. After that we spent a lot of time productizing it: moving code to the Jenkins Test Harness and to the plugin POM, passing through code reviews by the maintainers, documenting things. I would say that maybe half of this coding phase was actually spent not on the implementation of the framework itself but on productizing it. I guess it was quite an experience, and it's really great to see, because now all these components are already available to all users as production components. You can just take them and use them, and it's really easy for developers; I already integrated a couple of tests in my own plugin, and I'm looking forward to seeing more adoption of this framework in Jenkins. So yeah, thanks a lot, Abhyudaya, it's great progress. And for the Role Strategy plugin itself we've got a couple of performance fixes which should improve the situation a lot in the coming days; hopefully we will deliver this code soon as well.

Thanks a lot for your support too. Yeah, thank you.

So, no new questions? Okay. Prasthik, are you ready?

Yes, Oleg. Okay, you see my screen? Hello everybody, I'm Prasthik Gyamali, and currently I'm doing a project under Jenkins as part of the Google Summer of Code program. My project is the Artifact Promotion plugin for Jenkins Pipeline. I'll quickly give a short introduction about myself. I'm Prasthik.
Currently I'm pursuing my engineering studies at NIT Jaipur, India, and I'm in my second year. This is my first time participating in Google Summer of Code, and my interest and inclination towards software automation brought me to contribute to Jenkins. Apart from that, I'm also interested in data structures, algorithms, and complex systems, and I love participating in hackathons. So that is all about myself.

Talking about mentors: I've had four mentors, including John Cross, Oleg, and Max; two of them are technical advisors. In this presentation I'll give a short overview and introduction of my project, updates on what I've been doing in the past month, and the future implementation plans and approaches for completing this project.

First of all, a quick overview of my project. It all started with the Promoted Builds plugin not supporting the pipeline structure, probably because it was developed before pipeline was introduced in Jenkins, as a result of which the current implementation does not support pipeline. So the reason behind the inception of this project is to make Promoted Builds compatible with pipeline. So what would the new feature do?
The new feature would run an on-demand promotion once the pipeline build has completed. The promotion would work a bit differently from the conventional promotion: a new promotion job would be triggered after all the promotion criteria are met, and this standalone promotion job would contain all the build and job information of the upstream job that actually triggered it, along with all the conditions related to it. After that, we also expect to track the flow of those builds inside Jenkins by using the fingerprinting engine.

However, this was our initial plan, and certain changes were incorporated during the community bonding phase and the first coding phase, so I'll talk about the changes that were actually decided. First of all, a new plugin (an "artifact promotion plugin", as the name suggests) won't be made. Instead, we'll incorporate all the new features and merge them into the existing Promoted Builds plugin, so that we have a single codebase for both freestyle and pipeline projects. The reason is that even if a new plugin were made, a lot of dependencies would still come from the existing Promoted Builds plugin, so we discussed this with our mentors and decided it would be better to incorporate these changes into the same plugin and have both functions inside it. Also, in the existing Promoted Builds plugin many of the modules and extension points are tightly coupled and need to be refactored before using them for pipeline. So our first focus was to refactor the existing extension points instead of detaching them, as the proposal originally suggested: rather than detaching the existing extension points and making duplicates of them, we will refactor the extension points already in place in the Promoted Builds plugin. We will, however, make new analogous classes for some other pieces, like the promotion processes; these are very big modules of Promoted Builds, and we think that instead of refactoring them, making a faithful duplicate of these modules will save a lot of time and help the plugin work smoothly.

So the new workflow, the design for pipeline promotion that was decided after all these changes, is this: currently the promotion input is taken from the web UI, but since we're aiming to use it in pipeline, we'll take the inputs from declarative pipeline scripts, giving commands through the pipeline script. The same extension points that are used in freestyle projects will also be used in pipeline promotion, after proper refactoring to make them pipeline compatible. And, as I've already said, new analogous classes will be introduced. After that, if the promotion criteria are met, we'll trigger a new promotion job, as already discussed, pass the build information to the new promotion job, and use the fingerprinting engine to track it. So this is the final design for the pipeline, after discussing all the changes.

Next, this flowchart gives a short description of how the previous Promoted Builds and the new, anticipated Promoted Builds would look. Previously the input would come from the web UI; now it will come from the DSL, from the declarative pipeline. After that we refactor the promotion modules, then we trigger a new promotion job, and after that the promotion would be deemed as
successful. So this is how promotion would work in the pipeline.

Talking about the progress of the coding: it was more or less focused on the refactoring of the existing extension points. There were also some attempts to introduce the new pipeline promotion logic by making new promotion classes, and some DSL script steps for self-promotion were introduced as well, but these are still in the development phase, since these classes are highly evolving: as we go on adding new classes, they will keep changing. Some of the extension points, like the promotion badge and the promotion condition, were also refactored, but they are still undergoing unit testing before getting merged into Promoted Builds. The tricky thing about these extension points is that they depend heavily on other modules; as I already mentioned, they are tightly coupled. So to refactor something like the promotion condition descriptor, we had to make changes to many of the dependent classes, like the upstream promotion condition and the self and manual promotion conditions. New classes and interfaces, like a promotion run and so on, were also added, but they are still in the unit testing phase, since this coding phase was more or less about the refactoring of the promotion logic. I don't have a particular demo right now, so without further ado I'll go on to the plans for the next coding phase: complete all the refactoring that was not finished in the first coding phase, finish the unit tests of all those modules, and introduce the new DSL steps. These are all the relevant links; if you wish, I can put them in the Gitter chat. So this is all from me today, thank you.

Thank you for the presentation. Any questions, from anybody? It looks like we didn't get that many questions during these morning sessions; let's see how we can improve that next time. Klaus, would you like to add something? Looks like no.

So, if you have any questions about the presented demos, please go to the Jenkins GSoC web page. Just a second, I will show it. If you see my screen: you can go to jenkins.io, Projects, GSoC. On this page you can find all the information about the ongoing work and the ongoing projects for Jenkins. We started with seven projects; one project has been cancelled by now, but there are still a lot of projects and a lot of interesting information to find. For example, if you want to know more about the Role Strategy performance improvements and the performance test framework, you can go to its wiki page and find all the information. And if you want to ask any questions, each project page has links to Gitter channels, so you can join the project chats and ask questions there. In many cases these reuse existing plugin chats; for example, this one is for the Role Strategy plugin, a Gitter chat for Role Strategy, and it's pretty much the same for other projects. For example, the multibranch pipeline support for GitLab uses the GitLab branch source plugin channel as its

link. For Promoted Builds there is also a project chat being used, and the website has already been updated, so you can just go here and find the demos. What happens next for the GSoC projects? As I said, tomorrow we will have another set of demos, three of them: one for the plugin installation manager CLI tool by Natasha, another about Remoting over Apache Kafka with Kubernetes support by Long, and there will also be a demo of the Working Hours plugin UI improvements, a major project by Jack Shen to redo the UIs for that plugin. So if you're interested in what's going on in these projects, just join those demos. And in one month we will have another wide session with more presentations, where we will be able to present content which is closer to release; hopefully all the projects will have alpha or beta releases by then. Okay, and that's it from me. Any last-minute questions?

I don't have any last-minute questions, but I would like to congratulate everybody: it's very good work, well done indeed.

GSoC always helps us to target some critical areas of Jenkins which didn't use to get much attention before. For example, support for pipeline in Promoted Builds is one of the most voted Jenkins JIRA issues, with more than 100 votes; it's pretty much the same for multibranch support for GitLab, and the Role Strategy plugin gets a lot of complaints about performance issues, so I believe it's also a great project, especially since we also deliver a test framework for all plugin developers. So thanks, everyone, for your work, and see you next month.

Thank you, everyone. I'll stop the broadcast; if somebody wants to stay on the call after that to discuss some topics, we can do that. Okay.