Hi everybody, welcome to the second Jenkins Google Summer of Code presentation session. Today we are running this session as a Jenkins Online Meetup, so we'll have some introduction and then the students will do their presentations. Let me share my screen — okay, do you see my screen? Yes. Okay, so we have all the GSoC students on the call, plus some mentors, plus some participants. If you want to join the call and ask questions, we have the Jenkins Gitter chat — there is a GSoC SIG channel there, and you can find the participation link there; it's the main channel for asking questions. If you want to ask questions without using Gitter, we also have the Jenkins chat on IRC, and if you ask questions there, I will relay them to the participants.

So, let's start with the presentations. As I said, this is the second session; the previous one was four weeks ago, and all the students have already presented their projects, but today we do these presentations from scratch. So if you did not attend the previous meetup, please stay — you will see all the introductions and so on. And as I said, there are two chat channels, so please feel free to join either of them.

Okay, just to provide some introduction: what is Google Summer of Code? Google Summer of Code is the biggest open source initiative and event for students. Thousands of students participating in Google Summer of Code join different open source organizations and work with mentors from these organizations in order to develop open source code. This year Jenkins participates in Google Summer of Code again — this is our second time. We participated two years ago; we had five projects and some of them were successful, so it was a nice experience for the organization, and we want to participate again and again. This year we have three students: Shenyu Zheng is working on the Code Coverage API plugin, Pham Vu Tuan is working on Remoting over Kafka, and Abhishek Gautam is working on the Simple Pull Request Job plugin. If you want to find more information about these projects, you can go to the project page at jenkins.io/projects/gsoc, and you can find all the information there. You can open any project — for example, this Remoting over Message Bus/Queue project — and find all the information about it there: links to the presentations and to the previous blog posts. If you want to find out more, you can also join the project-specific Gitter channels and the project GitHub repositories.

So where are we now? GSoC is a multi-month project: it started in February, and now it's July, but we still have four weeks of coding ahead, so the projects will get one more iteration. Even now, though, all the projects have reached a demoable stage — you can find them in the Jenkins experimental update center — so we decided it's a good opportunity to showcase these projects and hopefully get some feedback from the community.

So let's start with the presentations, beginning with Shenyu, and then we will proceed with the others. Shenyu, are you ready? Yeah, I'm ready. Okay, if you share your screen, I'll make you a presenter. Could you maximize the screen? Just a second. So, I've promoted you to presenter, so now all participants should be able to see it. Yeah, okay. So, I have participated in this year's GSoC with the Jenkins project and worked on the Code Coverage API plugin.
My mentors are Steven, Supun, Jeff, and Oleg; they gave me a lot of help in this project, and I thank them. About me: my name is Shenyu Zheng, and I am a third-year student in computer science and technology at Henan University in China. I like programming and reading, especially books about history and philosophy.

In phase one I had planned these things: a prototype based on the Cobertura plugin's data model, integration of other Java code coverage tools like JaCoCo, auto-detect support, health report support, pipeline support, and threshold support. I also modernized the report and added some unit tests. In phase two I had planned to do the following: APIs which can be used by others — like integrating the Cobertura plugin with the Code Coverage API plugin, and providing a Java API and a REST API for getting coverage information; implementing an abstraction layer for other report formats like JSON; supporting converters for non-Java languages, since I only supported Java coverage tools in phase one; supporting comparing reports within a build in the coverage result page; and refactoring the configuration page to make it more user-friendly.

So first, I have done the integration with other plugins. For now, I have integrated the Cobertura plugin with the Code Coverage API plugin, and I added a new API option in the Cobertura plugin, as in the screenshot. Now we can enable the Code Coverage API plugin from the Cobertura plugin when we click this checkbox, and it will generate the report in the new format, with a fancier UI chart. I also provide some more REST APIs: we can get the coverage data, get the trend data, and get the coverage data and the trend data of the last build. I will demo it. As you can see, we have a coverage result page in Jenkins; we can reach the result page like this, and we can get the coverage information in JSON format via this link here, and this one returns more information. You can also get the trend data, in XML format as well — as you can see, we can get the trend data, and also the last trend and the last coverage data. Besides the REST API, I have also supported more coverage tools, like the llvm-cov JSON-format report — you can see more information in this pull request. It will show the llvm-cov report with the fancier UI, like that, with the functions and the rest of the coverage data.

I also have other improvements, like adding source code navigation in the coverage result page: now we can view the source code together with the coverage information in the coverage result page. And I refactored the configuration page to make it more user-friendly, so users can now use the plugin more easily. The status of the phase two plan is that I have done all of those things except supporting comparing reports within a build. For phase three, I plan to support more coverage tools, like gcov and coverage.py; I will also polish the code, fix bugs, and write developer and user documentation, and finally I will release the plugin. These are the links for further results; you can get more information from them. Yeah, thanks. Thank you for the presentation.
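As a rough illustration of the kind of REST access Shenyu demonstrated, the snippet below fetches coverage data for a build as JSON. The exact URL path and the authentication details are assumptions based on the demo, not a documented contract — adjust them to whatever your installation actually exposes.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class CoverageApiDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint, assumed from the demo: the coverage result
        // of the last build, exported through the Jenkins remote access API.
        String url = "https://jenkins.example.com/job/my-job/lastBuild/coverage/result/api/json";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                // Jenkins typically expects user + API token for authenticated access.
                .header("Authorization", "Basic " + Base64.getEncoder()
                        .encodeToString("user:apitoken".getBytes()))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // The body is the coverage tree serialized as JSON (line, branch,
        // method ratios and so on); print it raw for this sketch.
        System.out.println(response.body());
    }
}
```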
So I've got a question: could you please show code coverage browsing, if possible — I mean, in your demo? Oh yeah. Like that? So what's the question? Is it possible to see code coverage statistics — I mean, to see the code and the coverage stats for it? Oh, you mean show the source code? Yes. Yeah, you can see the source code with the coverage information at this level. Oh, that's cool. It was one of the questions from the previous demo, and Shenyu implemented it maybe two days after the meetup. Yeah, thanks for doing that.

So let's proceed to other questions. Jesse was asking about an unusual format of the API for trends. Jesse, maybe you would like to ask the question directly — you are on the call today. Yeah, great question. Okay, about the trend APIs: right, so normally in Jenkins you have different model objects, they have a getApi method, and the Api object serves all of the export API. I just saw a strange URL pattern here that I've never seen in Jenkins, so I was curious about that. Oh yeah — in Jenkins we can get an object by URL, but the api element here is actually not the normal Api: it is an object, and I will show you the code, wait a minute. As you can see, getApi here does not actually return an Api object; it returns a REST wrapper around the coverage result, and inside that class we have the real Api. So we reach the wrapper by visiting /api, and get the real API information via a further method. I suggest making this consistent with everything else in Jenkins: refactor it so that the coverage result has a getApi method, and it also has a getTrend method returning a coverage trend object, and the coverage trend object has its own getApi method. That would be consistent with the usual usage in Jenkins, so that each model object serves /api/xml and so on. Oh, I will look at it. It will look better in the UI as well — there are reasons to do it, like the automatic API link you get at the bottom of pages and that sort of thing. Yeah, you're right. I will create a ticket for it. Thank you.
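For readers unfamiliar with the convention Jesse is referring to, here is a minimal sketch of the standard Jenkins pattern: a model object exposes getApi() returning hudson.model.Api, and the @Exported getters on the @ExportedBean are what /api/json and /api/xml serialize. The class and field names below are hypothetical.

```java
import hudson.model.Api;
import org.kohsuke.stapler.export.Exported;
import org.kohsuke.stapler.export.ExportedBean;

// Hypothetical model object following the usual Jenkins remote-API convention.
@ExportedBean
public class CoverageResultExample {

    private final float lineCoverage;

    public CoverageResultExample(float lineCoverage) {
        this.lineCoverage = lineCoverage;
    }

    // Stapler binds this under the object's own URL, so the model is reachable
    // at .../coverage/api/json, .../coverage/api/xml, etc. -- no custom URL
    // pattern needed, and the standard "REST API" footer link appears for free.
    public Api getApi() {
        return new Api(this);
    }

    // Every @Exported getter becomes a field in the api/json output.
    @Exported
    public float getLineCoverage() {
        return lineCoverage;
    }
}
```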
Okay, Jesse, you had another question, right? About levels of information. Right — it's been a long time since I looked at it, but I seem to remember that maybe JaCoCo could track coverage of an if-then statement on a single line of source code, and Cobertura could not do that — or maybe it was Emma, or Clover, or something else that didn't track it. So I was wondering whether the API has a way of handling tools that have different levels of detail: some tools may keep coverage per line number, some tools may keep coverage at a finer level of detail — anything like that? Or do you mean adding a check, like whether the coverage of a class is lower than some number? No, no — it's more that, if I remember, some tools are able to record finer levels of detail; here, I'm going to type an example.

Yeah — so maybe you could show how the code is organized in terms of multi-language support, because it seems to be close to that. Oh yeah, so for multi-language support? Yes. So for now, I have multi-language support: as you can see, it handles both LLVM and Java, so we have LLVM functions, Java classes, and Java methods in the same aggregated report. It still has a few bugs, which I will fix later. Yeah, that wasn't actually my question — I'm trying to get it to work in Gitter, and I'm not having any success with Gitter today for some reason. So I guess your question is about, say, one tool reporting things like branches while another one doesn't? If you look at the Gitter chat — sorry, I finally got it to display properly; it took me some time. Okay, so: some tools will be able to say that doSomethingElse has been covered by your tests but doSomething has not — for example, because a rare condition was never true during your test run. Yeah, okay, I will look at it after the meeting. Maybe it's a feature of JaCoCo or something, I don't remember. Yeah, I also don't remember.

Actually, maybe it makes sense to talk about how plugin developers benefit, and which features they get if they integrate with the plugin. Since Jesse references these conditions, that seems to be one of the topics for generalization, like we did for thresholds when we added multiple options to the Cobertura plugin. So maybe it makes sense to talk a bit about such features. Oh yeah — so if we integrate a multiple — wait a minute. You mean that if we integrate a multi-language tool, we can specify thresholds separately, like that? Yes, something like that. Actually, what I wanted to say is that since the intention of the Code Coverage API is to offer reusable features and reusable functionality, every plugin that integrates with it already gets some added value. The idea is that this Code Coverage API plugin will be not only an API, but also a provider of added functionality. For example, once you integrate the Cobertura plugin with the Code Coverage API, you get the trend API and you get these thresholds — I mean the global ones — and you could also add things like parallel publishing for Pipeline once it's ready. So what I wanted to say, from my point of view, is that this plugin allows offering such features and making them reusable. And, for example, Shenyu has implemented new LLVM support. It's not a separate plugin so far, but previously we didn't have LLVM support in Jenkins at all: any user who was using LLVM code coverage had to convert the data to Cobertura or some other format with a separate tool. Now it's possible to just write an implementation of a few extension points and get it working. Oh yeah — for example, we have LLVM support, so we could separate it from our plugin and make it a new plugin. A user can just add a dependency on our plugin and implement our adapter extension point to use the new features. Oh yeah — maybe I'm not understanding. Yeah, sorry if I confused you a bit.

So, thanks for the presentation. I see no other questions so far, so maybe Jeff, Steven, and Supun would like to say something about the project. Yes — I guess I could go first. This is something that the Jenkins community, and particularly people who work with code coverage plugins in general, have been looking forward to: finding a common way of generating reports. As a user of Jenkins, we spend a lot of time looking at code coverage, and we have lots of different types of things that we build — JavaScript, native, mobile, et cetera — with different report types. So we're pretty excited to use the results of this. Yes, thank you. Yeah, I agree — it's coming along nicely, and I'm excited to see the end results as well. It's not the end results yet; we still have one month ahead. We're almost there. Yeah, right — so if anybody wants to provide feedback, it's a good time. I think he's doing a great job, so I'll be waiting for the end results. Me too. So yeah, thanks a lot for this project.
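To make the "few extension points" idea from the discussion above concrete, here is a hedged sketch of what a coverage tool integration might look like. The base class, its method, and the CoverageNode type are stand-ins modeled on the adapter concept described in the talk; the real extension point in the Code Coverage API plugin may well differ.

```java
import hudson.Extension;
import java.io.File;

// Hypothetical base class standing in for the Code Coverage API plugin's
// report-adapter extension point: it turns a tool-specific report file
// into the plugin's common coverage model.
public abstract class CoverageReportAdapterSketch {
    /** Parse a tool-specific report into the common coverage tree. */
    public abstract CoverageNode parse(File report) throws Exception;
}

// A new tool integration is then essentially one class: parse the tool's
// format and map it onto the shared model; trends, thresholds, and the
// common UI come for free from the API plugin.
@Extension
class LlvmCovAdapterSketch extends CoverageReportAdapterSketch {
    @Override
    public CoverageNode parse(File report) throws Exception {
        // llvm-cov export emits JSON; a real implementation would walk it
        // and emit file/function/line counters here.
        CoverageNode root = new CoverageNode("llvm-cov");
        // ... JSON parsing elided for this sketch ...
        return root;
    }
}

// Minimal stand-in for the plugin's common coverage model.
class CoverageNode {
    final String name;
    CoverageNode(String name) { this.name = name; }
}
```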
Maybe one last question from Jesse before we move on: what is the storage format when using the new API — still the binary formats, or a new format? We use a simple XML file format — a simplified Cobertura-style report — and remove some things. So you're taking, for example, the JaCoCo report — I think JaCoCo has some binary format that it defines — parsing that, throwing away that file, and then saving an XML file using the data model you defined? Right, yeah, like that: parsing the original-format report into the simplified report file. So is that parsing done on the agent or on the master? Oh, I think it depends on where the report is. Right — what I'm asking is: if you ran a build on an agent and, say, the JaCoCo report was output on the agent, does the agent parse it and send your generic object structure back to the master, or does the agent send the JaCoCo binary file to the master, and then the master parses it? Oh yeah, I understand — it will send the file. Okay, so it's master-side parsing. Okay. It would be possible to implement agent-side parsing later; both approaches have their own advantages and disadvantages, and actually it's not that difficult to add another implementation if we need it. Thank you.
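For reference, the usual way to move such parsing to the agent side in Jenkins is FilePath.act, which runs a callable on whichever node owns the file and sends only the parsed result back over the remoting channel. A minimal sketch, with a hypothetical result type and the actual parsing elided:

```java
import hudson.FilePath;
import hudson.remoting.VirtualChannel;
import jenkins.MasterToSlaveFileCallable;
import java.io.File;
import java.io.Serializable;

public class AgentSideParsing {

    /** Hypothetical parsed-coverage result; must be Serializable to cross the channel. */
    public static class ParsedCoverage implements Serializable {
        private static final long serialVersionUID = 1L;
        public int linesCovered;
    }

    /**
     * Runs on whichever node holds the report: if {@code report} points at an
     * agent workspace, the parsing happens on the agent, and only the small
     * ParsedCoverage object is shipped back to the master.
     */
    public static ParsedCoverage parseWhereItLives(FilePath report) throws Exception {
        return report.act(new MasterToSlaveFileCallable<ParsedCoverage>() {
            @Override
            public ParsedCoverage invoke(File f, VirtualChannel channel) {
                ParsedCoverage result = new ParsedCoverage();
                // ... real parsing of the tool-specific report would go here ...
                result.linesCovered = 0;
                return result;
            }
        });
    }
}
```

The trade-off mentioned in the discussion is visible here: agent-side parsing saves transferring large raw reports, while master-side parsing keeps the parser code and its dependencies off the agents.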
Okay, so thanks for the presentation. Let's proceed with the next one: Pham Vu Tuan will present the Remoting over Kafka project. Vu Tuan, are you ready? Yes, I'm ready. Let me share my screen. Okay, can you see my screen? Yes. Okay.

So hello, everyone. This is the second phase evaluation for Google Summer of Code. My name is Vu Tuan, and my project is Jenkins Remoting over Message Bus/Queue. First, a quick introduction: I'm a computer science student from Singapore, and working together with me on this project I have Oleg and Supun as GSoC mentors. I also have support from Jeff and David, who are developers from the Remoting project.

So what is this project about? In the current version of Jenkins remoting, we use TCP for direct master-agent communication, and there are problems with these JNLP protocols. First, if the connection or the agent fails, the build fails and we have to start again. There are also issues with traffic prioritization and multi-agent communication, and this impacts stability and scalability. So the goal of this project is to make use of message bus/queue technology — we decided to use Kafka — and to develop a plugin that adds a fault-tolerant communication layer to Jenkins.

This is an overview of the design we had in the first phase. The project consists of multiple components. On the master side we have the plugin: a Kafka global configuration, where we can specify the host and port of the Kafka broker, and a Kafka computer launcher, which uses a custom command transport that creates producer and consumer instances to do the communication. On the agent side we have a custom engine, which also uses the new command transport, and we bundle the remoting engine on the agent side. So now the communication is not direct TCP between master and agent; instead, the communication is done over Kafka. To summarize the first phase: we set up the local components, then did a proof of concept for the command transport implementation — in order to do this we had to patch remoting and the core to make things easy and extensible — and we decided to use Kafka for the final implementation of this project. You can find more information about the first phase in my blog post.

For the second phase we wanted to improve the security and reliability of the current plugin. We planned to support security for the master-agent connection, with authentication and authorization, as well as a secret exchange similar to what we have in JNLP. We also wanted to improve the existing producer and consumer model to ensure reliability — there were bugs in phase one, and we continued fixing them in the second phase — and along the way we planned to make a release in order to receive feedback from the community.

This is our updated design for the second phase. We support Kafka with security enabled; we can secure the connection by enabling SSL. On the master side, where we configure Kafka, we integrate with the Credentials plugin to let users enter the credentials — the username and password — to connect to Kafka. There is another component — my mistake, this should be inside — the Kafka secret manager, which is in charge of exchanging the secret with the agent. To support that, on the agent side we have a client listener that runs in a separate thread to listen for the initial secret message from the master. So all the communication is now done through a secure channel over Kafka.

There is also an improvement to the producer and consumer model implementation. Previously we used dynamic partition assignment, but for this phase we decided to move to manual partition assignment for the Kafka producer and consumer. The benefit of this implementation is that we reduce the number of topics needed for each connection, as well as the number of consumer groups that need to be used.
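As an illustration of the difference, the sketch below shows manual partition assignment with the standard Kafka client: assign() pins the consumer to explicit partitions of a shared topic, instead of joining a consumer group via subscribe() and letting the broker rebalance partitions dynamically. The topic name, partition number, and broker address are made up for the example.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignmentSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            // Manual assignment: each agent connection gets its own partition of
            // a shared topic, so no per-connection topics and no consumer groups.
            TopicPartition agentPartition = new TopicPartition("jenkins-commands", 0);
            consumer.assign(Collections.singletonList(agentPartition));

            while (true) {
                ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, byte[]> record : records) {
                    // A remoting transport would deserialize and execute the command here.
                    System.out.println("received " + record.value().length + " bytes");
                }
            }
        }
    }
}
```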
For the next part of the presentation, I will show a demo of the plugin. Continuing from last time: we can configure Kafka in the global configuration once we install the plugin. Here we have an option to enable or disable SSL, with credentials. First I will demo with the SSL option enabled. We can create a new agent here and choose the Kafka option. If we click "enable SSL", we need to provide the username and the security options. If we go here, we can see the script to run the agent with Java. We have the secret parameter, which is computed using an HMAC algorithm, and if we copy this script to run the JAR, we need to enter the password to connect to Kafka and the master. Upon connection, the agent is connected to the master: we can see the command transport — all the commands running in the terminal — and the agent has connected successfully. So this is the new feature with security enabled. We can also connect to a Kafka cluster without security: I will delete this agent and change the configuration to not use SSL. If we're not using SSL, we have a flag here saying this is a non-secure cluster; we simply copy and paste, and we have the master-agent connection over Kafka.

Okay, so for the next part of the demo I will try to run a sample pipeline on this agent. Basically, I have a sample pipeline here which builds this sample project, so I can run it and we can see how it goes. If I start it, you can see in the log here that commands are sent over the command transport, and the console output is running on the agent side. I have also integrated a tool which administrators can use to monitor Kafka, where we can see graphs and numbers like that — I think this might be useful in the future; you can find this tool in the docker-compose setup I put together for this project. So the pipeline has finished successfully, and we can see all the components. This is the end of my demo, and I will continue with the presentation.

Current status: we finished the security support; we also finished the improvement of the producer and consumer model; we made two alpha releases in the experimental update center; and there is a minor feature, the support for Kafka Manager in the docker-compose setup. Bug fixing is a continuous process, and we will continue it in the third phase. For the next phase, our main objective is to release the 1.0 version, because it's the last phase of Summer of Code. To do that we want to do a lot of packaging work and test automation for the plugin, and then, if time permits, we can investigate chunking capabilities for the Kafka channel, as well as consumer polling options. These are the useful links for my project: you can see the Gitter channel and the GitHub repository, as well as my introduction blog post and the alpha release announcement. So this is the end of my presentation, and time for Q&A. Thanks, all, for listening.

Yeah, so Jesse had some questions about the connection logic, so maybe we could start from them. Jesse? Yeah, I just wanted to know a little bit more about the management of agents. At least with the currently released launchers, there are sort of two different styles. One style is the so-called JNLP launchers — usually not actually using JNLP — where the agent is responsible for connecting to the master: it expects that the master already has a node defined with that name, the agent passes some secret, and if the agent JVM dies for whatever reason, then the agent computer has to restart it by running some kind of service or something like that. The other style, which the other launchers use, is that the master runs some command which causes the agent JVM to be started remotely, using SSH or something like that. And you would choose one or the other depending on how you want to manage nodes. That's especially important if you have elastic cloud systems: if you're trying to run builds on EC2, for example, then you really want to know the easiest way to get everything up and running from scratch on a new EC2 machine. So I'm just wondering how that's managed when you're using the Kafka computer launcher. Oh okay, I see the question. The last part is about reconnection — actually, we haven't tested that yet, like the agent dying in the middle of a build. But — how do I say it — the agent is running, and it has a separate thread for receiving the initial secret exchange.
So it means that if the master tries to connect again, the agent will respond. But yeah, I haven't tested failure scenarios; maybe we can try that in the third phase. Maybe you could show the current connection logic — it may answer some of Jesse's questions. Yeah — so the listener for the Kafka connection, that's something that's running inside the agent JVM? Okay, I can show the agent side. Can you see my screen? Yeah. So for the agent, first we create a new thread to listen for the secret. Whenever the agent receives the secret, it acknowledges it and starts the engine — it's a custom engine. The custom engine creates the command transport, connects to Kafka, and starts a consumer and a producer to listen for messages. Does that answer the question, I think?
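A minimal sketch of what such an agent-side listener might look like: a thread polls a Kafka topic for the initial secret message, verifies it, and then hands off to the engine that sets up the command transport. All names here are hypothetical — the real plugin's classes and topic layout will differ.

```java
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Hypothetical agent-side listener: waits for the master's initial secret
// message on a Kafka topic, then starts the remoting engine.
public class SecretListenerSketch implements Runnable {

    private final KafkaConsumer<String, String> consumer;
    private final String expectedSecret; // e.g. an HMAC the agent can verify

    public SecretListenerSketch(KafkaConsumer<String, String> consumer, String expectedSecret) {
        this.consumer = consumer;
        this.expectedSecret = expectedSecret;
    }

    @Override
    public void run() {
        consumer.subscribe(Collections.singletonList("jenkins-secret-exchange")); // assumed topic
        while (!Thread.currentThread().isInterrupted()) {
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                if (expectedSecret.equals(record.value())) {
                    startEngine(); // acknowledge and hand off to the command transport
                    return;
                }
            }
        }
    }

    private void startEngine() {
        // The real engine would create the Kafka-backed command transport here
        // (producer + consumer) and begin serving remoting commands.
        System.out.println("secret verified, starting engine");
    }
}
```

It would typically be started with `new Thread(listener).start()` when the agent process comes up, before the master initiates the connection.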
So, as far as I can tell, from the perspective of a Jenkins administrator it works exactly like the JNLP-type launchers of today: you start some Java process on the computer, you give it the Jenkins URL and some authentication that has to correspond to an existing node definition on the Jenkins master, at some point soon after that the connection is made and it becomes an agent, and if that JVM stops for whatever reason, then you have to start it again. It's a bit different. Actually, both agent and master connect to Kafka, but the Jenkins remoting connection is initiated by the master, so it works similarly to the SSH slaves plugin: the master initiates the connection. The assumption is that both master and agent are connected to Kafka; the master sends a connection request to the agent, the agent verifies that everything is fine and sends a response, the master verifies that the secrets are fine, and then the connection is established. The master still has all the monitoring logic — the ping thread and all the other monitors — so the master is able to re-initiate the connection, and once the master initiates the connection, as Vu Tuan said, the connection gets initialized. So I think the implementation is different from JNLP, and it should be more reliable. One point is that, because we use Kafka, all the messages are stored there, which should help with recovering after a failure — I haven't tested that yet, but we have a buffer in between, so we can consume the messages and restart the agent again; we will try it in the third phase. If we used TCP, those messages would be lost, but now we have intermediate storage. Yeah — if you use Kafka in the standard setup, I mean without storage persistence, then of course if Kafka dies the messages get lost, but otherwise the connection works fine. I spent some time testing that: for example, if you bring down Kafka and restart it, the master reconnects; if you bring down the agent and restart it, in some cases it can even continue working natively, if there were no monitoring interruptions. So if the network breaks between agent and Kafka, the connection can be restored; and if you make Kafka itself fault-tolerant — for example, using message persistence and highly available Kafka clusters — then you can say the communication layer becomes fault-tolerant. We need to experiment with that, but from an architectural standpoint it should be possible even now. Right — you can experiment with things like using low-level Linux network commands to simulate a complete network disconnection, or network hangs, or something like that; in theory you can even simulate packets being delivered in one direction but not the other, and that sort of stuff. Yeah, we have some test automation planned for the next phase, and of course non-success-path tests for network disconnects will be in focus, because it's difficult to test that reliably when you have no test automation.

Is there any plan to integrate this into any cloud provider plugins? Because basically anything that defines an elastic cloud in Jenkins has to know something about the launcher in order to make a connection to a newly created computer, so it would need to be explicitly integrated with the important cloud plugins in order to be useful. Yeah, we have a ticket, I think, for integrating with the cloud API, but we haven't started working on that yet, so I'm not sure it can be done in this phase — we have this ticket for the cloud API implementation. Yeah — so it wouldn't be a single plugin integrating Remoting Kafka with clouds; rather, for any cloud plugin that we have in Jenkins, we would need to add a new option to use a Kafka-based agent rather than a JNLP-based agent, so it would just be patches to existing cloud plugins. Yes — I think this could be done on multiple levels, and if a plugin, for example the Docker plugin, supports custom computer launchers, it can probably be done even now. The only problem is passing the secret to the agent side, but without authentication I bet we could make it work now. No, I don't think it's possible, because what Jenkins core does provide is an interface that allows a cloud plugin to use a launcher provided by a different plugin it doesn't know about, but that only works for master-to-agent launch methods like SSH — the ComputerConnector — and that assumes the master is doing the work to create the machine and set everything up, including starting the JVM and initiating the connection; that doesn't really work here. Or JNLP-type launchers, where the agent is connecting back to the master, which this is not. Not exactly, because, as I said before, the master initiates the connection — the master starts the connection chain — so the only prerequisite is that, by the time the master starts the connection chain, the agent should be connected to Kafka. Right — well, the master may be sending some messages to Kafka, that's fine, but from the perspective of Jenkins it's still the same: the master has to assume that the agent computer has already been configured, that it has this JVM running, that the JVM is connected to Kafka, and that all of those preconditions are there. It's not like the SSH launcher, where the master really does all of the work to take a generic, say, Linux server and get everything needed for remoting to run. Okay, maybe we could take it offline, but I'm pretty confident it's possible, because even now Vu Tuan has created packaging for the agent in Docker, and when you start the package, the agent connects to Kafka. So if we take the Docker plugin with the Kafka launcher, it will first start the container, the container will connect to Kafka, then the launcher starts and the master connects to the agent — so theoretically it should work. Oh yeah, it can be made to work; I'm just saying
that you can't use the ComputerConnector API, which means you need to patch every cloud plugin, like the Docker plugin, separately. Okay — I propose we take this offline; for example, tomorrow we have a status meeting for the project, so maybe we could discuss it a bit more there. Okay. So, I see no other questions in the chat, and thanks again, Jesse.

Just to provide some feedback from the mentor side: the project looks really good, and we've made good progress. Remoting itself is a pretty complicated area, because you need to study a lot of things, and that's why we decided not to implement cloud providers in the first iterations — it would make the project even more difficult — and we focused on simple launchers. But even with that it was a serious ramp-up, and Vu Tuan was really successful at studying the code base: he contributed some patches to remoting itself, and we already got those patches into the weekly releases. I think the project is doing really well, and after this phase we've got the one thing which was missing for productization: security. Now there is SSL encryption, and if you want, you can also play with end-to-end encryption, which Kafka also supports. So I think it's really good progress at this stage; I enjoy the project, and I'm looking forward to seeing it get to the release and adopted in the community. Supun, would you like to say something? Yeah — great to see the project is on track; we have spent a lot of time with Kafka, and I'm looking forward to seeing the release version 1.0 soon. As you mentioned, he's doing a great job. Okay, thank you. So, any last-minute questions? I guess not, so let's proceed to the last presentation: Abhishek will present Pipeline as YAML, or the Simple Pull Request Job plugin — this story has two names. Abhishek, are you ready? Yes, I am ready. Okay, the floor is yours. Okay — are you able to see my screen? Yes.

Okay, hi everybody. I am Abhishek Gautam, working on the GSoC project named Simple Pull Request Job plugin, but we decided to change the name to Pipeline as YAML. I am a third-year student with a computer science background from Visvesvaraya National Institute of Technology in India. I'm a regular competitive programmer, and I have done two internships as a game programmer as well. I am very interested in automation, and that's what motivated me to participate in this GSoC project with Jenkins.

So, the problem is that the Jenkins pipeline is a Groovy DSL, and it is very complicated, even with declarative pipelines; the main aim of this project is to provide YAML definitions for pipeline projects. There is some prior work on this. One is the Travis YAML plugin: it runs .travis.yml as a Jenkins pipeline job, but it does not have any special pull request support, and it also doesn't make much sense to run a configuration written for another platform. Another one is the Codeship plugin: it converts steps.yml and services.yml to scripted pipeline code — these YAML files are supported on the Codeship platform — but this plugin has never been released. The third one is Jenkins Pipeline Builder, which does most of the work, but it is a non-Java tool that cannot easily be converted into a Jenkins plugin.

So my project objective is to build a plugin with which we will be able to define pipeline jobs as YAML. My mentors are Martin d'Anjou, Kristin Whetstone, Jeff Pearce, and Oleg Nenashev. Yeah — so, as I told you previously,
the main aim of this plugin is to provide a YAML file with which pipeline jobs can be configured. It should also integrate with Bitbucket Server, Bitbucket Cloud, GitLab, and GitHub; it should detect the presence of reports in conventional locations and publish them; and it should publish the build result and build status.

As for the design of this plugin: we decided to build it on top of the multibranch pipeline plugin, because it has a nice interface for showing branches and pull requests separately, it can detect the trusted revisions in the repository and automatically fetch them, and it has functions for publishing the build status to the repository. The next decision was to convert the YAML configuration file to declarative pipeline, and we are using the Configuration as Code (JCasC) plugin to configure the steps and get a step object, from which we generate a pipeline snippet for each step. One more decision: to specify user scripts to run, users give only the relative path, without the script's extension, and the plugin detects the machine type — if it is Unix, it appends .sh and the shell script is called; otherwise it appends .bat and the batch script is called.

By the end of coding phase one, we had a git push step, used to push the changes to the target repository; build and test were performed by user-defined scripts called from the Jenkins YAML file; agent configuration was also in the Jenkins YAML file; we were able to have JUnit reports, FindBugs reports, and saved artifacts configured from the YAML file; and a basic interface for converting the YAML file to a pipeline was available.

Current features: we have a step configurator based on the JCasC plugin, which is used to get the step configurations from the YAML file and then convert them to pipeline script. It has some limitations — I found two. First, it has difficulty with enum parameters, but that was resolved recently and will be fixed in the next release of the JCasC plugin. Second, it has difficulty with some classes, such as certain test data publisher classes, and that has not been fixed yet. A specific format for the Jenkins YAML file has been designed; the details about the significant sections and how to construct a job from the YAML file will be in a blog post — the blog post is not ready yet, but it will be out tomorrow. A tools property has been added to the agent section of the YAML file, so that the agent section is responsible for the overall configuration of the machine. Some tests for the plugin have also been written, and I was previously using manual indentation and manual tabs to generate the pipeline code, which was removed in this phase.

The step code generation process works like this: say we have a JUnit step here, and the user wants to configure all these things. We parse the YAML and pass it to the JCasC configurator; then we get a step object, which we pass to the snippet generator, and we get a snippet for the step that the user defined.
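A rough sketch of that YAML-to-snippet flow. The configurator and snippet-generator calls below are simplified stand-ins for what the Configuration as Code plugin and the pipeline snippet generator actually expose; treat all class and method names here as assumptions, not the plugin's real API.

```java
import java.util.Map;

// Simplified stand-ins for the real machinery: the JCasC configurator turns a
// map of YAML attributes into a configured step object, and the snippet
// generator turns that object back into a line of declarative pipeline.
public class YamlToSnippetSketch {

    /** Stand-in for the JCasC configurator: YAML attributes -> step object. */
    static Object configureStep(String stepName, Map<String, Object> attributes) {
        // The real code would resolve the Descriptor for `stepName` and bind
        // `attributes` onto it via databinding; elided for this sketch.
        return stepName + attributes;
    }

    /** Stand-in for the snippet generator: step object -> Groovy snippet. */
    static String toGroovySnippet(Object step) {
        // The real generator serializes the step's databound parameters.
        return "junit testResults: 'target/surefire-reports/*.xml'";
    }

    public static void main(String[] args) {
        // YAML like:   - junit: { testResults: "target/surefire-reports/*.xml" }
        Map<String, Object> attrs = Map.of("testResults", "target/surefire-reports/*.xml");
        Object step = configureStep("junit", attrs);
        System.out.println(toGroovySnippet(step));
    }
}
```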
You can see an example of the YAML file here. We have an agent section, and a tools section inside it where we can define the tools and the type of agent we want. Then we have a configuration section, in which we have a push-PR-on-success variable: if it is true, the changes will be pushed to the target branch when the build is successful and all tests pass. Another thing in the configuration section is the trusted users who may approve PRs; a build should start when one of them approves the PR. Then we have an environment section, in which users can define any environment variables and the credentials they are going to use in their scripts. Then we have a stages section, in which users can define all the stages they want and how they want the build to run. Also, if the user wants only one simple stage, we have an option — a section called steps — where users can just give a list of steps to run, and the name of this stage in the Jenkins UI will be "Build".
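Reconstructed from this walkthrough, such a file might look roughly like the sketch below. The exact keys (tools, pushPrOnSuccess, trustedUsers, and so on) are assumptions based on the description, not the plugin's final schema.

```yaml
# Hypothetical Jenkins YAML file, reconstructed from the walkthrough above.
agent:
  label: any
  tools:
    jdk: jdk8               # tool installations the agent should provide

configuration:
  pushPrOnSuccess: true     # push to the target branch if the build passes
  trustedUsers:             # PR approvals from these users trigger a build
    - admin

environment:
  variables:
    DEPLOY_ENV: staging
  credentials:
    - id: my-git-creds

stages:
  - name: Build
    steps:
      - sh: build           # resolved to build.sh on Unix, build.bat on Windows
  - name: Test
    steps:
      - sh: test
```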
Plugin configuration: how should we configure a job so that we can use this plugin? We need to create a multibranch pipeline project, and we can add a branch source — Git, Bitbucket, GitHub, whatever — and give the credentials, owner, repository, and the pull request behaviors we want. Then, in the build configuration, we have to choose "by YAML file" instead of "by Jenkinsfile", save the job, and we will be able to use the plugin.

Okay, for the demo, I have a job already configured, and I have built PR number three. The YAML file for it is this one: we have configured the agent as "any", and we have three stages in it. The first has a shell step which calls a script; in the build and test stages we also have build and test scripts; and we archive the artifacts. You can see here that the pipeline has been generated and the build was successful.

The plan for phase three is to add multibranch pipeline features such as webhook support — we want to include webhook support here too, so that whenever a PR is created and approved, the build starts automatically. The part where users can give trusted usernames — so that the build starts only when one of those people approves the pull request — has not been implemented yet, so that is also one thing. Then we want to finalize the documentation and release the plugin. After the release, we want to support the `when` declarative pipeline directive, and we want to support hierarchical report types: we have a report section in the Jenkins YAML file which only supports XML reports for now, but we want all reports to be published through that section. Some unit tests and acceptance-test-harness tests also need to be written, and we also want to support automatic workspace cleanup when a PR is closed. One more thing: we want to refactor the snippet generator — the function that generates the pipeline snippets — into extension points. These are the links for the Gitter chat, the demo project, and the project repository. Thank you, and time for questions.

Okay, thank you, Abhishek. Jesse raised a lot of questions about the implementation in the chat, so Jesse, maybe you would like to ask some of them. Yeah, not so much about implementation, just about the goals of the project, I guess. There seem to be a lot of different features here, and some of them overlap with stuff that happens in other plugins, or could happen in other plugins, so I just wasn't sure what the overall goal for this was and how it would mix with other plugin features that get developed in the future. I think that's what most of my comments were about. I guess maybe this is more a question for Martin. That's a good question, Jesse. At the start, the initial idea of this plugin was to give the user the ability to have a very short description for configuring a pull request job, doing this with a YAML file where the user would simply declare the very high-level configuration options of building a pull request. That seemed like something interesting to have. I guess he's asking about scope — like, a pull-request-approvers feature is something that really applies equally to any kind of multibranch project, so it's not really about this particular Jenkins YAML format; it's a feature of the branch source, the Git integration, something like that, potentially. Martin — sorry, Martin dropped from the call for a second. Regarding potential usages: Jesse is right that these features may belong in the multibranch pipeline or other such plugins. We actually tried to reach out to plugin developers to get them involved during community bonding, but they were not available at the time, and our decision was to implement an engine using the multibranch pipeline and offer the features from there, because declarative pipeline was one of the possible engines we could reuse. So we decided to start the implementation from there, and then, if people join and want to contribute or generalize the solution, it's possible: once multibranch pipeline supports such a feature, the only thing Abhishek would need to do is replace the current custom implementation with the one offered by multibranch, and that's it. I guess everybody — mentors and students — is having some connection issues. Does that answer your question, by the way? Yeah, that's okay, thank you. So, do we have any other questions? Not in the chat, from what I can tell.

Okay, so while Martin is trying to reconnect, Kristin, maybe you would like to summarize the project? Sure. This last period was really exciting: there is the alpha release of the plugin, so if you find it interesting or want to poke around, you can go to the experimental update center, install it, and try using it to see what the features are. For Pipeline as YAML we just released alpha 2, so that's been really good, and I guess the more we find out what people are doing with it, the more useful this can become — there are only five of us here, so so far it's just whatever we're thinking would be useful for people. I think Abhishek has done a great job getting that out and getting the YAML syntax all figured out. So, good job. Thank you, Kristin. Martin? Yes, I missed part of the conversation, but I think the most important part during this phase was to stabilize the definition of the YAML file, and this enables the project to move forward, so I want to thank Abhishek for his work on that. Thank you, Martin. And as Kristin said, the project is available in the update center — actually, every project we presented today is available in the experimental update center, so if you go to the plugin repositories, you can find information on how to launch the projects and how to try them, and there are announcement blog posts which describe the procedure.
So if you want to try any of the presented projects, it's all really possible, and for the next phase one of the main goals will be to get them into the main update center, so they will be publicly available. Okay, do we have any outstanding questions? I've got one question from Uli Hafner about the Code Coverage API plugin: he asked about source code browsing, and whether it is part of a custom implementation or part of the Code Coverage API. Shenyu, would you please respond to this question? Yeah, it is part of our Code Coverage API implementation. He also asked whether it would be possible to generalize this engine, because Uli is the maintainer of the Warnings plugin, and he would be interested in somehow reusing this engine in that plugin if possible — at least that's how I understood his question on IRC. The Warnings plugin shows particular warnings for the code — there are multiple scanners — and it would be interesting to be able to show these warnings directly in the code. So if I understand his question correctly, he would be interested in, for example, using the engine from your plugin to show a warning at a particular line of code. Oh yeah, I think we can do it, because the source code result is an extension point, so it could be possible — but I would need to adjust my code. Right. And my follow-up to that was actually about a feature of GitHub: currently GitHub supports displaying warnings directly in the code, so somebody can submit a code result to the pull request. For example, if we take your plugin and Abhishek's plugin, it would be great if, when somebody submits coverage, we could get some comments on the pull request — for example, when the code is not covered. Would such an integration be possible and interesting to you? Oh, I think it is possible, yeah — the Checks API Jesse mentioned. Okay, I will create a ticket for it. Right, thank you. Awesome, okay, thank you too.

So I guess there are no other questions from the chats — does anybody want to ask something? I guess not. Then thanks, everybody, and thank you Abhishek, Shenyu, and Vu Tuan for the presentations — and thanks a lot for all your work; we are looking forward to the next iterations, to see how the projects evolve. And of course, thanks to everybody who watched this Jenkins Online Meetup. If you want to get more information about the projects, I'll just show some links — do you see my screen? Yeah, okay. So you go to jenkins.io/projects/gsoc; you can find all the projects here, and all the information too. For example, if you want to ask more about the Simple Pull Request Job plugin presented by Abhishek, you can scroll down, and here you may find the chat — each of our projects has its own channel, so you can just join there and ask any specific question you have. Then there is the GitHub repository and other resources, so jenkins.io/projects/gsoc should be the landing page if you have any questions. And as we said during the first evaluation, each student has a blog post — for example, here's Abhishek's blog post. Hmm, it seems the link is broken; okay, let's try to find it here. Yeah, "Pipeline as YAML" — I think the end of the URL was missing. So here you may see the description, and you can also find the guidelines on how to run the projects, and such blog posts are available for each project.
So if you're interested, just use these materials and you will find examples and guidelines, so you can start trying the projects even now. That's what I wanted to share at the end. And of course, we will be publishing all the slides, and we will be publishing this video, so you will be able to watch it later. So thanks again, everybody — thanks, mentors; thanks, students. I'm going to stop the broadcast in a few seconds. Okay.