Okay, we are live. Hello, everyone. Welcome to the Jenkins online meetup. Today we have the final presentations by the Jenkins Google Summer of Code students, and we will have several demos, so they will present the results of their work. Let me share my screen. Do you see it? Okay. So I have a short slide deck, as always. Just in case you have never participated in Google Summer of Code, it's a wide initiative: thousands of students and hundreds of open source projects participate each year, and in 2018 Jenkins also participates. We have three ongoing projects this year: the Code Coverage API plugin by Shenyu, Remoting over Kafka by Vu Tuan, and the Simple Pull Request Job Plugin by Abhishek. All these projects reached the last stage, so we have final evaluations starting in GSoC tomorrow, and they will be presenting the final versions today. Actually, GSoC for us started in February; the public part of it, the community bonding, started in April, so it's almost five months ago. By this time there have been three months of coding and also one month of community bonding, where teams worked on designs and on getting introduced to the community. All these stages have passed, we didn't expect major changes in the projects, and all projects have already been released as release candidates, 1.0 versions, or at least alpha releases. So today, if you want to ask any questions during the presentations, we have several channels. The recommended channel is the Jenkins Google Summer of Code channel in Gitter, under jenkinsci. If you want to join the call and ask questions, there is also a participant link, but you can just join the chat and ask questions there, and we will pass those questions to the students during the presentations. And if you want to ask questions offline, you can just join the project chats.
For example, there is a chat for the Simple Pull Request Job Plugin, and there is a chat for the Remoting over Kafka project. You can find the references to these and other channels on jenkins.io: on the jenkins.io projects page there is a GSoC page, and it links all the other resources. For example, you can go to the Code Coverage API plugin page, and at the bottom you can find all the links: chat, GitHub repository, et cetera. So that's all for the introduction. We will start with the project presentations. Before the call we agreed that Abhishek will present first, then Vu Tuan, and then Shenyu; that is the order of presentations. So let me give the word to Abhishek so that he starts his presentation. Abhishek, are you ready? Yes. Okay, let me share my screen. Okay. Can you see my screen? Yes, we can. So hi everyone, I am Abhishek Gautam, a final-year student from BNIT, a research institute of technology in India. A little bit about me: I'm a regular competitive programmer, and I am also interested in game programming and in automation, which motivated me to participate in GSoC with Jenkins. Okay. So at this point in time, there is no YAML support for defining jobs in Jenkins. My project was to define a YAML syntax or format so that you will be able to define jobs and build them. We have the Groovy DSL in Jenkins Pipeline, but it is very complicated, and it remains complicated even with Declarative Pipeline, so YAML definitions can be used to simplify that. Okay. So there is some prior work. First is the Travis YML Plugin, which uses the .travis.yml file that the Travis platform uses to build repositories. The thing is that it doesn't make sense to use the settings of a different platform in Jenkins. Also, it doesn't support pull requests, and the last commit for that plugin was on 14 November 2016. Okay. Second is the Codeship plugin.
It converts the steps.yml and services.yml files used by the Codeship platform into scripted pipeline code. It has the same problem, that it doesn't make sense to use the settings of a different platform in Jenkins, and the plugin also has not been released so far. Third is Jenkins Pipeline Builder. It's an external, non-Java-based tool which cannot be converted into a Jenkins plugin easily. It does pretty much everything: it supports pull requests and it also sends commit reports. But the problem is that we cannot convert it into a Jenkins plugin easily. Okay. So my project is to create a new plugin which will be used to define jobs in YAML. My mentors are Martin, Christian, Jeff, and Oleg Nenashev. Okay. So the objectives of this project are to define a YAML file so that a job can be configured using it, and it should interact with Bitbucket Server, Bitbucket, GitLab, and all the different platforms. The plugin should also detect the presence of certain types of reports which are present at conventional locations and publish them automatically; if they are not present at the conventional locations, the user should be able to provide the location of the reports. And it should publish build status. Okay. So we decided to build the plugin on top of the multibranch pipeline, because it has a nice interface that shows builds for pull requests and branches separately, it can detect the trusted revision in a repository, and it can publish build status to the repository. So we get these three things from the multibranch pipeline. Okay. Then we decided to convert the YAML configuration to Declarative Pipeline, and the steps defined in YAML will be handled by the Jenkins Configuration as Code plugin. So in coding phase one, I implemented a Git push step and the Jenkins YAML file.
In coding phase one, we wrote code to use Jenkinsfile.yaml to build a project or repository. Then I implemented a Git push step, and tests that the user can run. Okay. So we use the Jenkins YAML file to build the project. We had agent configuration in it, and I implemented three things: harvesting reports using JUnit, harvesting reports for FindBugs, and an archive artifacts section. I also implemented a basic interface so that the YAML configuration can be converted to Declarative Pipeline. Here's the link to the blog post for phase one. Then in coding phase two, I implemented step configurators so that we can convert steps defined in YAML to Declarative Pipeline. I found two difficulties there. The JCasC plugin had difficulty with enums, and that was resolved in a PR by the JCasC plugin developers. The other difficulty was with classes that are nested within other objects; as far as I know, that is not resolved yet. Then we defined a proper format for the project so that the Jenkinsfile.yaml can be written and used by users. I also coded the tools property in the agent section of the Jenkins YAML file, and I wrote some tests. About the step code generation: what we do is, a user defines a step, let's say a junit step like this, and we parse it and pass it to the JCasC configurator, which returns us a step object. Then we use the Snippet Generator that is already a part of Jenkins, and that gives us the Declarative Pipeline code, and we can just place it at the appropriate location. In phase three, I wasn't able to do much work because my college started and placements were going on.
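As a sketch of that flow (the step name and report path here are illustrative, not taken from the talk): a YAML step such as `junit: "reports/**/*.xml"` is parsed, handed to the JCasC configurator to produce a step object, and the Snippet Generator then emits the equivalent Declarative Pipeline code, roughly:

```groovy
// Illustrative output only; the exact generated code depends on the plugin version.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                // Step emitted by the Snippet Generator from the YAML definition
                junit 'reports/**/*.xml'
            }
        }
    }
}
```

The key point of the design is that the plugin never interprets steps itself: JCasC instantiates the step object, and the existing Snippet Generator serializes it back to pipeline syntax.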
So what I did was attach the pipeline code generators to extension points so that, in the future, plugin developers can contribute to the generators I have defined. I wrote some unit tests and also did some documentation work. So one example of a YAML file is here, if you can see my screen. This is a dummy project which I am using to test the plugin. Here we have an agent defined as any, and we have stages named first build and test, and they call a script which is present in the repository itself. Then in the configuration section we have our archive artifacts section; we can give the path to any artifact we want to archive, okay. And if you want to have a look at the YAML examples, we have them here. This is the repository, and in it we have YAML examples where you can find everything you need to define any configuration in Jenkinsfile.yaml. Next is plugin configuration, okay. So once we install the plugin, how should we use it? We need to create a multibranch pipeline project, and then go down to build configuration, where there is a mode option; we need to choose "by Jenkinsfile.yaml". Then in branch sources, if our project is on GitHub, we add a GitHub branch source, and then we can specify the owner, credentials, and the repository here. Okay. So after we save this configuration, the plugin will start a scan of the repository; it will try to detect all the branches and the pull requests, and it will automatically start a build for all the newly found branches and pull requests. You can see here I have already finished a build. You can see that the generated pipeline is this: we have agent any, and we have the stages first build, test, and artifact stage.
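A minimal Jenkinsfile.yaml along the lines described might look like the sketch below. The section names follow the examples discussed in the talk, but the final schema was still being settled at the time, so treat them as illustrative:

```yaml
# Illustrative sketch; the plugin's final schema may differ.
agent: any
stages:
  - name: First Build
    steps: sh './scripts/build.sh'   # script stored in the repository itself
  - name: Test
    steps: sh './scripts/test.sh'
configuration:
  archiveArtifacts:
    - path: 'target/*.jar'           # any artifact path you want to archive
```

The plugin converts this to the equivalent Declarative Pipeline before the build runs, which is why the stage names in the generated pipeline match the YAML one-to-one.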
And yeah, you can see the similarity between this YAML file and this Declarative Pipeline code. And this build was a success, okay. So the future tasks are to release the 1.0 version of this plugin, and I need to test the plugin on platforms like GitHub, Bitbucket, and GitLab. Actually, I have tested it for GitHub and Bitbucket only, not for GitLab. I also want to include webhook support in this plugin, so that whenever a pull request is created or a commit is pushed, a build automatically gets started, okay. And we were thinking there should be an option so that a build starts only when the pull request has been approved by a trusted user; we have a ticket for that as well. Then, when a pull request is closed, we want the workspace to be cleaned up. And there is no `when`-like support in Jenkinsfile.yaml for now, which is present in Declarative Pipeline, so we want to incorporate that too. We also want to have hierarchical report types. We have a section called reports in the Jenkinsfile.yaml, but it only works for XML files and just makes use of the JUnit step. There are more types of reports, so we want to support those as well. Then I need to write acceptance-test-harness tests for this plugin, okay. So here are some reference links. And yeah, thank you. Any questions? Thanks for the presentation. While we wait for questions from the chat, I have one from my side. I had a question about the release: have you been presenting the last release or the current state of the master branch? The current state of the master branch. And when do you plan to release it? We have a pull request to decide the name of the plugin and the name of the YAML file.
So when we get settled on that, I will release the 1.0 version. It would be great to get at least an alpha version for now, so that people can try the features you have been presenting. Okay, then I guess I will release the third alpha version today or tomorrow. Okay, thank you. So are there any other questions? Yes, I have a question; this is Martin speaking. Yes, Martin. My question is regarding the publishing of other types of reports. What are the next steps to support other types of reports? When the plugin development continues, where do we start looking for publishing other types of reports? What do you mean by "where"? Can you repeat the question? In your presentation, you talked about supporting other types of reports and hierarchical report types. Yes, hierarchical reports. So how do we continue that? How does the project continue with that? Where do we start doing that? Okay, yeah. I have not researched that yet, so I cannot comment on it. Okay. Thank you. Thank you, Martin. Does anybody from the mentors want to comment regarding the project state? Martin, Christian, Jeff? Regarding the project state: as Abhishek stated in his presentation, in the current state of phase three we can probably only do an alpha release for now. The main activity in phase three was to detach the code generator into extension points, so the higher-level features would be for a future phase of this project. This project could continue to be developed in the future. Okay, thank you. Jeff, do you want to say something? Sure. So, to align with what was already presented: in phase three, progress was a little bit limited by Abhishek's available time. The project itself had the goal of creating a simple way of building pull requests based on YAML.
And I think throughout the course of the project the general goal was accomplished. We're now left with a lot of interesting work to make this project really a success in the community, but with Abhishek's work we got a nice piece of code that will get us started with having something that can be easily used for building projects with YAML. Thank you, Jeff. Okay. There were no other questions in the Gitter chat, so I propose to move to the second presentation. Vu Tuan will present his work on Jenkins Remoting over Kafka. Vu Tuan, are you ready? Yes, I'm ready. So let me share my screen. Okay. Can you see my screen? Yes, there we go. Okay. So hello, everyone. This is my presentation for the third phase of GSoC. My project is Jenkins Remoting over message bus/queue. A short introduction first: I study computer science in Singapore, and working together with me on this project I have Oleg and Supun as my mentors, plus support from Jeff and the Remoting project developers. So here is an overview of the project. Jenkins Remoting uses TCP for master-agent communication, and there are existing problems with these protocols. First, if the connection of an agent fails, the build fails and we have to start things again. Another issue is with traffic prioritization and multi-agent communication. This impacts Jenkins stability and scalability. So the project is to develop a plugin which uses Kafka; we decided to use Kafka to add a fault-tolerant communication layer to Jenkins. Why did we decide to use Kafka? First, we thought about using a traditional message queue such as RabbitMQ or ActiveMQ, but during the community bonding period we discussed it and saw that Kafka may have more suitable features for the project.
First, Kafka is not a queue; it uses a commit log. Kafka also has more support for data streaming, which RabbitMQ lacks. And Kafka is said to have better support from its development community. Going through the high-level architecture of the plugin: in order to run the project, we have multiple components. As you see here, we have the main components: the master side and the agent side, connected by Kafka. Previously, master and agent were connected by direct TCP communication, but with the support of the plugin all the commands go through Kafka, and Kafka is secured and encrypted with SSL. On the master side, we have the plugin with the configuration where you can set the host and port of Kafka, and we have added support, using the Credentials plugin, for configuration where the user can store the secrets needed to connect to Kafka. Another important component is the launcher. The launcher basically wraps the Kafka command transport, which we extend from the command transport in Jenkins Remoting core; it creates producer and consumer instances to connect to Kafka. Another important part is the secret manager, to ensure the connection security between master and agent. Moving to the agent side, we have a custom Engine, which also creates a channel using the Kafka command transport, and a client listener, which is likewise in charge of the secure connection between master and agent. The agent currently bundles the remoting JAR, which we plan to remove in the future. So, okay, an overview of the set of features we now have in the plugin: we have global configuration, supported by the Credentials plugin; we have end-to-end encryption using the SSL features of Kafka; we have a launcher on the master side to launch the agent with Kafka; and on the agent side, we have a custom JAR to run the agent from the CLI.
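For reference, SSL-secured Kafka clients (such as the producer and consumer instances the transport creates) are configured with the standard Kafka client properties along these lines; the broker address, file paths, and passwords below are placeholders:

```properties
# Standard Kafka client SSL settings (host, paths, and passwords are placeholders)
bootstrap.servers=kafka.example.com:9093
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/path/to/kafka.client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```

With the keystore entries present, the broker can also authenticate the client certificate, which is what enables per-client credentials on top of plain encryption.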
So with all of this set up, the plugin provides a new way for us to run jobs and pipelines in Jenkins over Kafka. For the next part of my presentation, I will show a live demo of what my plugin does. I set up a ready-to-fly demo which uses Docker Compose. The demo starts a pre-configured master and agent, and they connect automatically using the Kafka launcher. To do this, I use Groovy scripts and the Configuration as Code plugin. Kafka here is secured and authenticated with SSL. And there are a few demo jobs so that later I can show you how they launch on the agent. Going through the demo: if you want to find information about it, you can go to the GitHub repository, where I wrote the instructions for running it on a local machine; it's updated every time with the newest version of the plugin. For the demo, I use a Makefile to run it, and we have the Docker Compose YAML and the Configuration as Code YAML, as well as the Groovy scripts. So, okay. Coming back to the demo: we have the Jenkins instance here, with the agent automatically connected using the Kafka agent. And if you go to the global configuration, yeah, you can see the plugin. If you install the plugin, we have support to configure the Kafka connection string here, and also the credentials to handle the security. Coming back to the agent: there's an option to launch the agent with Kafka here, where you can choose whether to use SSL or not. And there are a few demo jobs here. So I will try to run this job. This job basically does a ping to Google. I will run it, and the thing is, if we launch this job, it will execute, and all the commands are sent to and received from Kafka, so we can run it remotely. And yeah, the job finished successfully.
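The ready-to-fly demo topology described above can be pictured as a Compose file roughly like the sketch below. The service names, images, and environment variables are illustrative, not copied from the actual demo repository:

```yaml
# Illustrative sketch of the demo topology (not the actual file from the repo)
version: '3'
services:
  zookeeper:
    image: zookeeper:3.4            # Kafka needs ZooKeeper for coordination
  kafka:
    image: some-kafka-image         # any Kafka image with SSL configured
    depends_on: [zookeeper]
    ports: ["9093:9093"]
  master:
    image: jenkins/jenkins:lts      # pre-configured via Configuration as Code
    environment:
      - CASC_JENKINS_CONFIG=/var/jenkins_conf/jenkins.yaml
    ports: ["8080:8080"]
  agent:
    image: remoting-kafka-agent     # custom agent JAR, connects through Kafka
    depends_on: [kafka, master]
```

The value of this layout for a demo is that `docker-compose up` brings up broker, master, and agent in one step, and the agent attaches through Kafka without any manual TCP configuration.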
Yeah, so honestly, if you want to reproduce this demo, you can try it here. Actually, some people have already tried it and given me feedback on the project. So, coming back to the presentation: this is a quick recap. This is what I just showed you: we have the global Kafka configuration, and we have the option to run the agent with the Kafka launcher. And, this was not shown in the demo, whenever you start an agent, you are provided with a connection script to connect your agent over SSL from the CLI. And this is a run of jobs and pipelines with a Kafka agent. So here is a summary of the coding phases. The coding started in May, and we went through three phases. The most important part of the first phase was creating a command transport implementation to support Kafka. In order to do this, we had to patch Remoting and Jenkins core to make things accessible. After that was working, we decided to use Kafka as the final implementation for the project. The second phase was more about security, where we supported authentication and authorization with Kafka, as well as the secret mechanism. There was also an improvement of the producer and consumer model to ensure the reliability of what we have in the plugin. Another important milestone was that we released an alpha version, the first release of the plugin. Moving on to the third phase, the important item was the official 1.0 release. It was released in the Jenkins main update center, and the current version is already 1.1, which I released just yesterday. To receive feedback from the community on the release, we set up the ready-to-fly demo, in this case using Configuration as Code as well as Groovy scripts. And we also received some feedback from the community through Gitter and email.
Another thing is test automation, as well as documentation, where I updated the Remoting documentation for the plugin implementation. There was bug fixing and cleanup work in this phase too, and that is continuous work. For future work, there are a few features on the table: for example, chunking capabilities, consumer pooling, and agent recovery. I have a plan to present this plugin at Jenkins World this year, so if some of these features are done between now and then, that would be great. These are the relevant links for the project: we have the GitHub repository and the Gitter channel, and you can find my blog posts as well as the demo instructions. Yeah, thank you. And maybe we can see each other at Jenkins World this year. Thank you for the presentation. So there were some questions. There was a question from Jesse about the security side: is there a way to specify different credentials for master and agent, so that credentials can be revoked when they are considered compromised? Can you see the question? Yeah, I'm reading it. Sorry, I did not understand the first part. Okay, so the first part is whether you can set different credentials for the master and for the agent, so that you can configure different permissions. Yeah, okay, so the thing is, the credentials are needed to connect the producer and consumer instances to Kafka, and the producer and consumer setup for both master and agent is the same. I mean, it's not like master and agent connect directly; they both connect to Kafka. The master needs to provide credentials to connect to Kafka, and the agent also needs to provide credentials to connect to Kafka. Sorry, but they can be different, right? Yeah, they can be different. Yes, we can set up a different user and a different secret. Yes, it can be done.
Yeah, so we can set up a specific username and password for the master, and then a different username and password for the agents. It depends on the system admin to do this. Mm-hmm, thank you. There was also a question from Jeff about performance benchmarks. Benchmarks, we, I mean... I can start on this one, if you prefer. Okay, so for benchmarking, we haven't done anything yet, but we can create a ticket for this. Which performance benchmarks should we consider? The speed of running jobs, or the round-trip time? So, there is a number of performance benchmarks located in the Remoting project. It's about the speed of event delivery, traffic throughput, and also class loading speed and other such things. But we have never launched any benchmark as a part of this project, because time was limited, and I think it doesn't make much sense to run performance benchmarks at least until there are some performance improvements to be applied. For example, Vu Tuan has referenced the pluggable Kafka listeners, so I think that would be one of the prerequisites for benchmarking. I think we can create a ticket for this and see if we can define the criteria, yeah. Why not? So, I see Jesse also has a question about the security model of the architecture. Okay, so first, the agent can only join if it knows the secret; even without the security in Kafka, it needs to know the secret, which is passed in on the CLI here. It's similar to JNLP, where the agent needs to know the secret. I'm not sure about the deeper level. My proposal would be to spend more time on this after the main presentations, so that we finish the presentations and then have time to discuss low-level things. Okay. And then there was the question about failure; actually, we have a ticket for agent recovery.
Yeah, so basically this ticket is about a proposal; I think it was proposed by Federico last week. Basically, when we have a messaging connection and the agent disconnects, we can recover: because we have all the messages in Kafka, if the agent can reconnect, then it's possible to continue running jobs. But we haven't implemented it yet; for now I just created the ticket. Okay. So are there any other questions? I guess not for now. So I'll just briefly summarize the project from my side as a mentor of this project, and then we will move on to the next presentation. For me, as I said during the previous presentation, it was a pretty complex project, because it required a lot of things to be studied, and obviously creating a production-level Remoting-over-Kafka implementation has never been in the scope of the project. Our idea was to create an operational implementation which would be able to perform tasks, and which would have some level of stability and failover. And I think this task has been completely achieved, so we have a fully operational implementation. I have an instance where this plugin is running; it has been up for something like two weeks, with about a hundred jobs happening on Kafka agents every day, and so far, so good: there have been no issues with the implementation. So I think that from the functional perspective the project is complete, and most of the areas for improvement would be about performance at large-scale installations and stability improvements for edge cases. If somebody is interested in trying the project and contributing to it, that would be really interesting. And I will be happy to see Vu Tuan continue working on this project after GSoC, because the results are really good.
A milestone point. Okay, Supun, would you like to say something? Yeah, sure. So, during the community bonding period he did great research to select Kafka as the best solution for the project, so from the beginning he showed great commitment. And also, we had a released version, and he did a great job during this whole period. And he's planning to continue his Remoting project, so it's great work so far. Okay, so if we're lucky we'll have some time to work on this project together at Jenkins World. Let's see. Okay, so thanks again, Vu Tuan. Let's continue with the final presentation. Shenyu will present the Code Coverage API plugin and several other projects on top of it. So, are you ready? Yeah, I'm ready. So, I'll share my screen. Can you see my screen? Yes, we can. Yeah, okay, fine. So, this is the third-phase evaluation for my Jenkins GSoC project. My name is Shenyu Zheng, and I am working on the Jenkins Code Coverage API plugin this year. My mentors are Steven, Fupeng, Jeff, and Ulrich. About me: I am a third-year student in computer science and technology at Henan University in China, and I like programming, reading books, and philosophy. So, regarding this project: there are a lot of plugins which currently implement code coverage. For example, we have the Cobertura plugin, the JaCoCo plugin, the Clover plugin, and so on. Those plugins use similar charts and similar content; they all show coverage for different metrics. For example, they will show line coverage of package A, line coverage of package B, line coverage of method C, and coverage of method D. So it would be best if we could have an API plugin which does the most-repeated work for them, like parsing reports, showing coverage results on the result page, and processing fail conditions and thresholds.
Also, it would be better if it could provide an easy-to-implement interface for the currently existing coverage plugins, and for others implementing new plugins, to build on our plugin. So, for now, our plugin supports several coverage tools. We have embedded support for JaCoCo, and we have other plugins as extension points of the Code Coverage API plugin, like the Cobertura plugin and the llvm-cov plugin. Our plugin now supports these features: we have a modernized coverage chart, a coverage trend chart, source code navigation, support for parallel pipelines, support for report combining, and a REST API. We also have fail conditions and flexible threshold settings, and other small features. I will show them in a live demo. So, the modernized coverage chart: as you can see, we show the coverage summary of the last successful build on the project page. We can also see more details in the build result. On the first-level result page, we can see a coverage summary of the different reports, and we can click it to see more details, like this. We can see the coverage of different metrics, and also the coverage of child elements, like packages. If we want to filter the coverage to only show specific items, we can use this control to show just the coverage we care about. We can also see it as a coverage chart if we want. We also support a coverage trend: as you can see, we show the coverage trend across builds, like this. So we improved the coverage result page. Besides that, we also support source code navigation. You can configure it on the configuration page, like this, where you can specify different source file storing levels.
If you specify it as "save last build" or "save all builds", it will save the source files and show them with coverage information; we save them under the build directory. So you can see the source code coverage at file level. For example, here you can see the source file with coverage information. Our plugin can also be configured via pipeline, and we support parallel execution: you can call our plugin in different branches, and it will match them, aggregate them, and show the result, like this. So, let me build it now. As you can see, we have to specify these two; this is a bug, since I have changed a lot of code recently, so I will fix it later. Embarrassing. Besides parallel execution, we also have report combining. You can add a tag for different reports, like this, and our plugin will match them and aggregate them into one report. Oh, sorry, I used different tags; they need the same tag so that the plugin can match them. I typed it wrong. So, the two reports are merged into one, like that. Also, we have a REST API. We provide a REST API for others to reach our coverage data, so you can get the coverage data by using this REST API, and you can also get the trend data. Besides that, we also have flexible fail conditions and thresholds. You can specify different thresholds here. For example, if the line coverage is below 16, the build will fail, and this tag marks it as unhealthy. So, we can save it and build now. As you can see, this build failed, and you can see the details: line coverage in Cobertura is lower than 16, so it failed the build. We have other small features, but due to the limited time, I will not show them. So, for the phase 3 plan, I have mainly done these things.
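In pipeline form, the configuration demonstrated above can be sketched roughly as follows. The `publishCoverage` step and adapter names come from the Code Coverage API plugin, but the exact threshold fields and the report paths here should be treated as illustrative:

```groovy
// Sketch of a Code Coverage API pipeline configuration (fields illustrative)
pipeline {
    agent any
    stages {
        stage('Coverage') {
            steps {
                publishCoverage adapters: [
                        jacocoAdapter('target/site/jacoco/jacoco.xml'),
                        coberturaAdapter('coverage.xml')
                    ],
                    // keep sources of the last build for source code navigation
                    sourceFileResolver: sourceFiles('STORE_LAST_BUILD'),
                    // fail the build when line coverage drops below the demo value
                    globalThresholds: [[thresholdTarget: 'Line',
                                        unhealthyThreshold: 16.0,
                                        failUnhealthy: true]]
            }
        }
    }
}
```

Because both adapters feed one `publishCoverage` call, their results are aggregated into a single report, which is the report-combining behavior shown in the demo.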
support for combining reports within a build, making the Coverage API plugin more generic, migrating LLVM coverage into a new plugin, publishing the code, fixing bugs, and writing documentation. This is the architecture of my plugin. So, we can configure the plugin in the build configuration and also in the pipeline configuration. The plugin will read these configurations and find the adapters, then pass those adapters to the coverage processor and start the parsing process. We first find the reports, then convert them using the adapters passed in by the coverage publisher, and accumulate them into results. We aggregate those results into a coverage result object, process the thresholds and fail the build if needed, and aggregate results within the same build. Then we save the results, add them to an action, and return the coverage action to show the coverage result. So, how do you implement an adapter? The Coverage API plugin is very easy to implement against. All you need is to convert your coverage reports to our standard format and implement an extension point of the Coverage API plugin. You can see the examples in the Cobertura and LLVM plugins. I will show it quickly for you. So, if you implement a coverage tool for which we provide an abstract layer, it's very simple, like that. All you need is one class and one XSL file. The XSL file converts reports into our standard format, and the class lets the API plugin find it and convert the report with the XSL. So, it's very simple. But you can also implement a plugin for a coverage tool for which we do not provide an abstract layer, like the LLVM plugin. In that case, you register the coverage elements, like that, and also specify their order. Then you implement a document converter which converts the report files to our document object. For example, here it converts JSON report files to our document object. Also, you need to write a very simple parser for that, using the code we provide in the coverage parser class.
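To make the "one class and one XSL file" point concrete, here is a sketch of what such an adapter could look like. This is modeled from memory on the JaCoCo adapter in the plugin repository; the class, annotation, and method names are my best recollection, it depends on Jenkins plugin APIs, and it is illustrative only, not guaranteed to compile against the released plugin.

```
// Sketch only: class and method names reconstructed from memory.
public final class MyToolReportAdapter extends JavaXMLCoverageReportAdapter {

    @DataBoundConstructor
    public MyToolReportAdapter(String path) {
        super(path); // ant-style path to the tool's XML report
    }

    // The bundled XSL stylesheet that converts the tool's report
    // into the Coverage API plugin's standard format.
    @Override
    public String getXSL() {
        return "mytool-to-standard.xsl";
    }

    // Registers the adapter so the Coverage API plugin can discover it.
    @Symbol("myToolAdapter")
    @Extension
    public static final class MyToolReportAdapterDescriptor
            extends CoverageReportAdapterDescriptor<CoverageReportAdapter> {
        public MyToolReportAdapterDescriptor() {
            super("mytool");
        }
    }
}
```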
So, it is very simple, like that. Yeah. So, this is all you need to do if you want to implement your own plugin. In the future, I plan to implement more coverage tools and make the UI extensible. For now, it only allows other plugins to show the specific coverage information, but some coverage tools have more complex information to show, so we will make the UI extensible so that others can add more charts to our plugin. Also, I will improve performance. For now, we parse the XML file into a document and read the whole document in memory, so it is memory-costly. I plan to replace that with SAX parsing in the future. Also, for JSON parsing, I plan to replace it with stream parsing. There are some useful links; you can see more information on our project page. Yeah. So, thank you. Yeah, thanks for the presentation. So, are there any questions? Oh, yeah, we can start from JC's question, I guess. Yeah, for the last question, it doesn't parse it directly... So, we have converted the JSON file into a document object. We actually parse a document object: we first read the JSON file into a JSON object, convert it to a document, and then parse the document. So, it is... Yeah. And about... Oh, you mean branches. We have conditional coverage, which is like branch coverage. Yeah, we have branch data from Cobertura. Yeah. Yeah, so which types of reports do you support now in the plugin? We support JaCoCo, Cobertura, and LLVM coverage. Yeah. So, about Martin's question, I didn't see it. Okay, I will show it in the demo. Yeah. If your report does not have packages, you need to register them, like that: you need to register them and specify them in the parser. Yeah. I didn't use... Why would you not just use... Oh, I used a native report format. Everything is converted to the same format; all other coverage reports are converted to the same format.
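The DOM-to-SAX migration mentioned above can be illustrated with a small self-contained example. This is not the plugin's code; it is a minimal sketch, assuming a JaCoCo-style `<counter type="LINE" missed="..." covered="..."/>` report, showing why SAX keeps memory flat: elements are handled one at a time instead of materializing the whole document tree.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SaxCoverageDemo {

    /** Compute line coverage (%) from a JaCoCo-style report without building a DOM. */
    public static double lineCoverage(String xml) throws Exception {
        final int[] totals = new int[2]; // [missed, covered] for LINE counters

        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String local, String qName, Attributes a) {
                // SAX streams elements one at a time, so the full document
                // is never held in memory -- the point of the planned migration.
                if ("counter".equals(qName) && "LINE".equals(a.getValue("type"))) {
                    totals[0] += Integer.parseInt(a.getValue("missed"));
                    totals[1] += Integer.parseInt(a.getValue("covered"));
                }
            }
        };

        SAXParserFactory.newInstance().newSAXParser().parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)), handler);

        return 100.0 * totals[1] / (totals[0] + totals[1]);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<report>"
                + "<counter type=\"LINE\" missed=\"10\" covered=\"90\"/>"
                + "<counter type=\"BRANCH\" missed=\"5\" covered=\"15\"/>"
                + "</report>";
        System.out.println("line coverage: " + lineCoverage(xml) + "%");
    }
}
```

The same streaming idea applies to the JSON side, where a pull or event-based parser would replace reading the whole file into a JSON object first.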
So, yeah. So we can... This is why the parser is like that: everything ends up in the same format. So, I will write a developer document for it after the meeting. To explain it. Explain it, yeah. Yeah, so there are some data entries which are shared between formats, but Shenyu has also added some flexibility so that you can report custom data points which are specific to particular languages. So, for example, for C++ and Java, you may have different entries in the coverage reports. Yeah, yeah. Okay, fine. I will write some abstract layer for different languages. I thought you already had it. Yeah, we have an abstract layer, but it doesn't support other languages. Or, you can see it there: we support Java in the Java parser, and we also register the elements there. So, we have an abstract layer for the Java language, but we don't have one for other languages. Right. But it's pretty well aligned with what other code coverage plugins do now. I've been converting VHDL and Verilog reports to Java code coverage formats for years, and it worked. Yeah, but... Of course, there could be some optimizations for other languages. So, can you repeat your question? I can't hear you. So, the point is that even with only the Java structure offered in the plugin now, it should be enough for many use cases, even use cases which are way outside the Java world, if needed. Oh, yeah, yeah, okay. Because they have different coverage elements. For example, Java has packages, classes, and methods, but LLVM doesn't have those; it is different from Java. So, we need to parse them according to their type. For example, we need to parse an element if it is a line, and also set the relative path if the element is a file, and do those specific actions. They have different coverage elements, so we need to specify the actions for each case. Is that per language? LLVM is per language. But I think LLVM coverage covers a series of languages, like C++, Objective-C, and Swift. Yeah.
Okay, so what we could do is continue the discussion after the broadcast. So, if anybody wants to discuss particular topics in detail, we can just stay on the call. I'm not sure whether it needs to be recorded in the Jenkins Online Meetup, but we have some slots. So, for example, JC, if you want to join, you can just jump on the call and we will continue the discussion. Okay. So, thanks for your presentations, and thanks a lot for your projects. It's nice to see that all the projects have reached the state where they can be used by the Jenkins community. And yeah, I hope that there will be some work around these projects continuing in the next phases. So, yeah, thanks a lot for the work you did. And of course, also thanks to all the mentors who helped with these projects, and to all the other contributors who participated in meetings and provided feedback. It really helped to get these projects over the finish line. Okay. I guess that's it for today. So, I will stop the broadcast. Thanks to everybody who was watching this YouTube channel. And if you're interested in getting more information about the projects, please go to the jenkins.io website; you can find all the contacts and all the project links there, so you can try the projects on your own. Okay. That's it. Thank you.