Welcome everyone. This is the Jenkins contributor summit; it's the closing session for February 25th, 2021. Delighted that you're here with us. Thanks very much for being part of the summit, thank you for your participation in the tracks, and thanks especially to those who led the tracks: Oleg Nenashev, Alyssa Tong, Kara de la Marck. Thank you very, very much for what you've done; very grateful for it.

First, today's agenda. What we'd like to do is go through each of the tracks and allow each track leader to present a summary of their observations from the track. The notes from the tracks we'd like to gather into a single shared folder so that the detailed notes are still available; right now that folder happens to be sitting in my Google Drive, and we'll likely consolidate there. First, we'll invite Oleg Nenashev to discuss securing the Jenkins delivery pipeline and the experiences and observations there. Then I'll take on containers and platforms, Kara will discuss cloud native, Alyssa will discuss advocacy and events, and then I'll take on documentation. So let's go ahead with Oleg. I think you'll probably prefer to share your own screen for navigation purposes, so I'm going to stop sharing.

That is just one slide. Oh, okay, great.

Okay. Yeah, so we reviewed Jenkins delivery pipelines, in particular plugin pipelines, and also other components like Jenkins libraries. One of the main focuses during the discussion was JEP-229, created by Jesse Glick, which provides continuous delivery for plugins. We reviewed what steps are needed there to get this work over the line, including documentation and support for the final release. There were discussions about the versioning schema and about the impact on the Gradle build flow, which would need a similar implementation if we wanted to adopt this across the entire plugin ecosystem. And the second part was about security scanning tools: which ones would we like to adopt in our pipelines, and how to adopt them.
The action plan for us is finalizing JEP-229. Then the CodeQL scans, which have been evaluated by Daniel; it will be great to finish that and make it available to developers. Same for Snyk: it's available, but we need to configure it so that maintainers can use it if they want. And we also want to evaluate other security tools available on the GitHub Actions marketplace.

So, can you help me, or help us, understand in a little more detail the security scanning tools for the pipeline? The containers track talked about Snyk security-scanning Docker images, but I'm assuming that's not what this is describing. Can you give me a little more detail on that?

So Snyk has multiple flavors; actually, there are multiple products offered by the company. But the core product is basically a security scan and dependency analysis, including licenses, but also vulnerabilities, etc. This is what the Linux Foundation offers as part of LFX. LFX is a platform for projects where there are a dozen different tools available, and one of them is security scanning. Currently they are configured for Jenkins repositories, but we didn't have access, so this is something we need to fix.

In this case, what it's doing is: if, for instance, the Git plugin wants a security scan, will it scan looking for vulnerable dependencies, or is this at some other level? Vulnerable dependencies and source code analysis; that is what is enabled by default. Yeah, so it's got source code analysis, something like an advanced form of FindBugs, where it's actually looking for issues in the sources. Okay. So this is what we tried at work and found not that great at scale; at least false-positive-wise it was quite bad, and at the time you definitely couldn't ignore false positives. And so we abandoned it.
Yeah, that's part of the problem: on LFX it reports something like 80,000 vulnerabilities in transitive dependencies, for example from an old Jenkins core, because when it does the scan, similar to other projects, it cannot make the assumption that it's not just a dependency but a standalone component. We have had the same issue with other scanners; for example, for Dependency-Check I created a set of rules which would exclude the issues reported against Jenkins core and other plugins. That's one advantage of the tooling we're using: it's got quite good exclusion support, and basically now we can generate these exclusions from security advisory metadata. But the result is still a huge number of false positives, because it just has no idea about the Jenkins packaging format. The same applies to many other tools.

Okay, so the issue there then is that a transitive dependency in a Jenkins plugin may in fact not be included, or a version is included that's much newer than the version spec described in the transitive dependency.

So Jenkins abuses the Maven packaging system. Basically, what happens is that in pom.xml you declare a dependency on a plugin as a JAR. Then maven-hpi-plugin does some Maven tricks to determine whether it's a JAR or a plugin, and packages accordingly. For a security tool which is not aware of maven-hpi-plugin, there is no way to tell whether it's a real JAR dependency or a plugin dependency. For Jenkins core it's a bit different, because the dependency is provided, but the issue is still there. Or maybe it's not even provided; maybe it's also a declared dependency. Got it, thank you.

So, on a different topic: the CodeQL scans. I know that I was one of the early experimenters with those and found them quite positive, because in my case they knew things specifically about the Jenkins code base, and about problems that are not generic but are rather very much,
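Oleg's point about generating exclusions from security advisory metadata can be sketched roughly as follows; the data shapes and function names here are hypothetical illustrations, not the actual Dependency-Check rule format or the real Jenkins advisory schema:

```python
# Hypothetical sketch: suppress scanner findings that are already covered
# by published security advisories, mimicking the generated exclusions
# Oleg describes for Dependency-Check-style tools.

def generate_exclusions(advisories):
    """Build a set of (artifact, cve) pairs already handled by advisories,
    so they can be suppressed in raw scanner output."""
    return {(a["artifact"], cve) for a in advisories for cve in a["cves"]}

def filter_findings(findings, exclusions):
    """Keep only findings that do not match a known exclusion."""
    return [f for f in findings
            if (f["artifact"], f["cve"]) not in exclusions]

advisories = [{"artifact": "jenkins-core", "cves": ["CVE-2020-0001"]}]
findings = [
    {"artifact": "jenkins-core", "cve": "CVE-2020-0001"},  # already handled
    {"artifact": "git-plugin", "cve": "CVE-2021-9999"},    # still relevant
]
remaining = filter_findings(findings, generate_exclusions(advisories))
```

The real problem described in the discussion, plugin dependencies that look like JARs to the scanner, is what makes building that advisory-to-exclusion mapping hard in practice.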
"oh, this is a type of problem we see in Jenkins." So what does it mean to widely roll those out? Does that mean they'll be more available, and will plugin maintainers still need to choose to subscribe? Or is the intention that it would be turned on for everybody, and suddenly we've got a lot of noise? What's the idea there?

So this is something we discussed, and if I recall correctly, the intention is rather to enable it globally, but Daniel may correct me if I'm wrong.

I think the problem is that, for the default way this is supposed to work through GitHub Actions and GitHub status checks, the configuration needs to be in each individual repository. So even once I make the Jenkins-specific CodeQL warnings, or detectors, public, they would still need to be configured on a per-repository basis, as would any security scan that integrates with GitHub code scanning. The current approach, where it's run on project infrastructure and I just update the default branches' findings periodically, does not have this limitation, but obviously isn't as nice as having statuses integrated in pull requests and regularly scanning on commit. So I'm not sure there's a great solution here, other than perhaps programmatically creating pull requests that add the action, and basically advertising it that way.

Oh, okay, so I had failed to comprehend that I could actually have CodeQL scans enabled on my pull requests in the future, where a PR comes in and CodeQL will scan it before I even do the code review.

So yes; I would like to distinguish between CodeQL in general, which is just yet another static code analysis tool you may know from LGTM.com, and the Jenkins security scan, where I basically use CodeQL, rip out everything that is about general Java APIs, and add a bunch of Jenkins stuff in there. You can already have CodeQL, because it's public and provided by GitHub: you add an action and you have it.
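For context on the per-repository configuration Daniel mentions: enabling CodeQL via GitHub Actions means each repository carries its own workflow file, so a bot that "programmatically creates pull requests that add the action" would essentially be generating and committing a file like the one below. This is an illustrative Python sketch; the helper name and template are mine, not project tooling, and while the action names follow GitHub's public codeql-action, the exact inputs the project would use may differ:

```python
# Sketch of the workflow file a bot could propose via pull request to a
# plugin repository to enable CodeQL scanning (hypothetical helper).

WORKFLOW_TEMPLATE = """\
name: CodeQL
on: [push, pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: github/codeql-action/init@v1
        with:
          languages: {languages}
      - uses: github/codeql-action/analyze@v1
"""

def codeql_workflow(languages="java"):
    """Render the workflow content the bot would commit to
    .github/workflows/ in each repository."""
    return WORKFLOW_TEMPLATE.format(languages=languages)
```

The bot itself would then open a branch, write this content, and create the pull request through the GitHub API; that part is omitted here.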
This is the future I also envision for the Jenkins security scan based on CodeQL, but we're not there yet, because so far the rules are sort of in private beta, as I mentioned before.

Okay, so they'll eventually transition from private beta to more widely available, and then likely globally available to all plugins, to all Jenkins repositories. And if I enable it myself in my plugin, do I then also get PR reviews included? Or will PR reviews be included only when it's deployed globally, or do we not know yet?

I think it integrates with pull requests, because it would use the same APIs as the existing GitHub security scans use, and that's a big selling point for those, but I have not yet tried to integrate it to that level. Great, thank you.

Hey, other questions from others? All right, thanks very much. So I think the next step then is that some portion of this, next week or tomorrow-ish, gets conceptualized into the Jenkins roadmap. Oleg, you've sort of been our lead on roadmap; how would you see that working?

We just go through the notes and create pull requests with roadmap items. If track leaders want to create their own pull requests, that's also a way to do it. So you'd be okay if I created a pull request just from trying to read the notes, or do you want to be the one to create the pull requests for these ideas? You're welcome to do that. Okay, the security part is on me. Or we can just split the task later offline. Okay. Other organizers? Great, excellent, thanks. Anything else on this topic before we go forward?

So, mine is containers and platforms. We had a discussion, and the key results here were: first, we want to continue the roadmap work that we had actually outlined already in 2020. Our roadmap work included delivering, in the future, multi-platform Docker images with support for arm64, System 390 mainframes, and PowerPC 64. Now, the motivation for the System 390 and PowerPC platforms may not be obvious to this group.
The crucial motivation is that IBM is investing engineering talent in helping us improve our Docker images, and it's a good, healthy change to say: let's find a way to do multi-platform. We'll get arm64 as a result of that, and the System 390 mainframe and PowerPC platforms. So we're very grateful to Jim Crowley for his work with the platform SIG, and we're going to continue that.

We've had additional challenges with governance of our Docker images. In particular, it's not clear right now which of the images are well maintained and which ones are not as well maintained. So the idea we discussed was that we'd like to submit a Jenkins Enhancement Proposal, a JEP, which proposes the concept of a Docker image maintainer, and that if an image does not have a maintainer it would be marked as "adopt this image". The idea being that we want to encourage community contributors: by seeing that an image is up for adoption, they might be drawn to get involved. As an example of the complexity here: I'm very interested in the Debian-based image and somewhat interested in the Alpine image, so those two get my attention, but I don't pay any attention to the CentOS image. And yet we know there are people who value that image very much, but we've not communicated: hey, we need additional maintainers for that image. The idea is: let's use the concept of code owners from GitHub, and use that to identify maintainers for specific images. If an image is unmaintained, we need to find a way to flag it, probably in the Docker Hub README, that it is up for adoption.

We also found, and thanks to Daniel for highlighting this one, that our Docker image build process is just too slow. We've got solid agreement that we need to accelerate the speed at which we can build Docker images; that may involve optimizations in build technique, and it may involve reducing the number of images we build.
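The CODEOWNERS idea described above amounts to a path-to-maintainers mapping that tooling can query to find images with nobody listed. A small sketch of that lookup, assuming a hypothetical per-image directory layout (the paths, names, and helper functions are illustrative, not the real repository structure):

```python
# Sketch: read a CODEOWNERS-style file and flag image directories with
# no listed maintainer as candidates for "adopt this image".

def parse_codeowners(text):
    """Parse a minimal CODEOWNERS file into {path_pattern: [owners]}.
    Ignores blank lines and comments; uses exact patterns only, not
    full glob semantics, for illustration."""
    owners = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        pattern, *names = line.split()
        owners[pattern] = names
    return owners

def unmaintained_images(owners, image_paths):
    """Image paths with no CODEOWNERS entry are up for adoption."""
    return [p for p in image_paths if p not in owners]

codeowners = """
# Hypothetical per-image ownership
/debian/   @alice
/alpine/   @alice @bob
"""
owners = parse_codeowners(codeowners)
orphans = unmaintained_images(owners, ["/debian/", "/alpine/", "/centos/"])
```

A report like `orphans` is what could drive the "up for adoption" badge in the Docker Hub README.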
We discussed all sorts of things like that, but we think accelerating the Docker image build process is good for the health of the project and good for image maintenance. The additional item we've added is that we would like to regularly scan Docker images for issues. I've been running a prototype with Snyk, and the number of image issues that they report is somewhat dismaying, in that the base Debian image has a lot of issues that they report as unresolved with no mitigation. So it's going to be a little complicated for us to use that and find a way to handle the volume; I think it's somewhat similar to the scanning problem mentioned in the securing-the-delivery-pipeline topic.

And then in terms of cloud, we see that we're continuing to use the Helm charts for the Jenkins controller, and we know that we're using more and more Helm charts for Jenkins infrastructure; we think those continue. And we look forward to the cloud native work that's going on around Kubernetes operators. Any questions on those topics?

How would the image ownership be exposed to administrators or users? For plugins it's directly visible in Jenkins in the plugin manager, as well as on the plugin site. But if I'm a normal Jenkins administrator and I do a docker pull or docker run, how would I be informed that the image is currently unmaintained?

That one's not clear to us. One of the ideas we discussed, but haven't done any implementation or proof of concept for: should we consider adding what I think is called an administrative monitor, a warning that hey, you're running a Docker image, and we've checked and seen that it's marked as up for adoption? We did think that we could do badges on the READMEs pretty easily. What we weren't sure of is whether the administrative monitor idea was worth considering. I'm open to suggestions there; do you have recommendations?
Just to add something: basically, we also discussed introducing a concept where we would just stop building images for a specific distribution or for a specific use case. So maybe if we realize that we don't have a maintainer for specific images for three months, and we are not merging PRs, maybe we can just say we stop providing that image. Because there is currently no way to get notified whether an image is maintained or not maintained anymore, except by not being able to download it.

I mean, you pull the latest tag anyway, so people would just end up complaining that their image variant doesn't exist yet, and we would get dozens and dozens of people reporting it, and then we're like, well, nobody's maintaining it, so we're no longer publishing it, and they'd be like, well, thank you for not telling us. I don't think it's a particularly practical approach. I like the idea of the admin monitor. I've thought about that recently in the context of: it would be useful to be able to tell Jenkins how it's being run, through a startup parameter. I'm just integrating that into the default packaging stuff, and the images could say, well, I'm this kind of Docker image variant. So that could be combined with that. And the other thing is, I think if I run the image, there's probably a startup shell script of some sort that could also call out to a well-known URL in the Jenkins project and see whether the image is still actively maintained, and if not, show a giant warning before Jenkins actually starts up, to say: you're probably not going to get image security updates.

There is also a third thing we mentioned that could be used for that particular topic: using image labels. At the moment in time when we consider a given image to be deprecated, we override the tag: we push exactly the same contents, except that a label that mentions the current support status from the community is switched from supported to deprecated, for instance.
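Daniel's startup-script idea boils down to: fetch a support-status document from a well-known project URL and warn before Jenkins starts if the variant is orphaned. A sketch of just the decision logic, with the fetched document passed in directly; the document shape, statuses, and function name are assumptions, not an existing Jenkins endpoint:

```python
# Sketch: decide whether an image entrypoint should print a big warning
# before starting Jenkins. In a real entrypoint the status document
# would be fetched over HTTPS from a well-known project URL; here it is
# passed in directly so the logic stays self-contained.

def startup_warning(variant, status_doc):
    """Return a warning string if this image variant is no longer
    actively maintained, else None."""
    status = status_doc.get(variant, "unknown")
    if status in ("deprecated", "unmaintained", "unknown"):
        return (f"WARNING: the '{variant}' image is '{status}'; "
                "you will probably not receive image security updates.")
    return None

status_doc = {"debian": "supported", "centos": "unmaintained"}
```

Treating a missing entry as "unknown" errs on the side of warning, which matches the goal of not silently leaving users on abandoned images.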
That could also help, especially in environments where you have, let's say, an admission controller that validates the metadata of the images. It could trigger a warning or help the administrator, because if we define these rules, they can do whatever check they need just by checking the label. That could be a third, complementary option.

And are those labels from the label schema that we were discussing? Damien, can you describe that a little further?

Yes, there are label schemas; Gareth is more the specialist on that part, but there are two standard label schemas, the OCI one and an older one. They provide a set of recommended labels, like the URL of the Git repository where the source used to build a given image is located, the date and time, eventually the commit of that specific repository, and some more. Here I'm not speaking about a standard label, but a custom label we would add to the Docker images that we provide as a community. The label could be something like io.jenkins.communitysupport equals a string, and the string could be either supported, deprecated, unsupported, or security alerts. So the idea would be to only update the label of a given image when we know there is an issue, in particular a security issue. It won't fix all the issues, but it's one of the numerous ways we have to communicate information, in addition to what you already said.

Yeah, but you would also need to update the image to check the labels; it would need to self-check and print some warnings. That is correct. Preferably even in the Jenkins web UI somehow, for example an administrative monitor which checks this metadata. So it's possible, but it's only possible for new images, not for historical ones.

Oh, you can; we already override images when there is an upstream update on the parent image. Tags are not immutable on Docker, and they should not be considered immutable. It's really important to advertise that tags can be overridden for important updates.
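An admission-controller-style check on the proposed label could be as simple as the following sketch; the label key mirrors the io.jenkins.communitysupport example from the discussion, while the function name and allowed set are hypothetical:

```python
# Sketch: admission-controller-style policy over image labels, letting
# cluster rules act on the community support status discussed above.

SUPPORT_LABEL = "io.jenkins.communitysupport"
ALLOWED = {"supported"}

def admit_image(labels):
    """Admission decision: allow only images whose community-support
    label is 'supported'; return (allowed, status) so the caller can
    warn or reject with a reason."""
    status = labels.get(SUPPORT_LABEL, "missing")
    return (status in ALLOWED, status)
```

A real admission controller would read these labels from the image manifest; the point is that once the label exists, the policy itself is trivial.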
And so, since we can override a tag, we can override the label. But that's some machinery to build. You're right: first with the new images. But still, there is an API on Docker Hub, so we could script listing at least the old existing tags and updating them, given a JSON file that maps each tag to its support status. This is something that can technically be done. It's a bit of time to spend, but it's not complex; it's only a matter of time. Any other comments or concerns there?

All right, so I have one more slide: things that are not on the platforms and containers 2021 list. I'm intentionally noting here that we've got a roadmap item for Java 15+ support. It was not discussed, and my recommendation is that we do not plan during 2021 to approach this, even if Java 17 releases. I think we have other things that are higher priority and higher value to the Jenkins project than doing Java 17 support. Now, I'm open to your comments here, because I fear this may be controversial; people may say no, that's not acceptable if the new LTS for Java is Java 17 and is available. Tim, did you have a question or concern there?

I was just going to say, to my knowledge it works. The Jenkins core tests pass on Java 15. There are possibly edge cases that don't work, but Pipeline works. We're starting to see people with higher-than-Java-11 versions starting to appear in the usage stats now. When JNA was updated in 2.274, that made Pipeline start working, and I ran the unit tests in Jenkins core on Java 11 through 15 and made some minor adjustments, test-only fixes, but it all passes. It's just not regularly tested.

Oh, that's wonderful. That's great news; thank you for that result. Oleg, did you have a comment or concern there?

No, I think that if we can at least test it in our pipelines, it's something we should do. And once we see that it actually works, I mean Java 17 LTS, we can just mark it as preview support.
Similarly to how we did with Java 11. Good. Okay. Now, I have not been following the Java 15-and-beyond release schedule. Are there those on the call who know the estimated timeline for Java 17? Is it likely in 2021, or is it more likely 2022? Autumn of 2021. Okay, so we likely will see it this calendar year. All right. So for now, the Java 15+ roadmap item will stay in the column where it is right now, which is "future", and if things work out such that we could move it from there to "in progress", that would be great as well. Okay. Any other questions here before we go on to Kara?

All right. So at the cloud native SIG track, we discussed some of the initiatives that we're already working on within the Cloud Native SIG, which you're all welcome to join; it meets Fridays at 14:00 UTC. It's quite exciting, because we're really looking at how to refine and shape a Jenkins cloud native future, so there are a lot of ideas, which is nice. I think you can go to the next slide.

One of the things I really liked about this track is that we had people who aren't necessarily able to make it to the regular Friday meeting, and so we had different approaches, different concerns, and different suggestions and ideas of how to move forward, which I thought was really cool. So we looked back on the existing roadmap, and it became clear that some of our users struggle when there are restarts of Jenkins; basically, when it fails, the restart times, and there were different ways of approaching how to improve this. Pluggable storage was definitely brought up, and this is already on the roadmap. But if you look at the cloud native pluggable storage page on jenkins.io, there's actually already a lot of work on this, and the discussion has been ongoing for some time.
So we went over what had been discussed, what PoCs were already in existence, and other ways of approaching the problem, or at least of improving, say, restart time. One of the ideas Gareth has been looking at was improving how the webhooks are handled, but I'm going to let Gareth talk more about that, because he's done an awesome PoC on it. So that was an interesting approach to that problem; I'm relatively new to this particular problem of pluggable storage in Jenkins, so it was a really interesting discussion for me.

We also discussed some of the initiatives we're working on in the Cloud Native SIG around integrations with Pipeline, so integrations with Tekton. We were looking at the Tekton client plugin, and also the cloud events plugin. The cloud events discussion was quite interesting, because it really brings together a way of having greater interoperability. Also, there's a lot of discussion about how Tekton is doing cloud events. We briefly discussed, and this is not particularly a Jenkins project but really interesting, the Four Keys open source project, which takes cloud events from different CI/CD tools and creates a really nice dashboard for you showing the key DORA metrics. That's a very nice use case of how you would use cloud events. I will likely bring that forward into GSoC: if we have students working on the cloud events plugin, they can then use this project for testing some of their work and seeing immediately how their work is beneficial, so that was quite cool. And then, yeah, we discussed more about the data in cloud events. Okay, I will let Gareth speak, especially about his work. Oh, yeah.
Yeah, so one of the frustrations that was coming up was having Jenkins lose all of its webhook events when it's being restarted, and a restart sort of takes somewhere between five and ten minutes; that's quite a common thing. You're going to lose information, especially if your Jenkins is quite busy: builds aren't going to trigger, and then you have to go and manually kick them off again, so it was just a bit annoying. So we felt we could probably solve this quite easily by offloading to a sort of webhook handler, or just a sort of store-and-forward webhook relay, which would work with Kubernetes quite well. So I created a little PoC to see if it would work, and it does work quite nicely. I'll pop the link to the PoC and the demo in the chat if people are interested.

It's kind of like a back-off strategy to relay webhooks: if it can't connect, it will just keep retrying, and the retry interval gets longer and longer, up until a max time that you can set. Hopefully Jenkins can recover in that period of time, but we should be able to add a series of strategies in there quite easily; one of them we were thinking of is to write the webhooks to a CRD and just read from those CRDs. You can run multiple instances of the webhook thing. It's written in Golang, it's very small, with very low memory and CPU overhead, so you can just pop it in the same namespace as Jenkins; it exposes its own ingress. Yeah.

And how will that interact with the automatic webhook registration that happens in Jenkins already? Would it somehow be magically included in that automatic registration, or do I have to change my webhooks to point at this alternate endpoint?

So that is an outstanding question that came up when we were discussing this, because we didn't know how that worked.
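The back-off behavior Gareth describes, retries whose delay grows until a configurable cap, can be sketched like this. The real PoC is written in Go; this Python version only illustrates the schedule logic, and all names are mine:

```python
# Sketch: delays between webhook delivery retries in a store-and-forward
# relay -- exponential growth capped at a configurable maximum, as in
# the back-off strategy described above.

def backoff_schedule(base=1.0, factor=2.0, max_delay=300.0, attempts=10):
    """Return the list of delays (seconds) before each retry attempt."""
    delays = []
    delay = base
    for _ in range(attempts):
        delays.append(min(delay, max_delay))
        delay *= factor
    return delays
```

With base=1, factor=2, and a 60-second cap, the relay would wait 1, 2, 4, 8, 16, 32 seconds and then hold at 60, giving Jenkins its five-to-ten-minute restart window to recover before events are dropped.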
There's an advanced option, if you are using that, which I think a lot of places don't. But if you are using it, if you click advanced, there's an alternate hostname for Jenkins. So you can put an alternate hostname in and then click re-register all hooks. I'm not sure if it will delete your old ones if your hostname changes, but... Okay, so there is already a facility to support this kind of pattern; I would just have to use it. Thanks. It's mostly used for split DNS, where internally you've got one hostname and externally something different that's coming in. Any other questions for Gareth, or for Kara for that matter?

All right, next up then: advocacy and events. Let's see, Alyssa, you are here. Great.

Thanks, Mark. All right, hi everybody. So, in an effort to foster community relations and grow the Jenkins brand awareness, we would like to propose the following ideas to be added to the roadmap. The first one is education outreach. The intent here is to establish Jenkins awareness and usage while a person is still in the college system. So this is kind of like the mindset of getting the horse before the cart, if you will. The hope here is that these students will become experienced Jenkins users and eventually, down the road, want to contribute to the project. A similar case to the GSoC student that Kara mentioned the other day, where the previous year he was a candidate and then the following year he's coming back to become, I believe, a mentor or something like that, Kara. But that is the intention: to get more awareness of Jenkins, because we know that once the students graduate and go into the workforce, most of them will most likely be using Jenkins. So that's that.

For the DevOps World community track: much like previous years, CloudBees has given us a track, which will consist of 24 sessions, and we can do whatever we want with it.
I thought it would be a good way for us to strategize how we want to shape this track: how much or how little do we want Jenkins content? Is there a specific message that we want to amplify? This is our chance to do it, because DevOps World is our biggest event of the year. There are other things that we want to cover within the community; how do we want this track to look? So I'm hoping that we can come together and identify or strategize this. And also, the people that are signing up for this will be reviewing proposed talk submissions through the CFP.

Diversity in the Jenkins community: the goal here is to establish diversity and inclusion. What we typically are seeing is that the people currently involved are pretty much the same usual suspects, similar in nationality, gender, age, etc. So this begs the question: what can we do to bring a wider, broader range of people to contribute to the project? And we know there's just so much to do, and so many ways to participate. So how do we get more people, diversity, inclusion; make it more welcoming; make it easier for people to contribute to the project; eliminate the barrier to entry? So far, the plan here is to increase the outreach efforts to bring more people to Jenkins; reinforce Google Summer of Code, which I think is doing excellently year by year; mentor the She Code Africa contribution event in April; and highlight the people of the Jenkins project. The thought here is that there are so many people that are doing great things, that are contributing in great ways, but we're not really highlighting them. We want to make sure that they feel special and that we give them a platform to showcase what they do. So we really appreciate the contributions and that effort; I hope that we will continue to be able to grow and foster the community in the way that we want, which is diversity and inclusion.
The last item that we'd like to propose adding to the roadmap is to encourage new members to advocate for Jenkins, and to get more people to participate in the SIGs. Again, we know that we get the same people contributing to the project; so how do we get more people to help us and participate? Any questions? Thanks, everybody.

Maybe one question: for 2021, would we like to organize any specific hackathons or community events?

Good point. Yes, and I think we absolutely should. It feels like we've got Hacktoberfest, which we can predict will arrive. We had great results last year with the UI/UX hackfest; it was, what was that, May, if I remember right, that we did it. I don't know if it should be May, or April in conjunction with She Code Africa; if we did a hackfest for the month of May or the month of April, it might be a great fit as a way to do greater outreach. I think that we have a couple of other opportunities: we've got cdCon in June as well, we want to do something there, and of course DevOps World in September.

Yeah, at cdCon. It's just an idea for the moment, but actually it would be so fun; I wouldn't want to exclude other projects, but it would be really fun to do a hackathon around the time of cdCon with Jenkins and Tekton, because there was discussion in the cloud native track. Basically, I realized with the Tekton client plugin, one of the things we're trying to do with Jenkins is actually to run Tekton pipelines, at least partially. And there are initiatives with Tekton, or this is under discussion, to run Jenkins jobs from Tekton. I just think this is hilarious and really interesting; it would be really fun to put more people together to discuss these initiatives. So I think it would be awesome if we could somehow arrange a sort of hackathon, Jenkins and Tekton; probably around cdCon would make the most sense. Good idea. Okay.
So, Alyssa, I'm going to assume that you'll take notes here; sorry that I'm not taking notes live, but I'm sharing my screen. Okay. Yep, that's perfect, and we've got the recording if we need to refer back to it. Yes.

I would suggest that if you're interested in outreach, go to places where that community predominantly exists. Say, Detroit, places like that, as opposed to all the same things, the same conferences. So you said places; did you say Detroit? Can you elaborate a little further? Yeah, if the idea of the outreach efforts is to bring people to Jenkins, and you want diversity, look for them where they are. I'm based in South Florida, so there are a lot of Latinos and Hispanics here; hang out with the universities in that area. Same thing in Detroit, right? New Orleans. Right, that's a great idea. Thank you. Thanks very much. Yeah, exactly.

Don't forget the non-university students. I don't know about the US, because I'm not based there, but in Europe there is a big split: there are universities, the high-level studies that only some privileged people can access because of how the system works, but we also have a lot of people doing night courses or alternative schools that might not give you a master of engineering, but rather professional development training. Still, there are people changing careers, coming from other, unexpected backgrounds, and that's also a good source of people who are willing to contribute and learn, coming from different minorities in terms of gender and different backgrounds. So don't forget them; most of the time we think about universities because students are a good, let's say, garden to grow with, but they are not the only one, and if we really want to push the project forward, this is one of the ways of doing it. Not the only or the main way, but an interesting one to consider.

Kara, you've been involved in building a community in the London area.
Any insights from your experience there that might help guide us? Yes, I think Damien is entirely correct; thank you for saying that, Damien, I think that's wonderful. Certainly, I've been very involved with codebar; I've been involved in the London area because I live in London, but codebar is actually a global organization, so we have a lot of chapters around the world in different countries and cities. That is an interesting place to start to engage, and I would say in almost any large city there are a lot of tech meetup groups, like a ton, and people really are always trying to learn, so it is good to reach out. It doesn't just have to be a Jenkins outreach group or a CI/CD outreach group; certainly consider, on a broader level, just developers who are trying to learn, and bring Jenkins to them. That is a great idea. It prompts me that I may need to look locally here: you make me realize that I rarely propose to present to the local people nearby, and yet they're probably as interested as the others. Good, thanks very much. Any other feedback for Alyssa? Okay. Next topic, then: documentation. Documentation was a reminder for me of how worldwide our organization is. We ran two tracks intentionally, the east track and the west track, because the west track fell in the middle of the night for the east track, and it worked quite well, so thanks very much to the participants there. Yep, we want to continue the 2020 roadmap work. I misstated in our opening session that 50% of plugins had migrated their docs; I was wrong, it was one third. Sorry that I am apparently math challenged. We've had over 600 plugins migrate their documentation, and that's great, but we have over 1800, so we've still got quite a way to go on plugin doc migration. Likewise, the wiki documentation migration will continue.
The pace is relatively slow there because the work requires skill in assessing the accuracy of wiki documentation that in many cases could be as much as 10 or 12 years old. We also want to continue terminology cleanup: there are deprecated terms used in many places in user-visible strings that we'd like to replace, and that's actually a good candidate as a project for the She Code Africa effort in April; terminology cleanup would be a good fit for that. Likewise, cleanup of the screenshots on the jenkins.io site: right now we've got screenshots based on Jenkins versions prior to 2.277.1, so before the tables-to-divs introduction. Tables-to-divs makes a much nicer UI, but it means we need to update the screenshots. For the outreach efforts, we'd like to mentor a 2021 Google Season of Docs writer. That's a little challenging because the application period closes in less than one month, so the docs SIG will be working on getting our application in as a project. There are some financial complexities there; Google has changed their methodology a little, and we have to work through it with the board to see if we understand how to do it successfully. Last year's Google Season of Docs writer highlighted to us that the onboarding experience could use improving, so we're going to work on that, and we have several ideas for ways to make the experience better so that people can more readily contribute to documentation. The next piece is improving our documentation sites. We've already got a running prototype, thanks sincerely to Gavin Mogan, to Olivier Vernin, and to Jonathan Morais from Brazil for their suggestions and the implementation. Right now, if you look at plugins.jenkins.io, you'll see a little icon that says "Search by Algolia".
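For whoever picks up that terminology cleanup, the core of such a scan is quite small. Here is a minimal Python sketch; the term-to-replacement map and the function name are illustrative rather than the project's actual tooling, and the real replacement list should come from the project's own terminology guidelines:

```python
import re

# Illustrative mapping of deprecated terms to suggested replacements;
# the authoritative list belongs to the project's terminology guidance.
REPLACEMENTS = {
    "slave": "agent",
    "master": "controller",
    "whitelist": "allowlist",
    "blacklist": "denylist",
}

def find_deprecated_terms(text):
    """Return (term, suggestion) pairs for deprecated terms found in a
    user-visible string, matching whole words case-insensitively."""
    hits = []
    for term, suggestion in REPLACEMENTS.items():
        if re.search(rf"\b{term}\b", text, re.IGNORECASE):
            hits.append((term, suggestion))
    return hits
```

Run over user-visible resource files (for example `Messages.properties` and Jelly views) this would produce a worklist of strings for contributors to fix.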
What we're getting there: where we used to search with an embedded version of Elasticsearch inside the plugin site API, Gavin in the last few days has switched it over to use an Algolia-based search engine that provides the search results for us. The improvement is nice; it's still being worked on, there are still some surprises, some things we're having to work out, but it looks very positive. The intent is that Olivier and Gavin will work together to register us for the Algolia open source program, and we'll use the plugin site as an experiment to prepare for the eventual use of that same indexing technology on www.jenkins.io. Next, or any questions so far? I should have asked for questions, to see if anybody had concerns or issues they wanted to raise. Okay, next then: we've realized that we've got an awful lot of documentation, a large body of documentation in varying stages; the wiki in particular has collected documentation and knowledge from people over the course of 10 or more years. However, we don't have a systematic answer to the question: where do we put that information when we transform it? And we realized as a docs SIG that we need to decide where it is going to go in the future, even if that arrival is still a year or two away. We need to know that so that we understand whether we have the right structure for our documentation. So what we're going to do over the course of the next several months, in the office hours for the documentation special interest group, is look at where each piece of this content would go. Jonathan Morais had started the process for us, and the 1000-plus pages that we have were initially reviewed; we'll do a more detailed review and work through that. Now, one more piece of that: we currently have a Confluence wiki running read-only. That wiki needs to migrate, because we'd rather not have to upgrade the Confluence server.
It's read-only; it's not providing much help for users in terms of the wiki facilities, it's just acting like a web server. One of the ideas discussed was: what if we batch-transform those wiki pages to AsciiDoc, host them inside a GitHub repository, and then redirect the wiki to that site? The idea is that would let us incrementally, stepwise, use those AsciiDoc files eventually as the basis for content on www.jenkins.io. They're not, in general, immediately usable, because there are varying levels of quality and accuracy; sometimes it's just plain flawed information that we have to filter out before it goes to jenkins.io. Any questions or concerns there? All right. Oh, go ahead. Yeah: is all this documentation easily searchable? Everybody goes to Google all the time to search for it. Your observation is correct: right now, all of the content is found through Google searches. There isn't a site search for www.jenkins.io at all. We envision that being there, but right now the solution for typical users is Google search, or Bing search, or your search engine of choice. Any other comments? Just a question of timing: once we tear down the entire wiki, we will be able to shut down the wiki exporter service, or maybe replace it with something else. But yeah. So now, I was assuming that we would, oh no, I guess we wouldn't, you're right, the wiki exporter, even for plugin documentation. This would resolve that as well, right, because any remaining plugin documentation on the Jenkins wiki would already have been converted to AsciiDoc, but stored in this separate repository, awaiting them. So, well, it's not a big achievement, but it's still something to keep in mind.
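To make the batch-transform idea concrete, one plausible shape, assuming pandoc for the HTML-to-AsciiDoc step (pandoc does support `--to=asciidoc`), is to build one conversion command per exported page plus a redirect entry pointing at the new home. Everything here, paths, repository URL, and rule format, is illustrative, not the actual migration tooling:

```python
from pathlib import Path

def pandoc_command(html_page: Path, out_dir: Path) -> list:
    """Build a pandoc invocation converting one exported Confluence HTML
    page into an AsciiDoc file of the same base name."""
    target = out_dir / html_page.with_suffix(".adoc").name
    return ["pandoc", "--from=html", "--to=asciidoc",
            "--output", str(target), str(html_page)]

def redirect_rule(page_slug: str, repo_url: str) -> str:
    """An illustrative redirect mapping from the old wiki URL to the
    GitHub-hosted AsciiDoc copy; the real redirects would live wherever
    the wiki's front-end proxy is configured."""
    return f"/display/JENKINS/{page_slug} -> {repo_url}/{page_slug}.adoc"
```

The real migration would still need the quality filtering mentioned above before any of these pages could feed www.jenkins.io.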
But yeah, finally stopping Confluence would be a great thing, especially for being able not only to move documentation but actually to remove obsolete documentation, because many pages are just not relevant anymore. And even if we do such a migration, maybe it's worth reviewing pages and removing the ones we do not want to keep. Right. Okay. Other comments or concerns? That covered it. Now, Jesse, I saw that you had arrived. I am happy to go back and talk about securing the delivery pipeline again if there are concerns you had, or topics you wanted to be sure to address. I don't think so; I actually have to drop again. Right, okay. I'm glad it got talked about. All right. Anyone else: are there concerns or issues, things that you would like to be sure we discuss here before we close? Does anyone want to say "Blue Ocean in Jenkins classic"? Oh, oh boy, that's a tease; yes, absolutely, show it, Tim, that's great. We have the Blue Ocean companion to Jenkins classic. Yeah, go for it, Tim; this will be an archive point where we now show a fun demo. Let's see, I probably have to allow you to share. Now I can see. Oh good. Okay, cool. So this was based on a hack project that Cliff Meyers did about four months ago: he lifted the graph component out to a standalone view on the job. It's called the Pipeline Graph View plugin. I think there's a screenshot in here, but it's all based off of this. I've created three sample jobs with three different sorts of pipelines. We've got a very simple pipeline that's just got a few stages, so you'll see here a completely plain, standard-looking pipeline. You can go in here onto a build. I've installed the Blue Ocean plugin just so I can compare it back to the regular one; here in Pipeline Graph you've got the whole Blue Ocean graph. You go to a failed build with a few different varieties, and you'll see.
There's a variety of different stages: unstable, success, aborted, and a full-on failure. And if you go to the pipeline graph, you'll see that's all working. And then a much more complicated one with parallel stages and nested branches; yeah, this is declarative as well. So all this sort of regular parallel, nested inside of parallel, and you see it's mostly working. The only bit that's not working is that I haven't quite managed to identify the second nested stage properly; but apart from that, skipped parallel, parallel nested stages, labels on the branch all work. And this is a regular problem in Blue Ocean: if you've ever tried to use matrix, it's unreadable without hovering over it. It's probably quite an easy fix, but another problem with Blue Ocean is that this graph actually lives in a different repo, in a different package. So even if someone were to try and fix this, they'd have to change another repo, get someone to release that package, and then get it into Blue Ocean: a very high barrier to entry just to work on the code. So Cliff lifted this out of that package. He converted it to TypeScript from JavaScript, and from JSX to TSX; very minimal changes, just ripping out stuff that's not needed. I've lifted some very minimal code from Blue Ocean: the pipeline node graph visitor, just to recreate the graph, plus the minimum supporting code needed by the node graph visitor. And then there's a simple action which pulls in the icon and has a web method to load the graph. So it's just a GET request which loads in JSON, and then some very gnarly code to convert the pipeline node graph visitor's output: the response has a list of parents for each node, but the graph expects a list of children, so you have to reverse the graph and rejoin it, and it's pretty nasty code.
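That parents-to-children conversion is essentially an edge reversal on the stage graph. A small Python sketch of the idea (the plugin itself does this in Java/TypeScript; the field names here are illustrative):

```python
def reverse_graph(stages):
    """Convert stages keyed by parent lists into a children mapping.

    `stages` is a list of {"id": ..., "parents": [...]} dicts, roughly the
    shape a node-graph visitor might emit; the result maps each stage id to
    the ids of its children, i.e. every edge is reversed.
    """
    children = {stage["id"]: [] for stage in stages}
    for stage in stages:
        for parent in stage["parents"]:
            children[parent].append(stage["id"])
    return children
```

In practice the nastiness comes from details this sketch ignores, such as synthetic start/end nodes and parallel branch grouping.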
But you can see: it's probably very hacky, but it mostly works. And then there are a couple of sample pipelines in the tests, and some tests to make sure it's all building properly, asserting on the generated graph. Yeah, I suppose you could have something at the job level; when you use the Pipeline Stage View plugin, for example, it shows at the job level, so you can see multiple builds, or at least see the current build. Maybe we could have something at the job level that just embeds the graph of the last build, or something like that. Yeah, it's quite trivial to implement. It would probably be a separate component, or just a simplified graph, kind of like the existing Pipeline Stage View thing, but not as complex and not as heavy; that doesn't perform very well at all, it's far too heavy. Just a much nicer one, probably without log support, which is a lot lighter. So is there a solution it still uses? No, it's just got its own. So if you go here, here, and then here: it's not Blue Ocean at all, just a new visualization being generated. It's just a web method here in the action, using Jackson to serialize a list of stages, which is a POJO, basically, with a list of children or siblings, and then just a few other fields and whatnot. So it means we don't use the other parts of Blue Ocean? No, not at all. Which is probably good. Yeah, the plan was just to look at polling while the build is running, and then once it's complete, you never need to re-render it; possibly even storing the state in a file on disk, because you never need to recalculate it once the job is completed either. Yeah, so historical graphs don't have to be recomputed; I mean, the job is done, it's not going to change. You indicated putting it into the build page would be done without the log.
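The polling-plus-caching plan might look roughly like this: recompute on each request while the build runs (clients poll), then freeze the result to disk once the build completes, since a finished build's graph can never change. A hedged sketch; the file name and function signature are invented for illustration:

```python
import json
from pathlib import Path

def graph_json(build_dir: Path, build_complete: bool, compute_graph) -> str:
    """Return the stage-graph JSON for one build.

    While the build is running, recompute on every request (clients poll).
    Once the build is complete, the graph can never change, so compute it
    once, store it next to the build, and serve the cached file afterwards.
    """
    cache = build_dir / "pipeline-graph.json"
    if build_complete and cache.exists():
        return cache.read_text()           # finished build: serve the stored copy
    payload = json.dumps(compute_graph())  # running build, or first request after completion
    if build_complete:
        cache.write_text(payload)          # freeze the graph on completion
    return payload
```

The same shape also covers the "historical graphs" point: old builds only ever hit the cached-file branch.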
And that was because Pipeline Stage View's log rendering is expensive, or painful, or not terribly useful? It's very slow if you ever load any. I mean, a lot of people use Pipeline Stage View because it's good to be able to see a combination of history and timings across builds on average, but it also pulls in the logs and a whole bunch of other stuff; it's very slow until it's cached, a really poor user experience, but it kind of works. Excellent. Boy, Tim, that is marvelous; thank you, thank you very much. Okay, so is that being discussed in the UX SIG? Where should people go if they would like more opportunities to talk about it and see what your experiment is doing? Yeah, it moved very quickly: Damien asked about it at the last meeting, and I figured I'd spend some time just hacking on it, because we thought it was really hard, but it wasn't actually that hard. That's awesome, thank you. Isn't it an opportunity to call for contributors, related to what we said on advocacy and outreach? I think this would be really rewarding for introducing new contributors, because it's visual. The amount of knowledge required to get your hands on such a thing, well, it's TypeScript, which, let's say, the usual contributor might not know or be expert in; but technically, and in terms of the feedback loop, it could be really rewarding and interesting to call for help on that part. What do you think? Is it only an idea, or, I really want advice from at least Jesse, Oleg, and Tim, because when something is a really deep dive into the core of Jenkins, I don't want to ask contributors who stumble across a meeting to add something that deep; that can be quite an issue if you want to make them feel welcome. I think that this plugin is relatively simple, because, setting aside the approach we tried before, the code is basically just generating a data schema and visualizing it.
And the bigger part is where you take it next. Do you put in SCM information? Do you want a new logs viewer? And the suggestion before: recreating a better job widget to visualize the jobs over time. It's not too far away from an MVP, but is it this plugin, or is it another one where you extend it with some of the other suggestions? Okay, so, coordinating that in the UX SIG, encouraging people to come join the UX SIG and get involved. Is the repository public, Tim, so that people could already start exploring it, or is it not yet ready? I pushed everything onto my fork of Cliff's one; I plan to host it properly, maybe next week. Yeah, probably host it next week and release it, so it's nearly ready; hopefully, let's see if I can fix the last couple of minor issues today, if I have time. Yeah, it's all pushed, it's all on my fork. Thanks for all the work there, to everyone that has been involved; that's really neat, and from a user perspective at least, that's really cool. A question: is it possible to repeat names on the tasks, on the multiple branches that you have there, "nested one", "nested one"? These here are just the stage names, if that's what you're asking about. Yeah, I wasn't sure if you could repeat names like that on stages. Yeah, it doesn't care; they're not unique. They get different IDs in the graph, so if you inspect the metadata there are different IDs that are used, but apart from that, no, they're not unique. Thanks, Tim, that's amazing. That was that. Thank you, thank you, looking forward to it. Any other topics to review here? I guess I have one shameless plug: Jenkins 2.277.1 will release March 10th; that's the expectation, anyway. We have about two weeks to do verification of the release candidate. Tim's published the release candidate; please take it up and test it.
This is a major change, right: we've got the XStream fork work, we've got the Acegi-to-Spring Security improvements, we've got the jQuery update, and we've got tables-to-divs in this release. This is a major, major release, and a great chance to help us test it. All right, thanks, everybody; the recording will be available separately. Thank you for joining the contributor summit. I will send a retrospective survey to everyone who registered; we had 69 or so registered, and I encourage you to complete it. That survey will go out likely early next week. We'd like to try this again, we think, but we want to learn how to do it better. Thanks a bunch. Thank you, Mark, for running the summit; well done. Thank you.