Welcome to the Jenkins documentation office hours; it's the 29th of March. Today's agenda: the She Code Africa Contributhon, Jenkins docs monthly metrics from the Docs SIG notes, and then the actual data behind those. Any other topics you want to put on the list? You had mentioned that we might want to review user feedback from the jenkins.io site. Part of that is actually included in the Contributhon, but we could do that separately as well. I don't know what the process is — whether somebody goes through the feedback and files issues for later; sometimes you see things and think, oh, I can fix that. That's exactly right — in fact, that user feedback was one of the key drivers for what we're doing in the Contributhon. Okay, so they fit very nicely together; they dovetail really well. All right, the Contributhon reminder: by the way, it's right on the top-level front page now — we put it on the jumbotron. We'll take it down from the jumbotron in a few days. More info is available here. What we're doing is using the feedback sheet that you were mentioning — here is the detailed feedback sheet. It collects comments from all the way back in November of 2017 up to just today, actually; we received one point of feedback yesterday and one today. We used that feedback to identify a series of tasks that we're going to invite these women from She Code Africa to be involved in. Those tasks are about helping us improve the Pipeline Steps Reference and the Pipeline online help with more examples, better descriptions of step return values, and better descriptions of step arguments. The crucial thing here is that this starts on Thursday the first.
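As an illustration of the kind of improvement those tasks call for — clearer step arguments and return values — an enriched doc example for the `sh` step might look like this (a sketch only; the actual wording will come from the participants' pull requests):

```groovy
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                script {
                    // 'script' argument: the shell command to run.
                    // 'returnStdout: true' makes the step return the
                    // command's standard output instead of its status.
                    def version = sh(script: 'git describe --tags', returnStdout: true).trim()

                    // 'returnStatus: true' makes the step return the
                    // exit code as an integer instead of failing the build.
                    def status = sh(script: 'make lint', returnStatus: true)

                    echo "Version: ${version}, lint exit code: ${status}"
                }
            }
        }
    }
}
```

Showing what each named argument does and what the step hands back is exactly the gap the feedback keeps pointing at.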
Zainab Abubakar has said that she will schedule a launch session for us. Meg, it may not be at a time that works well for you, and that's okay. I think she was proposing to put it during our Thursday office hours time. It's about six in the morning my time — more like seven or eight, actually — so it's even less likely that you're functional. I might try to do it; it's tough, but it might be fun to be there for the kickoff. Great. Now, she did report that they had enough funding for additional women to assist. Whereas we had initially proposed that we would mentor three on the tasks, I talked with Kristen and with others and confirmed that we think we can handle more, so they will provide nine. Wow. So we will mentor these nine women on the tasks. One of the points raised was: what if they run out of things to do? Let's talk about what that would mean, because for me it would be absolutely interesting if they ran out — let's see, where is our task sheet — there are so many steps, and that's precisely it. Somewhere in this document there is a link to the pivot table; here we go. This pivot table is created from the feedback data — you have to go to the pivot table form. In this pivot table we have the count of feedback items by page, filtered so it's only showing Pipeline steps. As I scroll, you see we've got lots and lots of places where people have given feedback on Pipeline steps, and if we look at the columns, almost all of them were rated either unhelpful or not that helpful — meaning every one of these needs some improvement.
It's not all of them that are that way, but very many are. For instance, the vast majority of the top 20 are rated unhelpful or not helpful — "to improve" in every case. You know, if these nine women close all of these, I think we let them take the rest of the money and have a holiday. Well, Zainab actually had a better suggestion: if they resolve things faster than expected, they are willing to assist with terminology improvements. The benefit there is that many of the terminology improvements need to be done inside plugins, and those improvements need these same skills anyway — you've got to be able to compile a plugin, load it into Jenkins, and see that your change to the terminology did not break things and is still presented to the user correctly. So we get benefit either way: if they're faster than we expect, we'll have them assist with the terminology improvement project as well. Back to the pivot table: do we have any way of marking that an item has been resolved, so that the table looks different after we do all this work? Otherwise someone who comes in in six months is going to see that the steps documentation is a disaster. The pivot table won't be updated, because it's just a snapshot in time of the data — that's why it's a declared copy; I took a separate copy because I needed to do the analysis. But I think we need a system where participants can declare that they are working on a particular item. It may just be that we have them put their name over on the right-hand side and say, okay, I am the person working on this one. And map it to a GitHub issue, right?
Well, most of these probably are not actually tracked as issues in GitHub — that's a good point; we might want to do it as a Jira issue. Maybe that's even a better way to say it: a Jira issue, and then an owner or author column. Jenkins uses Jira to track bugs, and one of the instructions we give the participants is to create a Jira issue — where was it? I remember you guided me on that, and I liked it a lot. Here we go: "Submit an issue to report the missing online help." It's specific. I was thinking GitHub issue when I said it. Right — if it's a Jira issue, let's call it a Jira issue. Well, that's why it says "on the Jenkins issue tracker." So the intent is that they submit an issue through the Jenkins issue tracker. The assumption is that the Jenkins issue tracker is the place where these particular plugins track their bugs, and for most of them that's expected to be the case. I haven't checked every one, but in general plugins track their issues in Jira rather than in GitHub Issues. Okay, good. And then if I come and look at this in a year, at least I see the ticket, and if I'm curious I can click on it and see whether it was closed or is still open. Correct. So the idea is we'll create a Jira issue, link to it here, and the owner column tells us who is working on it; we can use those two pieces of information to help us understand what's happening. Maybe what we say is "developer/author." Good. Okay, so thus far we've been through those two things, and I assume this is the kind of thing we'll need to present.
I think Zainab will likely call on me to describe how the contributors will do this — actually, let's call them participants. There we go. Wait, what happened — did I just lose my filter? No, okay, good, it's just showing the pivot table. All right, got it. So again, the tasks that we've got for Pipeline example improvements are quite well defined — reasonably detailed. If we get to terminology improvements, we don't have a detailed breakdown there, and that will require more interaction between us and them, because there are some places where they simply cannot do a terminology update. For instance, they can't change the word "slave" in the API class MasterToSlaveCallable, because that would break all sorts of compatibility. So we're not ready to do that, but text messages and strings we can change, and we've got to be sure they understand, if we get to that point, this is how you do it and where you can't do it. Okay, let's see — anything else? Oh, I should make a note. Do we have more people to help with mentoring? I saw Angelica — did anybody hear from Angelica? Yes — so we've got the launch session on Thursday, and we've got additional mentoring volunteers: Angelica and Oleg Nenashev. Neither of them has said they will be there every week, but both have offered to assist if there are specific questions or specific needs, and they're both active plugin developers who have maintained plugins and understand some of the challenges that go with developing a plugin. I mean, Oleg's got so much going on; Angelica might be willing to just join in. Yeah — her phrasing was that she was not willing to commit, but was happy to assist. Okay.
If nothing else, having somebody like her just review the PRs would probably be invaluable. Exactly — that's very real. These pull requests, particularly the early ones, will probably need more changes and corrections than later ones, and that's where we could really use her help, and help from others, to say: ah, this was good; no, you need to change this to be this other thing. Okay, good. Anything else on She Code Africa? That looks good. Do we know how many hits that page is getting on jenkins.io? Just curious. That's a fair question, but I don't; I don't regularly look at it, and I'm not sure how I would dig out the page hits. The ulterior motive is just that I think this is a cool project, and it would be nice if it got a little more attention. I agree — that's why I was so enamored with the idea of putting it on the front page. One more way to try to persuade people: come help us. All right. The next topic was reviewing user feedback. There are two different sources of user feedback: GitHub issues, and recent comments in the sheet. Let me open up the feedback sheet — docs feedback details, here we go. We'll embed a link to that into the notes, and then for GitHub issues we go here and take a look at this one. Okay, good. All right, let's look at them both. The most recent bug reports are from 10 days ago and about six weeks ago — one is an issue with a Docker-based tutorial on a particular environment that works on other environments. So is that a bug in the docs? It's on the Debian testing environment, and I don't know, because that strange "mount point does not exist" message doesn't happen on Debian 10, the released version of Debian — it only happens on the pre-release of the next version.
I don't know if that means we will be broken when the next version releases in a few months, or whether they will fix it before release. I don't know — I've seen the message, I recorded what it says, and I can't explain it. Then I confirmed that it works just fine on Debian 10, the official released version of Debian. But how does this work — do the people doing the new Debian version automatically know about this? They don't, and my attempts to use their bug reporting system all went down in complete flames; it's more complicated than my skills were ready for. Then it must be really complicated. Well, they wanted a certain set of things that my Debian systems don't have — for instance, the ability to send email publicly — and I don't configure my machines to be able to send email, because I don't want them used for spam. Right. But it's a good point: I probably ought to tweet it somewhere, just to say, hey, I couldn't figure out how to report a bug, but there's this difference in behavior between Debian 10 and Debian 11 in the Docker packaging that might be of interest. So let me make a note to myself: I want to tweet about a difference between Debian 10 and Debian 11. It's a complicated world. And it's not a shock that there's a difference, because Docker is in fact a separate product; it documents how you use it, and they haven't written their documentation for Debian 11 on the Docker site — all I'm using is the Docker package provided by the Debian project. It behaves at least somewhat differently in this case, and that difference gets in our way.
All right, that's a good action item for me; let me put that down. Got it. Okay. Now, other feedback. This Pipeline Syntax page has an error — what they're asking for is a good fit for general documentation: it's saying something that's incorrect in certain ways, and it would be better if we could correct it to be more precise. The next one down is actually a request for an entire category of documentation on the REST API, and there's a project idea in Google Summer of Code that proposes to automate the generation of Jenkins REST API documentation. This could be years of work to describe all of the Jenkins REST API endpoints. Now we're getting into feedback that arrived last year: for instance, our nginx reverse proxy configuration page needs somebody who knows nginx configuration very, very well and can evaluate, as an expert, whether it is correct or not. That's not me — I'm a happy user of nginx, but I'm certainly not enough of a configuration expert to know if it's absolutely right. And then we've got a bunch of things from Jonathan's creation of migration steps that just need someone to work on them. So that's the view from GitHub. Now, for clarity, the GitHub issues come from jenkins.io itself: if I look at, say, the Installing Jenkins page, "Improve this page" takes me to GitHub, and "Report a problem" takes me to GitHub to report an issue. Those two links at the bottom of the page are the two sources of GitHub issue entries — or GitHub pull requests. This link right here, "Was this page helpful?", is the one that feeds into the spreadsheet. So I can say yes or no, or I can open up this form.
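For context, the reverse proxy page in question documents a setup along these lines — a minimal sketch (the hostname and port are placeholders, the real page carries more directives, and this is exactly the level of detail an nginx expert should vet):

```nginx
server {
    listen 80;
    server_name jenkins.example.com;   # placeholder hostname

    location / {
        # Forward requests to the Jenkins controller on its default port
        proxy_pass         http://127.0.0.1:8080;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        # Plugin uploads and agent traffic can exceed default buffering
        proxy_http_version 1.1;
        proxy_request_buffering off;
    }
}
```

Whether directives like the buffering settings are necessary or correct in every deployment is precisely the question the feedback asks an expert to answer.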
It asks me for more details, and those details get inserted into this sheet here. Ultimately we get one or two of these a week, maybe three — so there are not a lot of people who give us this feedback, but because it spans four-plus years now, it's helpful in guiding us: where are people continuing to make comments, and how can we use that? Most recently, for example, this row right here, 1072 — let's see if I can make this big enough to actually read. The user said it was moderately helpful, and they asked: is there a typo here — should this say something different? So I actually opened the page and looked at it. Their question was: it says "not available" — should it say "now available"? I can tell the page is correct. Let's make this big enough to read. Okay, it's describing the navigation bar in Blue Ocean: there's the logo, Pipelines, Administration, and it says this button is not available if you do not have administrator permission. I think that's correctly stated; if it said "now available," that, for me, would be wrong. Good. So yes, they gave us feedback, but there isn't any way for us to say: we're done with this feedback, we've processed it. I don't have write permission to this sheet, and I actually don't want write permission. The sheet, interestingly enough, Meg, is apparently owned by Tammy Fox. Interesting — I'm not entirely sure how she ended up owning it. Giles implemented this, didn't he? He did, and he worked for her, so his ownership transitioned to her somehow. Yeah — maybe he kept owning it, and when he left the company everything just went to her.
I've considered asking her to give me permission to it, but read-only has been just fine thus far; the feedback is not so frequent, or so brutal, that I can't handle it. There's some profanity in it, and sometimes it's a little indelicate, but in general it's been okay. I suspect she'd happily give it up. Yeah — fan of open source or not, this is not a page she interacts with at all. But as an example, this last feedback is one that we know is an issue: the "Hello World" Pipeline tour — the guided tour you click through. It's riddled with challenges, because it describes things imperfectly and makes assumptions — for example, that you've already installed the Docker plugin, but we forgot to tell you to do that. This one is not a great experience for a first-time user, and we wish it were; the build-tool tutorials tend to be a much better experience for first-time users. However, they are much, much more detailed — they are just full of "do this and do this and do this." That first Pipeline tutorial might have been Giles's first project when he started, and it's a fine project in terms of what it does, but it still has assumptions about what you did as you were getting to this point, and those assumptions are not always valid. "Follow the instructions to complete the installation" — there are several paths you could take while still saying you followed the instructions, and only one of those paths will actually arrive at a functional Pipeline. All right, let's back out from here. For me, the message inside the user feedback continues to be: give more Pipeline examples in the Pipeline docs.
And I still think that is an indicator that many users don't realize the Snippet Generator exists. They think they need to read the documentation like a man page and create the code right from it — and that's not what you need to do. And the other thing: I hate these "hello world" things; I don't think they're valid. I like the skeleton we start out with in the class, where we do a series of echo statements that say: here's where we're going to build, and here's where we're going to test. I think that's a much better high-level intro than "hello world," because "hello world" has nothing to do with what we're doing here. Yeah, good point. And that's more specific to the tutorial — the guided tour is poorly described, or needs more details. For myself — I know a lot of this stuff now, sort of, because I've written about a lot of pieces of it and read a lot of it — if I go look at the Jenkinsfiles that are being used to build our docs, I have no idea what they're doing. It makes no sense to me. And what I need is not great detail like "this is the syntax of this step," but: why does this Jenkinsfile look like this? Some of it is going to be that it's calling a shared library function here — that level, sort of a 50,000-foot perspective, but with enough deep detail to be clear. Lots and lots of links. I look at them and I don't see a lot of lines that have steps; there's a whole art here to these Jenkinsfiles that is not being captured anywhere. Right. Yeah, good point. Very good. Okay. And meanwhile — well, no comment.
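The echo-based skeleton mentioned above might look something like this (a sketch, not the exact class material — the stage names are illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Here is where we will build the project'
            }
        }
        stage('Test') {
            steps {
                echo 'Here is where we will run the tests'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Here is where we will deploy'
            }
        }
    }
}
```

Each echo is later replaced with real steps, which lets a newcomer see the shape of a Pipeline before any tooling is involved — the point being made about why this beats "hello world."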
I would say that when we're not being recorded. Okay. All right — excuse my scrolling — anything else on user feedback? No, that looks good. Okay, so the next piece was monthly metrics from the Docs SIG. Since the documentation special interest group began, we've been using various forms of measurement to see whether we are healthy in terms of how we're processing contributions to the Jenkins project, and that's what this is trying to do: help us see how we are doing. So, at the end of each month, before the Docs SIG meeting, someone gathers metrics and publishes them, and we have a record of these kinds of measurements for the last 12, 18, maybe 24 months, retained in the Docs SIG meeting notes, so they're archived. Here's the story. The first piece was: what does our GitHub issue picture look like — are we being overwhelmed by issues that are arriving? This month we have 112 open; last month we had 114. So we stayed current with issue reports and didn't grow significantly. That's the total number open, not the number of new ones filed? Correct, that's right. And let's go ahead and look at that number now — that's a good question, because we probably ought to add it. I think that data is available: if I look at Insights — let's look at issues first, so closed — and if we filter by... I'm not sure I know how to filter by last month, but I think there's a way we could do that. If I look at the list of closed issues: closed nine days ago, 14 days ago, 12 days ago, 10 days ago, 26 days ago, 26 days ago — so we've got six that we've closed in the last month, and therefore we must not have had more than that opened in the last month. Consider adding progress numbers to these statistics — it'd be like watching all the COVID news.
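For the record, GitHub's issue search can do that last-month filter directly — queries along these lines in the repository's issue search box (the date range shown is illustrative for this meeting's period):

```text
is:issue is:closed closed:2021-03-01..2021-03-29
is:issue created:2021-03-01..2021-03-29
```

The first lists issues closed in the window, the second lists issues opened, which together give the "new versus closed" number discussed above.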
Yeah — you give me better numbers. Exactly. Oh, excuse me — didn't mean to sneeze into the microphone there. No worries. Just an hour out walking, so, fresh air. Yep. So this graph is gathered by the DevStats site provided for us by the Linux Foundation. What they do is look at GitHub and extract, for the jenkins.io repository, the time before commits or activities from someone other than the author of a pull request — so, how long between the time someone submits a pull request and the first response from somebody else. This is a view over the last two years. The range on the left goes from about three weeks at the peak down to zero — no time, or in this case a tenth of an hour. What we see is that we had wider variation, we stabilized over a period from early 2020 through the end of 2020, and now during 2021 we're typically down here: our 85th percentile says that we are at two to four hours maximum before someone gets a response to a pull request. That's great. It is, and this is largely thanks to Marky Jackson — I wish I could claim otherwise, but he does a great job of giving feedback very rapidly on pull requests. So, a question — if I file a pull request... oh no, never mind. Go ahead and ask. I was thinking about Jira tickets: if I file a Jira ticket and then turn around and fix it, is that off the picture? It's not, because it's still a PR — well, this does not show Jira interactions at all; it's just looking at GitHub. That's what I just figured out. I'm waking up slowly today.
No, no, that's great. So that picture, for me, says this data shows we're reasonably healthy: we're prompt in our first responses to an arriving pull request. And that big peak in January is still less than 24 hours? That's correct. Now I'm not sure I understand what I'm seeing as I hover over this — oh, there we go; I just have to hover a little more carefully. The peak there is 13 hours, so still less than a day, and that's the 85th percentile; the median time is still about ten minutes. You must have had a couple of really big nasty ones come in, or somebody took some sleep. I'm amazed that the number is not 24 hours, because we can receive contributions 24 hours a day and most of us don't work 24 hours a day. So either most of those contributions are coming in at a time when somebody is on, or we've got people around the world watching and responding. It's a little bit of both, right? Exactly — I think the bulk of the submissions probably come from geographies where there are people watching. Correct: the bulk of the submissions are coming from time zones where we also have people watching. So this is good. Okay — that was time from PR to engagement. Now, time from PR open to merge is not as good a story, but it's still a useful one. Let's look at the live graph so we can see it full screen. The axes here show a seven-day moving average of time from opened to merge — that's the question, right: how long does it take for us to get from open to merge? The measurement on the left is for the green line.
The axis on the left ranges from zero to six weeks — that's the 85th percentile of time to merge — and the axis on the right is the 15th percentile, the very bottom number on the picture. Right now, the 85th percentile time to merge is eight hours, which is actually pretty decent. But if we look backwards in time, there are times where it's a day, other times where it gets much, much worse — several days, or even more than that; let's see, this one is six days. This is an indication that we have some PRs that are languishing, that are not getting delivered to users, and you can see the hint here: here's the one from Zainab, an addition for Kubernetes that has been there since November of 2020 — and that's actually one of the younger ones of these ancient ones. Our oldest is back here, a year before that. Now, some of these — like the oldest one — are proposals to do something that depends on software, so we can't merge until the software does what the proposal says, and I've been tempted to say we'll close it until the software is ready. Others are contributions that came in from Hacktoberfest or the UI Hackfest where we asked for changes, we haven't received the changes, and none of us have been able to take the time to make the changes ourselves. So yes, we've got areas to work on. This one in particular, "Scaling Jenkins on Kubernetes," is a really awkward one for me — shame on me that it's not merged yet. We've got an agreed plan that says what to do with it — Zainab agreed to what Oleg and Marky were proposing — so I've just got to implement it. What that says is that this highlights a place where we would like to improve.
We'd like to get some of those old lurking pull requests resolved or closed. Okay, so that's open-to-merge, and that was not as happy a story. Then contributors: for last month we had 52 merged pull requests, one new issue, and six closed. Here's what it feels like for the month, and that's available right from the GitHub UI: if we look at this page under Insights, you can see that for a period of one month we've at this point had 63 merged pull requests, one that's currently open that's been open in the last month, one that's currently closed, and one new issue opened. So the story here is okay. Now, the piece we get from this that is not as healthy is the number of contributors: we've got a smaller set of contributors than we'd like, and when we look across this list, we know the people. Oleg, Daniel Beck, Pierre Bates, Tim Jacomb, Kara de la Marck — oh, this one is a new contributor, so that's a big win — and then Gareth Evans, Gavin Mogan of the board, Ewelina Wilkosz of the Jenkins board, Marky Jackson, Olivier Vernin, our infrastructure officer. You can see that these are people we know, and it would be much better if our contributor list weren't entirely people as widely known as these. How do we get more people interested in contributing? We're working on that — Oleg's got some ideas he's pursuing to try to increase and improve the number of contributors. Goodie. The next piece of data was open pull requests: when I took this sample there were 25, and right now there are 23. So we've kept things in this mid-20s range for an extended period — we're not losing; we're not getting more contributions than we can process, but we're not catching up either.
So Jonathan's contribution here hasn't been merged yet because it needs changes. We've got a number of pending contributions that need somebody to take the time to make the necessary changes and then merge. Okay, that gives us the picture of docs contributions directly. Then there's plugin documentation. Plugin documentation is a process of converting from the old Jenkins wiki into documentation that lives right inside the repository of the plugin. This is one where we conceptually might have been able to use She Code Africa contributors, though it's becoming increasingly challenging, because we're getting to plugins that have fewer and fewer installed users. Here's the plugin migration progress diagram: we've completed over 650 out of 1800, so roughly one third. We have almost 30 open pull requests and had about that many merged recently. If you watch as I scroll, what we're looking for: green is done; blue is merged but not yet released — so done except for a release; and a line with a white background hints at what remains. Ah, yes, here we are. Now, there are 270,000 installations of Jenkins worldwide, and we don't get to a plugin without documentation-as-code until we're at the less-than-10% point. So it's good progress — Gavin Mogan's work on this has been amazing; he's done really great work. I wish we could normalize that number, because when you say 650 out of 1800 it doesn't sound that great. But if we could normalize it for hits — take a number for all of the installs of all of the plugins, and then ask, of the ones that are fixed, what proportion of installs they cover — we'd have a much more impressive number. Right, and I've used that framing; that's a good point.
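The conversion itself is mostly moving the wiki content into the plugin repository's README and pointing the plugin's pom.xml at the repository — roughly like this (the plugin name is a placeholder; the authoritative checklist is the Jenkins plugin documentation migration guide):

```xml
<!-- pom.xml: replace the old wiki link with the GitHub repository URL,
     so the plugin site renders the repository's README.md as the docs -->
<url>https://github.com/jenkinsci/example-plugin</url>
```

With that change released, the plugin site serves the in-repo documentation instead of the retired wiki page, which is what turns a row green in the progress diagram.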
The kind of framing I used in the past was to say: of the top plugins — if we count here, about 14 per page, so of the top roughly 200 plugins — we have maybe five or ten that aren't documentation-as-code. So the story there is quite attractive, and this particular way of seeing the data has helped us motivate people to help with it. So that covers the topics that I had, and we've just about reached our hour. Meg, any other topics you wanted to go over? No, it's been wonderful. All right, we'll call a stop to it. I'm going to write the changelog for the weekly Jenkins release that will arrive tomorrow and submit that shortly, and then kick off on Thursday for She Code Africa. Exactly — and look for an invitation from Zainab for the launch meeting on Thursday; they'll launch with each open source project separately so that we can meet with our specific participants and share with them how to get involved. Excellent. See you then. Thanks, Meg. Thank you. Bye-bye now.