Welcome. This is Jenkins documentation office hours. It's the 17th of November. Thanks for joining us. Today we've got the contributor spotlight, LTS 2.426.1, the Jenkins contributor summit, Google Summer of Code preparations, and the update CLI, though I think that's settled now. And then, if there's anything to discuss on versioned documentation, or any other topics, we can add those to the list. Okay, so let's take it from there. Chris, are you at a point where you'd like to show the contributor spotlight? Or would you like to spend just a minute on LTS before we give you the screen share? Maybe the LTS first. Okay, good.

So Darin, we just released a new LTS yesterday, and Darin and Mark provided a 50-minute live stream review of the features. The URL for that, where is it? Oh, it's actually tweeted; just a minute while I bring it up out of Twitter. There is a bunch that went into this latest LTS, really quite impressive. Here we go. I'll embed the video URL here so that we've got it. So here's the video, and the changelog, release notes, and upgrade guide are available. This is the one that supports Java 21, removes Prototype.js, drops support for Red Hat Enterprise Linux 7, and includes several other interesting changes. All those things are looking good. That's all I had on LTS 2.426.1.

Chris, are you ready to show us the contributor spotlight? Yep. All right, let's stop the sharing. Here we go. Okay, so it should be this one. The thing I changed is the image on this page. Yes, it works now, whereas the other ones are dummies. Right. Could you click on that? It would be good to see the details page, since all the details come from the contributor interview. Excellent. Oh, this looks great, Chris. This looks absolutely exceptional. That's wonderful. Okay. So now, what are the next steps?
So do we need... certainly we need the ops team, Damien and Hervé and Stéphane, to be ready to deploy the site. Have they set up CI for you yet? The audio is cutting in and out; I'm getting every third syllable. Oh, that probably means I've got the wrong setup. I stopped my video thinking maybe it was me. Just a minute. I'm hearing everything that Chris says, so it might be you. Oh, okay. Yeah, it could be. Let's figure it out. Well, my mic is set correctly. Let me do a check. Okay. Test, one, two. I'm not sure what's causing me to break up. Well, let's keep going, Chris. Show us. There, I'm getting it all now. Okay, great. Who knows?

So Chris, what sort of things are the next steps for us? We know that infra needs to deploy it and we need a CI setup for it. Yep. What other things are on the list? We may need a preview setup for it. Okay, so we have a process. Good. Not at the moment. All right, thank you. Thanks very much. This looks amazing. Thank you. Really looks good, Chris. You're welcome.

Who decides who's going to be put up there, and are you doing all the writing for it, Chris? No, no. The writing actually comes from Kevin Martins. Okay. But the decision on who is featured comes from data. We're really kind of proud of this, actually. We started a project to identify the top 30 contributors. We did that by looking, initially, at pull requests; now we're actually gathering pull requests and comments. And it's a cool story to tell. It's an amazing thing to see the mix of people, who they are and where they're from. But we don't have a way... we've got the number of pull requests, but if somebody, and I don't think anybody's doing this, but say you come in and test and review PRs and things like that, we can't capture that easily, right? Well, PR comments we certainly can.
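The ranking approach described above, counting pull requests and PR comments per contributor and taking the top N, can be sketched in a few lines. Everything here is illustrative: the event format, the author names, and the counts are invented, not the project's real data pipeline.

```python
from collections import Counter

def top_contributors(events, n=30):
    """Rank contributors by combined pull request and comment activity.

    `events` is an iterable of (author, kind) pairs, where kind is
    "pr" or "comment". Returns the n most active authors with counts.
    """
    counts = Counter(author for author, kind in events if kind in ("pr", "comment"))
    return counts.most_common(n)

# Hypothetical sample data, not real Jenkins statistics.
events = [
    ("alice", "pr"), ("alice", "comment"), ("bob", "pr"),
    ("alice", "pr"), ("carol", "comment"), ("bob", "comment"),
]
print(top_contributors(events, n=2))  # [('alice', 3), ('bob', 2)]
```

As the discussion notes, this kind of counting misses testers, reviewers without comments, and chat or mailing-list helpers entirely; it is better than no data, not a complete picture.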
John Mark's been looking at PR comments. The thing that's much more difficult for us to capture, that we haven't captured yet: what about the people who help in Gitter chat channels? What about the people who help on the Jenkins user mailing list? What about people who are helping in other places? Those we aren't as well suited for. So the data we're gathering is imperfect, but it's certainly better than no data. Absolutely. Super. So thanks very much. We're ready to go on to the next topic. Yep.

Okay, so the next topic was the Jenkins contributor summit at FOSDEM, in Brussels, February the 2nd. Here we go. On February 2nd of 2024, we will have the Jenkins contributor summit in Brussels. John Mark has found us a venue where we can meet. We've already got commitments: Ullrich Hafner plans to be there, and we expect Alexander Brandes to be there. Certainly we know that Damien Duportal will be there, and Stéphane Merle and Hervé Le Meur. And we hope that I'll be there and that Alyssa will be there, and several others. So we're looking forward to it. It's gonna be great. This picture is from last year, so I'm proud of it. Very good. Any questions on FOSDEM?

Okay, next topic then: GSoC 2024 preparation. Chris, do you want to give us an overview? Yep. So we have had two meetings to discuss and to align on the project ideas page. We now have four ideas carried over from last year, and we're adding new ideas from Mark, me, and also Deos on the GSoC SIG team. I think Hervé made a suggestion for a UI project too, so that might be interesting. Oh, good. Okay, that's a cool idea. We still have our stats pages right now, but they may need some proper UI experience. Yeah. So, Hervé, I think, was reporting this one, and this thing is a very stone-knife kind of user experience, right? You look at this picture... Yeah, I see.
And likewise this one. Now, the data we have here is incredibly valuable, but the presentation is... well, it's an ancient presentation. To remind people of the kind of really cool presentation we can get: if we look at what Gavin Mogan has done with the plugin site, here is one little tiny piece of data that I use all the time. That thing comes from the same data source, but with much nicer presentation, rather than what we saw: tables of data, or graphs with no labels, no symbols, anything. So much, much better. And the details are still available, so I can still click this and it will in fact load up all the details and show me very interesting things. Good idea, Chris. I like that. Okay. And I will publish the new ideas to James, or at least I'll open the PR for it. Great, all right. Yeah, we need to add more details to each of our ideas as we go.

Well, that's one that I've got to do some more on, because we certainly listed a number of ideas for documentation last time. So Meg, this is one that is a very real problem that we have right now with the documentation site. When I look at jenkins.io, there's this thing called an extensions index, but the extensions index is generated by a program, and unfortunately the program is broken by plugins using modern development techniques. And so this list, you see, it looks like, hey, there are a lot of things here. Except when you look at the list, you realize: oh, but there's no Git plugin. Oops, and there's no SCM API plugin, and there's no GitHub Branch Source plugin. So what this is right now is an incredible shrinking list that's failing to show us all of the possible extensions that are actually out there. Yes. And the suggestion here is that the tool needs to be rewritten. Rewritten, one, because its current technique is painfully slow and heavyweight.
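For context on what an extensions-index generator has to do: at its core it finds Jenkins extension points declared in plugin code, for example classes marked with the `@Extension` annotation, and collects them into an index. This is only a toy regex sketch of that idea; the sample source and class names are invented, and the real tool works quite differently (which is exactly where it breaks on modern plugin build setups).

```python
import re

# Matches a Java @Extension annotation followed by the next class
# declaration. A deliberately naive illustration of indexing, not
# the actual jenkins.io extensions-index tool.
EXTENSION_RE = re.compile(r"@Extension\b.*?\bclass\s+(\w+)", re.DOTALL)

def find_extensions(java_source: str):
    """Return class names annotated with @Extension in one source file."""
    return EXTENSION_RE.findall(java_source)

# Invented sample source; these class names are hypothetical.
sample = """
@Extension
public class GitStatusHandler extends StatusHandler { }

public class Helper { }

@Extension(ordinal = 10)
public class ScmListener implements Listener { }
"""
print(find_extensions(sample))  # ['GitStatusHandler', 'ScmListener']
```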
And the other is that it's a very good project, possibly a shorter project rather than a medium or long one. Cool. All right, that's what I had for now. Chris, versioned documentation, anything you want to share there? We're still working on that. I think I've already said something about updating the versioned documentation to the latest version, but I will recheck it. Once that's done, we can split it out and host it separately, so we can then have both, you know: the one site and the Gatsby site.

All right, so the Gatsby site. Meg, to catch you up on this one: whereas the Jenkins site with its documentation has a kind of convoluted and weak navigation in the left-hand panel, the versioned documentation site that Vandit Singh has been working on, as part of Google Summer of Code 2023 and beyond, gives us much nicer navigation and gives us versioned documentation. So you'll be able to look at the documentation as it was for the particular version of Jenkins that you have installed. Oh, that would be nice. Right, it's an enormous work. It's amazing how much effort Chris and Vandit have had to put into it; it's just absolutely phenomenal. So here's how it looks, and you can see the navigation is clean and elegant and the layout of the pages is attractive. It's very nicely done. Oh, very nice work. So Chris, thanks again for your amazing work on it. And let me put a link to that page into the current notes just to be sure that we've got it.

Any other topics we need to discuss today? I have a half-assed thing, just a half-formed thought for you. Sorry, pardon my language; the Keptn team swears more than I do and so I'm getting into bad habits again. I was reading old minutes and saw there was an interest in OpenTelemetry. Is that for stuff running on Kubernetes or non-Kubernetes? Both, actually.
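One small piece of what a versioned documentation site needs is mapping the reader's installed Jenkins version to the nearest documented version. Here is a minimal sketch of that resolution step, assuming dotted numeric version strings; the list of available versions below is hypothetical, not the site's actual inventory.

```python
def pick_docs_version(installed: str, available: list[str]) -> str:
    """Pick the newest documented version not newer than the installed one.

    Versions are dotted numeric strings like "2.426.1". Falls back to
    the oldest available docs if the installed version predates them all.
    """
    key = lambda v: tuple(int(p) for p in v.split("."))
    candidates = [v for v in available if key(v) <= key(installed)]
    return max(candidates, key=key) if candidates else min(available, key=key)

available = ["2.387.3", "2.401.3", "2.414.3", "2.426.1"]  # hypothetical list
print(pick_docs_version("2.426.1", available))  # 2.426.1
print(pick_docs_version("2.420", available))    # 2.414.3
```

A real site would also need to handle weekly versus LTS lines and non-numeric suffixes; this only shows the basic "closest version at or below" idea.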
So OpenTelemetry in this case is mostly about this: there's an OpenTelemetry data source available for Jenkins, and it's able to provide data to OpenTelemetry consumers, or receivers, I guess; I don't know the right word to describe them. Things like Datadog or Grafana. And the idea here is that we need a more detailed view of some of our particularly expensive jobs on ci.jenkins.io. For example, we have one job that runs 10 virtual machines for an hour or more, and we'd like to understand whether there is something we could do to save time in that thing, to reduce costs. We have another job that runs 610-plus containers in parallel, some for 30 minutes or more. And the question then is: is there a way we could use OpenTelemetry to see inside those things a little better, to understand how we could save money? Uh-huh.

For the Kubernetes thing, have you thought about just using Keptn? Good question. I confess I don't know what Keptn would do for us. Tell me what it would do for us. Well, the new Keptn is all cloud native. We support OpenTelemetry with metrics, Keptn metrics, which pick up Kubernetes metrics as well as any metrics you define yourself. And we now have an analysis function in there too that will run analysis for you on your data. It's sophisticated: you can weight it, you can have multiple things being evaluated, et cetera. I'm biased and naive, but I think it looks very cool. Interesting. Okay, so I'll need to do some more looking and see. I'll need to do some more research. If you want, flip over to open a new tab and go to keptn.sh. Sure, here we go. We are not as sophisticated as Jenkins; we're a junior project, we're starting out, so it's not as pretty. I'm going to docs. Uh-huh. Getting started. Okay. And Keptn observability. Hmm, okay. You can read through that.
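The "see inside expensive jobs" goal above boils down to looking at trace spans and asking where the time goes. A self-contained sketch of that analysis step over span-like tuples follows; the stage names and durations are invented, not real ci.jenkins.io data, and a real setup would pull these spans from an OpenTelemetry backend rather than from a list.

```python
def longest_spans(spans, n=3):
    """Return the n spans with the largest duration.

    `spans` is a list of (name, start_seconds, end_seconds) tuples,
    the shape of data an OpenTelemetry trace of a pipeline run would
    give us after export.
    """
    by_duration = sorted(spans, key=lambda s: s[2] - s[1], reverse=True)
    return [(name, end - start) for name, start, end in by_duration[:n]]

# Invented pipeline stages, not real ci.jenkins.io measurements.
spans = [
    ("checkout", 0, 40),
    ("build", 40, 1900),
    ("test-linux", 1900, 5200),
    ("test-windows", 1900, 6100),
    ("archive", 6100, 6220),
]
print(longest_spans(spans, n=2))  # [('test-windows', 4200), ('test-linux', 3300)]
```

Even this trivial view answers the cost question the discussion raises: the longest stages are the first candidates for parallelization or caching.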
And then in the left frame, click on user guides and go to analysis. Analysis is brand new; we've just gotten that. Okay. But it now works with metrics and OpenTelemetry. Interesting. Okay. There's a blog post being written about it. So is the fundamental thing here that we describe what we expect from our SLIs or SLOs, and Keptn will then monitor that for us? You can, yes. Yes. Wow. All you have to do is install Keptn on your cluster and you will begin picking up OpenTelemetry. Oh, and you have to have your OpenTelemetry provider defined. But after a couple of trivial things it will start collecting metrics and OpenTelemetry data. You can add in additional metrics of your own if you want, and then you can put the analysis on it. The difference being: metrics run constantly, while an analysis you run when you want to, and you say, I want to run it for this time span, or I want to run it for the last 10 minutes, or whatever; you can do either one. Okay. And you can query whatever database you're using. I think we currently support Prometheus, Datadog, and Dynatrace, of course, and you can use multiples of any of them. Then you put your queries in. Your queries use Go templating, so you can put variables into your analysis definition; then each time you run the analysis you can say, use this value. So you can reuse that definition for different criteria. The feature is new, but the people who put it in have been working with this stuff for a long time, so it's fairly sophisticated. And I'm actually just reviewing it; I think within a week or so we're gonna have a blog post about the analysis. I need to tighten up the prose a little bit, but it's very good. Cool. Okay. So the idea here then is that we ask Keptn to track OpenTelemetry data, and it then exports that data and makes it visible to us. Yes. And you can display it on Grafana or Jaeger or whatever else you want.
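Meg describes Keptn's analysis as a weighted evaluation of multiple objectives against your metrics. As a rough illustration of that idea only, and emphatically not Keptn's actual scoring algorithm (the metric names, thresholds, and weights below are all invented), a weighted SLO check might look like:

```python
def analyze(values, objectives):
    """Score metrics against objectives, in a simplified SLO-analysis style.

    `objectives` maps metric name -> (max_allowed, weight). Each metric
    that stays within its threshold earns its weight; the result is the
    fraction of total weight achieved. A missing metric counts as a fail.
    """
    total = sum(w for _, w in objectives.values())
    achieved = sum(
        w for name, (limit, w) in objectives.items()
        if values.get(name, float("inf")) <= limit
    )
    return achieved / total

# Invented objectives: p95 latency under 500 ms (weight 2),
# error rate under 2% (weight 1).
objectives = {"p95_latency_ms": (500, 2), "error_rate_pct": (2.0, 1)}
score = analyze({"p95_latency_ms": 420, "error_rate_pct": 3.5}, objectives)
print(score)  # latency passes, error rate fails: 2 of 3 weight achieved
```

In the real tool, as described above, the queries behind these values would run against Prometheus, Datadog, or Dynatrace over a chosen time span; this sketch only shows the weighting step.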
I think Jaeger is part of Grafana or something. It gives you the full end-to-end trace of a deployment, so you can see where the bottlenecks are quickly. So it appears to be quite interesting. Thank you. Thanks very much for the pointer. I'll point Damien and others at it as well. Thank you. And if anybody wanted to, we do have two community meetings, which are typically very boring; we're not getting a lot of participation. So you or Damien could pop into one of the community meetings, talk to the developers, get some details, and see if it works. Thank you. Yes, and tell Damien I miss him so much. I certainly will. He smiled when I told him that you and I had dinner together. That's great. Thank you. Yeah.

Anything else you wanted to highlight on Keptn? No, that's it. I mean, the other cool thing on Kubernetes, and they're separate parts, you can run them separately: if you're doing a deployment, you can have Keptn intercept the scheduler. Because Kubernetes sort of handles each microservice individually, you can say: these five microservices are all part of my deployment; don't spin up the pods until everything is ready for all of them. And it can also run tasks after the deployment. Okay. Which comes in handy. But it is one product; you can run both parts or only one. It'd be hard to do the task stuff without the rest, I think, though it's possible to do some. But the observability parts you could do completely without the task stuff. So you can poke around the documentation, which is, I don't know, maybe C-plus, B-minus quality right now, but there's some information there. Thank you. Thanks very much. And we have a Slack channel; tell Damien he can join us on Slack. I will pass the word along. Okay. Anything else, Meg? No, I will shut up now. All right. Well, thank you. Thanks very much.
Sorry for coming in and pitching my own project, but... Thank you for sharing another open source project. That's great. Any other topics for today? Chris, I assume none from you then. Yep. All right, let's call today done. I'll stop the recording. Thanks very much. Okay. Good to talk to you. Have a good week. Oh, thanks. We're not meeting.