Hi everybody. Welcome to this new Jenkins infrastructure meeting. We have quite a lot of topics to cover this meeting, especially considering that we had to stop last week's meeting a little bit early. So the first topic that I want to bring here, which we started discussing briefly last week, is: is it the right time to bring back Confluence on the Jenkins project? We started the discussion on the Jenkins infrastructure mailing list, and I really invite you to join the discussion and provide input. I think we are trying to solve different things here, so I'm really looking forward to your feedback on that topic. The next topic that I also want to cover now: we had a discussion with the Linux Foundation about an opportunity to use a tool named LFX. LFX is an interface in front of Snyk, a tool that we can use to analyze security issues in the project. By default, it's enabled on the Jenkins CI organization, and we have a small group of people who are testing it. We could also enable it on the Jenkins infra organization. We have two approaches here: either we enable it on every Git repository that we have on the Jenkins infrastructure, or we identify a small group of repositories that we want to analyze. Based on that, if everything goes well, maybe we could extend the usage. If you have any expectations on this topic, I would be really interested to hear about them. Oleg also started a discussion on the mailing list quite recently, so I would be really interested to have your feedback there as well. From my point of view, as a next step I was maybe thinking to take small Git repositories — maybe just analyze Docker images or something like that. The idea is really to try it and see; I just don't want to put too much pressure on the infra team in the coming weeks. Any expectations on this? So this is LFX Security.
And what it does is provide scanning of images — scanning of Docker images. Does it also scan source code, or...? It scans the source code, but my understanding is that we can also use it to scan the Docker images published on Docker Hub. I still have only basic knowledge, because it does not provide all the features from Snyk — it's a layer on top of Snyk. So maybe Oleg can provide more information on that topic. Just a second, Steven — David is also joining. Hi, David. Hello. So we were in the Hangouts meeting, because apparently it had been added to the calendar invite. In that calendar update, Oleg, I saw a Hangouts link — just ignore it. Okay. So the question is about LFX Security, right? Yeah, the question was: how can we integrate LFX Security with the Jenkins infrastructure projects? The idea here is that we have around 100 Git repositories, we have different kinds of applications, and we also have Docker images. So the idea would be to identify a small group of projects that we could use just to understand how the process works and how we can use that tool efficiently. Yeah. Do you need an overview of what I'm hoping to do to integrate it? If you could do a quick overview here, that would be really great. Okay, sure. Is it possible for me to screen share? Let me just stop sharing. Okay, sounds good. I will do a quick walkthrough here. So I've been reaching out to community members to discuss what's involved with integrating LFX Security version two. It should have come up — here we go. You've probably seen the V1 system that we have; this is our interface for it. If we look for the Jenkins project, or other projects that happen to be here, it provides a roll-up of the vulnerabilities. This is Snyk data; Snyk is one of the two vendors that we're working with.
This one just looks at dependencies and things like that. For version two, we're actually bringing in a new vendor — a tool called BluBracket, which does code secrets. If I log in, this is their interface. It's a bit clunky, so please bear with me, and it's a bit slow. But we're going to bring all the data that they collect, as well as the Snyk data, into our new portal. Going back to Snyk: one of the things we're doing here is re-architecting it. Previously we were scanning all the repositories ourselves, collecting all the data, and then presenting it. For version two, we're actually going to have Snyk do the scans. Yeah, their website is kind of slow — sorry about that. What we're going to do is make Snyk responsible for scanning the repositories. That means we inherit all of the new capabilities they've added recently, which include Docker images and .NET projects — things we haven't brought into our platform yet. So we're going to let them do all the scanning on their side, on their infrastructure, and then we're going to aggregate those results and present them to everyone in our portal. BluBracket, again, is the second vendor; we're working to pull in information from there. What we're going to get from BluBracket is things like passwords, tokens, JWTs, secrets — things like that. This tool is designed to identify any mistakes developers have added to the repository, anything a little suspicious. It might be an AWS ID; it might be an AWS token. Or maybe not — maybe it's credentials for a test environment. Those are going to be brought into the console so that community members and maintainers can review them, and then they can make a decision on whether or not this information is actually suspect.
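At its core, the secret detection David describes pattern-matches candidate tokens in the code and its history. A minimal illustrative sketch, assuming a few simplified rules — the patterns and function names below are my own, not BluBracket's actual implementation:

```python
import re

# Illustrative patterns only -- real scanners use many more rules
# plus entropy heuristics to cut down on false positives.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "jwt": re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (kind, matched_string) pairs for anything that looks like a secret."""
    findings = []
    for kind, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((kind, match.group(0)))
    return findings
```

A real tool would run this over every blob in every commit, which is exactly why it can surface tokens committed long ago, as discussed below.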
You know, one of the things we're going to have to work out with this vendor is: okay, the Zoom password — is that really a secret? Is it okay if we share it with the community? In a lot of cases, like this one here, yeah, it's fine — just click it and you can come into the Zoom. Other cases where it'll help is, say, if a developer accidentally included a Terraform state file — which has happened a lot — which happens to have AWS credentials in it. Those types of things will be brought in with this tool. So we're hoping to get some feedback from our pilot projects, so you can see if this tool is useful. Is it too noisy? Can we maybe tweak how we filter the data so that it brings awareness? The other thing I want to mention briefly, and then I can take questions, is that this tool goes through the entire Git history. So it may uncover or unearth a token that was committed, say, six months ago, perhaps by a developer who has left the project or left the community. One thing we're also hoping to add for the V2 platform on the Snyk side is a remediation link or mechanism, so that when you see a vulnerability, you can click it, which takes you through an SSO login all the way to the Snyk console. From there, you can decide if you want to create a pull request to fix it. A lot of the time a vulnerability just needs a version bump to resolve it — maybe it's a Node.js problem, or some JavaScript library, or a Java logger problem — maybe you just need to bump the library. So we'll take you from our console over to Snyk through SSO, so you don't have to log in again, and then you can decide how to fix it. For BluBracket, we're going to work with them on remediation, because fixing passwords that were committed six months ago is a bit of a problem.
You have to actually go and rewrite a bunch of Git history, and if any of you are familiar with that, it's a pain in the butt. The last point I want to bring up about LFX Security version two is that we're pushing a new strategy for managing access to your repositories. Instead of me creating a personal access token, cloning your project, scanning it with my personal access token, and querying the languages and the commit details and all that, we're going to be using a GitHub App, which some of you may be familiar with. Part of this is to allow us to do this at scale. With the GitHub App approach, we install a bot into your organization, which gives us a finite set of permissions. I've enumerated the permissions here — these are examples of the details that we can access in your org. And what we get is something like 5,000 requests per org, so for each LF community I can have multiple instances of the bots all over the place, and it scales a little better. I've enumerated on this one Confluence page the minimum set of permissions that we think we need in order for the vendors, BluBracket and Snyk, to be able to scan your repositories. So that's the big whirlwind of updates on the differences between V1 and V2 security. It sounds like the Jenkins infra project, being a smaller subset, might be a good candidate. We can choose one or two repos to evaluate this, but we are looking for feedback and early adopters to see if this whole flow works. Are there any particular technologies that you are focusing on? Because the infra repositories have so many different technologies, for various historical reasons. So is there something you're looking for in particular? What was your question specifically? Let me explain that in a different way.
So we have quite a lot of different Git repositories and a lot of different applications — custom applications, Terraform, Helm, Kubernetes, Puppet — I mean, we have quite a lot of stuff in the Git repositories. So the question was: is there a specific kind of project that we should start with? Yeah, good point. Let's talk about each of our vendors. Snyk is mostly focused on software projects, not a lot of CI/CD tooling like Terraform and Puppet and Chef. They can probably look at something like Chef, look at the dependencies it pulls in, and then report if there are known vulnerabilities. So it's just going to look at the dependencies of those tools: if they're in a manifest file, then those are good candidates that Snyk can easily scan. For BluBracket, the code secrets, it doesn't matter, right? It's just looking for keywords and passwords and things that may have been committed in your Git history, so it'll work regardless of the technology. I also wanted to mention that we've been reaching out and got a demo last week — I think there was a vendor that was specifically looking at Terraform data. I forget the vendor name — it was something like Terraform State — but they were focused on applying policies to Terraform to see if you are complying with best practices. They had a list of policies that you could apply to a repository, and it would evaluate whether you're in compliance. Of course you can tweak the compliance rules, but it actually had a lot of intelligence on how you are using EC2 and all the other AWS resources, and whether yours follows the recommended way of doing things — so it makes recommendations. Longer term, that's what we're hoping to bring in: more CI/CD and infrastructure tools within this platform. Okay, that sounds great.
But we also have custom applications — let's say, you know, the account app and things like that. So we definitely have places where we could test and learn a little bit with the tool. Regarding the GitHub App, I think what we can do for the next step would just be to allow specific Git repositories — so not the whole organization, but just, let's say, three or five repositories. Yeah. In that case, would it be possible to have access to the documentation that you just showed about the permissions that you need? Yeah. So when you click on the app — what I would probably do is first agree with you on how many repos you want to bring on board. Step one for me is to onboard it and add some entries in my database. Step two is I give you a link to the GitHub bot. You click on it, and you decide if you want all the repos or, just like you said, only a subset. You can review the permissions at that point. I can also share it with this group here — I don't know if the links will carry over, but I can drop this in the Zoom chat, or maybe share it on Slack somewhere. Otherwise, I think we have your email address somewhere; that would be fine. Yeah, exactly. So I would be really happy to test that in our current workflow. And then — so I've reached out to the vendors, and I think this is the minimum set, so I'm also negotiating with them, asking them to confirm the links that I'll share with you by email. I've hyperlinked the exact API calls that are going to be invoked, for example for reading the email or doing a webhook. There's actually a very specific list of the API calls that are used. So you can review that and decide if it's something that you're comfortable with or not. Okay. But if we can do that per repository, that would be okay. Yeah.
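For context, the GitHub App flow described above is the standard one: the app proves its identity with a short-lived signed JWT, then exchanges it for an installation token scoped to the repos and permissions approved at install time. A hedged sketch of just the claims step — the app ID is made up, and the signing/exchange calls are noted in comments rather than performed:

```python
import time

def app_jwt_claims(app_id, now=None):
    """Claims for the app-level JWT (GitHub caps its lifetime at 10 minutes).

    The claims must then be signed with the app's RS256 private key, e.g.
    with PyJWT: jwt.encode(claims, private_key, algorithm="RS256"), and
    exchanged at POST /app/installations/{installation_id}/access_tokens
    for a token that carries only the permissions the org admin approved.
    """
    now = int(now if now is not None else time.time())
    return {
        "iat": now - 60,      # backdate slightly to tolerate clock drift
        "exp": now + 9 * 60,  # stay under GitHub's 10-minute cap
        "iss": str(app_id),   # the GitHub App's identifier
    }
```

The resulting installation token is what gives the "finite set of permissions" mentioned earlier, rather than the broad reach of a personal access token.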
So when you set it up, you can just select repositories, or do all of them. Yeah, that sounds awesome. Any other question from the other folks on the call? Otherwise, thanks, David, for the presentation. And yeah, we're obviously looking for feedback on the security tool and the experience. Ideally, instead of me and you talking, we would direct you to our console, give you permission, and you could review it all point-and-click and just do it yourself. But it's early phases — we're still trying to get the admin console going. For now, we'll just do it this way. And if my understanding is correct, the V2 is not yet available? The what's not yet available? The V2 version. Yeah. Our developers are working on, for example, the BluBracket representation: for all the data that's coming back, a developer is taking the BluBracket API and displaying it. I think in the coming weeks we're going to be adding that. And for you guys who have onboarded, I want to show that to you at some point and get you to start looking at it. That's where you can come back and say: this is good, this is bad, I don't like this, maybe you should change it. By being early, you can help influence what it looks like — maybe we need to add filters and that sort of thing, so you can tell me. Okay. Thanks a lot for your time. Should I drop now, or should I stay? I think if we don't have another question, you can drop if you like; otherwise you can stay with us as you wish. We'll just talk about specific topics for the Jenkins infrastructure, so feel free to drop off if you prefer. Okay, thanks. Bye. Thank you. So yeah, regarding the infrastructure, I actually wanted to ask what additional permissions or agreements we need to start that.
My understanding is that Olivier prefers to go ahead and try out some repositories, as long as we protect confidential content within the Jenkins infrastructure organization. I think we could do that. So yeah, definitely, I would like to move forward with that tool. I would just decide on what we want to analyze first and select a small set of Git repositories. I just don't want to introduce too much noise, because otherwise we'll just ignore the results. So now the question is what to pick, because my understanding is the V2 is not really ready yet, so we only have access to Snyk. Yeah, to Snyk. In that case, it sounds to me like analyzing Docker images is maybe better. Docker images are a bit complicated because of our architecture, though. Again, I'm not sure you will be able to use a standard GitHub integration there, because we mix multiple Docker images within the same repository, particularly when we build them using our pipelines. So I'm not sure whether it will work. Same for most of the Jenkins organization: we are waiting for V2 to specify allowlists and denylists for CVEs. Before that, it doesn't make sense to adopt LFX Security in the main organization, and we are waiting for Snyk to deliver that feature for testing. That's why we talked about the infrastructure: it seems to be one of the most straightforward ways to evaluate something if you want to start now. Oh, just to be sure: I think you said that they're assuming we don't have more than a single Dockerfile, for instance, in a repository — that it's one image per repository. So our Docker repositories remain to be evaluated. But yeah, in Snyk there are two ways. One is the GitHub integration, which is relatively simple, and you can also use the APIs for anything else you may have. And the specifics of LFX Security, in its current state, are that you have no access to those APIs.
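The "one Dockerfile per repository" assumption is easy to check mechanically before onboarding a repo. A small sketch, assuming only the usual `Dockerfile`/`Dockerfile.*` naming convention rather than anything specific to the jenkins-infra repositories:

```python
from pathlib import Path

def find_dockerfiles(repo_root):
    """Return all Dockerfile paths in a checkout, relative to its root.

    More than one result means the 'one image per repo' assumption breaks
    for that repository, and a per-Dockerfile scan would be needed instead
    of the plain GitHub integration.
    """
    root = Path(repo_root)
    return sorted(
        path.relative_to(root)
        for path in root.rglob("*")
        if path.is_file() and path.name.startswith("Dockerfile")
    )
```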
And until they expose that in LFX Security — for example, what we cannot do is submit a plugin or a Jenkins core bill of materials for analysis in a standard format. What do you think about doing that in two steps: first try the GitHub integration on a simple Docker image? We have a bunch of docker-something repositories which are public, so we could try one and see the output from the tool before jumping ahead and integrating further. Yeah — maybe I misunderstood, because I had interpreted this as the Jenkins core and agent Docker images. You interpreted exactly what I was asking. So you understood my question, but I think Damien has a very good insight: there may be narrowly focused repositories we can use first, before we use the broad ones. For instance, this one — this is an image we use internally — would be a first step. The final target of being on the Jenkins core and agent images is clear, but as a first step I propose that we try the one I've pointed at. That's an image we use to run Terraform on the CI: one image, one repo, one Dockerfile. And if we want to go one step further, we could also analyze the Jenkins LTS or the Jenkins weekly Docker image, because it's a simple Dockerfile, and we have a double advantage there: we may catch errors from the Jenkins upstream that we maintain as well. So by carefully selecting the right Git repositories, we could also benefit the Jenkins CI organization. I can take that initial task on those two repositories, targeting the GitHub integration, if someone can just point me to the instructions, credentials, or whatever requirements on the LFX side — if that's okay, unless someone else wants to take it. David is ready to create a new organization for us, because of the specifics of the standard integration: you can have one GitHub integration for one single organization.
And basically, it just means having one more GitHub organization. For us, I do not think that's any kind of concern, because there will be little to no overlap between Jenkins Infra and Jenkins CI in terms of configuration. So I think we could start with Jenkins Infra and then figure out the rest later. Something I'm wondering about is whether it would be beneficial to analyze repositories like the plugin site API or the jenkins.io website, because in those Git repositories, my understanding is that we are mostly interested in credentials that we might leak, or something like that. I'm just wondering about the benefit of using such tools there. Well, hypothetically, for the plugin site, for example, there might be an XSS somewhere in a component we use. Yeah, I know that it's a long shot, but in theory it could be. And for us, again, it's an evaluation for now. So for me, it would be useful to onboard as many projects as we relatively safely can, get some results, and see whether it produces complete crap or something potentially useful for this type of repository — and then iterate based on that. Because, for example, if it produces, let's say, five warnings for the plugin site, and these warnings are reasonable things for Dependabot to pick up — okay, great, this tool works, let's keep it. If it produces 10,000 false positives, like for Jenkins plugins, then probably we shouldn't keep it, but it was worth trying. So it sounds like we have an agreement that we are all interested in moving forward. Another question is who will drive that project. We don't have to take a decision right now, but I think it would be a nice opportunity if someone is interested in learning a little bit more. Obviously, because of the topic, I would prefer to have someone familiar with the Jenkins project, but I think it would be a great opportunity to delegate a little bit. Tim, what do you think about this project? You were silent during the discussion. I don't know.
I've just been burnt by Snyk multiple times before. I'm interested to see how this goes, but I don't want to be too involved. I've had one very painful interaction recently — I tried evaluating it before and it just didn't fit for us. I logged into it and it basically opened 200 pull requests under my account and completely spammed my inbox. I had to write a script to close them all — it was horrible. I literally had to write code to do it, because I couldn't find any bulk way; there was no bulk way to close pull requests, and I was getting notifications constantly. I have a t-shirt about false positives — I believe from Snyk, or maybe from someone else. So next meeting I will be wearing this t-shirt, I guess. I'm really afraid that it will analyze the whole history of the Jenkins infrastructure to detect passwords that were leaked and things like that. We have had to fix that kind of thing before: it wasn't in the current history, but we opened up one Git repo and it had an old Slack token in it. GitHub tends to be really good at these things. Now somebody will require me to install git-secrets on my laptop so that my Git checks take 100 times longer to complete — fine. Yeah, no finger pointing at all. Sounds like the meeting is over in two minutes, basically; we won't have time to cover the other topics. Is there a specific one that you want to briefly cover now? A brief update on the AWS sponsorship: we might have good news on the AWS sponsorship, but it still needs to be concluded. I was keeping that news for once we have more information. You asked for a really quick update, and that's exactly what I can say. Okay. Thanks, Oleg, for starting that. So Oleg contacted Amazon to see if we could renew the sponsorship program, and it seems to be going well. But yeah, we still have to sort things out and fill in some paperwork.
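For reference, the kind of bulk-close script Tim describes needing is a few lines against the GitHub REST API (`PATCH /repos/{owner}/{repo}/pulls/{number}` with `{"state": "closed"}`). A minimal sketch — the repo names, token handling, and the `close_all` helper are illustrative, not an existing tool:

```python
import json
import urllib.request

API = "https://api.github.com"

def close_pr_request(owner, repo, number, token):
    """Build the REST call that closes one pull request."""
    return urllib.request.Request(
        f"{API}/repos/{owner}/{repo}/pulls/{number}",
        data=json.dumps({"state": "closed"}).encode(),
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
        method="PATCH",
    )

def close_all(owner, repo, numbers, token, opener=urllib.request.urlopen):
    """Close every PR in `numbers`; pass a custom opener to dry-run/test."""
    for number in numbers:
        opener(close_pr_request(owner, repo, number, token))
```

Listing the open PR numbers first (via `GET /repos/{owner}/{repo}/pulls?state=open`, paginated) is left out for brevity.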
One note on JEP-229: I've made progress on jenkins.io, documenting and recommending the JEP-229 process for automating plugin releases, moving away from people releasing from their laptops. I think Daniel raised a concern that it puts more pressure on the infrastructure, and I've documented two processes where, if different levels of infrastructure are down, you can work around it. Are there any concerns about the recommendation? I do have concerns. We still have a problem with releasing Jenkins components not from Jenkins. Basically, we'd be saying that if you want to do continuous delivery for components, we would rather you use GitHub Actions than Jenkins. I understand that it's not exactly what we say, but it can be perceived that way. Before we recommend the JEP-229 process, I think we need to explicitly agree on that. We had discussions at the contributor summit, but I believe they were inconclusive so far. My understanding is that it would be difficult to use Jenkins instead of GitHub Actions. I definitely share your concern; I would also prefer to use Jenkins instead of GitHub Actions. Okay, but we have multiple ways forward. Either we keep, let's say, JEP-229 in preview for now, until we fully agree that it's something we want to widely adopt — in that case we keep this topic for down the road, but at the same time we won't get wide adoption of the process for now. Or we do the opposite: we say that GitHub Actions is now the future for continuous delivery of Jenkins components, and it probably gets wider adoption, but that sounds a bit weird, to be honest. What I find weird is that I don't think this is something we should settle in the Jenkins infra meeting, because — I mean... It would be a governance meeting decision, I agree. Definitely, because what we can do in the Jenkins infrastructure meeting is identify ways to improve and to help.
So if you want to move forward and implement or use JEP-229, and we have dependencies on infrastructure, maybe we can improve, let's say, the monitoring around it — those are the kinds of things that we should discuss in the Jenkins infra meeting. I definitely think that's about the implementation. My proposal would be to get the documentation from Tim merged, put a work-in-progress preview notice on it for now, and list it in the navigation as a preview, but not switch it to the default recommendation until we reach agreement. What do you think, Tim? I don't know — I don't think it's a governance board thing; it's part of the JEP process and the mailing list. Yeah, so it's not the governance board deciding, it's the Jenkins community deciding. We have pending updates to JEP-1, but yeah, I think that we should explicitly reach consensus in the community on this topic, and once we reach that, then it can be applied. Do you want to raise that, since you're the one raising the concern? Or I'm happy to do it if needed. For me, the process is there and it is so much better than the process that we currently have — it's so much easier. Nobody argues with that, and whatever we do, we need to keep this automation. The question is whether we want to keep it on GitHub Actions as a final implementation, or whether we just use it as a prototype and then apply some magic later. Again, I don't think we should discuss this much here, because it's definitely not the best place. So I just propose that we continue that specific discussion on the mailing list, where we have more people. Just to be clear, I am not trying to sidetrack JEP-229, but I would like to ensure that we have consensus before we make it a default recommendation for users. So yeah, on my side, I just prefer to move step by step: if we already have something working, we can still improve the situation later.
But yeah, again, since we are running over the meeting time, I propose that we stop the meeting here and continue the discussion on the mailing list. Thanks everybody for your time, and see you on IRC. Bye bye.