And so I'll take notes and rely on others to chat. Perfect, just started the recording. So we are ready to start. Hi, everybody. Today's agenda: the first announcement is that because we are doing a Jenkins core release in the coming days, please do not modify anything related to release.ci. So hold off on any PR that could affect that service.

The second thing that I noticed today is that archives.jenkins.io is really unreliable at the moment. I still have to find some time to investigate what's wrong. What I discovered today is that it affected the monitoring check that verifies we can download packages. Cara made a PR about that: the idea is to use get.jenkins.io instead of archives.jenkins.io to monitor whether we can download packages, which obviously makes sense, since that is the service used by everybody. (A sketch of such a probe appears at the end of this discussion.)

On to the next topic: Garrett and Damien made some nice improvements regarding the way... Sorry, go on.

Just on archives.jenkins.io: Mark, did you end up reverting your change that made the pipeline not fail if archives was down, or did you leave it commented out or disabled?

Good point. The change I made for archives.jenkins.io has been reverted, but it worked last week for the 2.274 release. So I'm not sure how to answer that, Tim. That's a good question. There's probably a risk that it could cause the pipeline to fail.

Yes, a risk that the 2.275 and 2.263.2 release pipelines won't be able to upload to archives. However, Olivier, those are staged releases; Daniel announced that they are security releases. Since they're staged, I assume they are not yet uploaded to archives.jenkins.io.

Yeah, which means that we must be sure that archives.jenkins.io is really up when we publish the packages, because it may fail the publishing script.

That's a really good point. Or we re-add Mark's change that disabled that step, because it doesn't really matter if it's skipped.

Oh, yes. That's a really good point. Well, Olivier, do you want to have a conversation with Daniel Beck to see which is best for him?

I think it would be best for now to disable... Sorry. Oh, Daniel. Hello, Daniel. I missed that you're here. Thank you.

I didn't want to interrupt the talking, but now that you're mentioning me... So do you have a preference there? Since those releases are being staged, they aren't immediately dependent on archives.jenkins.io. Should we disable the attempt to publish to archives, given that it's currently not reliably online?

Well, I may have missed the beginning of this topic, so what is archives used for exactly? What we need is that once we publish a release, it is made available to everyone.

Yes, archives.jenkins.io is used as a fallback. In the release script, when we run sync.sh, one of the stages uploads the packages to archives.jenkins.io. So it's not directly used for distribution, but it may affect the sync.sh script. And basically what's happening with that service is that, for some reason, from time to time it's really slow, and so we get timeout issues. People do not download artifacts from there, but as part of the release process we upload our artifacts to that machine. So it's probably better to just disable that upload for now.

So what's currently behind the fallback host name?

The fallback host name is get.jenkins.io, in fact. When you do the release, we upload the artifacts to an Azure file storage, and fallback.jenkins.io serves that same content. So there's nothing to use in archives right now. It's just a fallback, a fallback.
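As an illustration of the kind of availability probe discussed above — this is a minimal sketch, not the actual monitoring code, and the artifact path on get.jenkins.io is an assumption:

```sh
#!/bin/sh
# Hypothetical download probe; the real monitoring check may differ.
# Assumption: /war-stable/latest/jenkins.war is a stable, always-present path.
URL="https://get.jenkins.io/war-stable/latest/jenkins.war"

# The failure mode seen on archives.jenkins.io is slowness, so the probe
# enforces a hard timeout rather than only checking the HTTP status.
if curl --fail --silent --show-error --location --max-time 60 \
        --output /dev/null "$URL"; then
  echo "OK: package download succeeded"
else
  echo "DOWN: package download failed or timed out" >&2
  exit 1
fi
```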
Okay, so we do not even need archives.jenkins.io before all the mirrors are synced, because that is occasionally a problem. I think that was, Tim, what we worked on with the plugins mid last year: when the mirror distribution was delayed, we made sure a release was immediately available from somewhere. And this is not implemented with archives, correct?

For the plugins, maybe, yeah.

So the Azure file storage is the fallback right now?

When you look at the script, sync.sh, you see that we upload packages to multiple locations: we upload them to the Azure file storage and we upload them to archives. The thing is, in the past archives has had a limit on its network bandwidth; it was really slow. And the purpose of archives was really to be a fallback service: archives contains every artifact that we have published since the beginning of the Jenkins project, while most of the mirrors only contain packages for the last year, something like that.

Okay, so it's not important for the distribution?

No.

So if it can be safely done, my preferred approach would be to remove that from whatever scripts do the uploads, to ensure that they finish with everything else that is actually needed.

Yeah, it's safer in the current situation. Who can handle that? Mark?

So I think, yeah, what that means is that it just reinforces that we should reapply the change I had done a week or two ago to temporarily disable the upload. Tim mentioned it earlier. And I think that's just it: we need to apply again the skip-archives change from two weeks ago, when it was offline as well.

Great, okay. Do you want me to submit the pull request, and then Daniel, you and Olivier review it?

Yeah, if you can submit the pull request, that would be nice.

All right, I'll do that later today. It's going to be several hours yet. Is that okay, that it's delayed several hours while I'm at other meetings?

Yeah, that's fine. My only question is, I'm not sure I understand which fix is needed. I thought that you had to manually modify the scripts.

No, no, no, I pushed a change into the repository. It wasn't a manual change. It's really tracked as a PR.

Okay, but you pull it on the machine though, right? Okay. Okay, I see which repository you mean. Okay. Thanks. Any last comments on this topic? No.

The next topic is the way we build the container images. As I was saying, Garrett and Damien made some progress here. The reason why they started working on that is that we are moving more and more jobs to the Jenkins instance running on Kubernetes, because it's faster to provision nodes on Kubernetes. But the challenge is that we don't have access to the Docker daemon there, so we had to modify the shared library slightly. I can show you what it looks like.

So basically, what we are doing right now is: if you want to add a new Docker image for our organization on the Jenkins CI, you just create a git repository named docker- plus whatever name you want to use. Inside it, you create the Jenkinsfile with the right function, and then it will automatically create a pipeline job on infra.ci and run some linting tests; we're using hadolint. So we can look at the results of those hadolint tests.
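As a rough sketch of what the Jenkinsfile in such a docker- repository might contain — the library name and the step name here are assumptions, since the actual shared-library function isn't named in the discussion:

```groovy
// Jenkinsfile at the root of a hypothetical docker-myimage repository.
// Both the library name and the step name below are assumptions made
// for illustration; the real shared library may differ.
@Library('pipeline-library') _

// A single shared-library call drives the whole pipeline on infra.ci:
// hadolint linting, the image build, and the push to Docker Hub.
buildDockerAndPublishImage('myimage')
```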
And then it builds a Docker image using both the branches and the tags, which means that... oh yeah.

Is that just on infra.ci at the moment, or can it do trusted.ci as well?

So let me show you. Mark, can I share my screen? It's just doing it on infra.ci for now.

Yeah. It has access to Docker Hub, so we might be able to... I've got a couple of images that ideally we'd move over to that.

So ideally, one of the things that I see here is that we could move many of the Docker image builds from trusted.ci to infra.ci, in order to increase the visibility of those jobs, and it's just easier to manage.

Yeah. So if I show you what it looks like here... wait, it's in a new browser. No, it's already here. So where is it... here, sorry. Do you see my screen?

Yeah.

So we now have a new folder, which is docker-builds, and if you look at it, we right now have a bunch of Docker images. As I said, as long as you create a git repository whose name matches docker-, it will automatically be added here. And if you look at one, we can build an image based either on a branch or on a tag. If we build on a branch, it pushes the latest tag from the master branch, and with tags we can also trigger a build for a specific tag.

Let's take this one, for instance. The idea was to use Release Drafter to generate new versions of the container images, so in this case the version would not match the versions of the different components inside the Docker image. And we run some linting tests during the build. What is interesting here is that we also have the view of the hadolint warnings, so you can see which errors were found in this Dockerfile. If you click on one of them, we see that this error is located on two lines. If I look at the Dockerfile, on line seven it suggests that we run apt-get install with --no-install-recommends, so that we only install the packages that we need. (A sketch of that kind of fix follows below.) So yeah, that was a nice improvement.
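For reference, the fix hadolint is suggesting there looks roughly like this; the base image and package names are made up for illustration:

```dockerfile
# Hypothetical Dockerfile excerpt; base image and packages are illustrative.
FROM debian:buster-slim

# --no-install-recommends avoids pulling in optional dependencies, which is
# what hadolint flags on a bare "apt-get install"; cleaning the apt lists
# in the same layer keeps the image small.
RUN apt-get update \
    && apt-get install --yes --no-install-recommends \
       ca-certificates \
       curl \
    && rm -rf /var/lib/apt/lists/*
```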
And then it automatically pushes the images to Docker Hub. So that's the current situation.

The manual procedure right now, when we need to add a new Docker image, is to create the Docker Hub repository in advance.

Is that actually needed? I thought that, as an owner of the org, it automatically creates the repository when you push.

On Docker Hub? I thought so. Oh, that would be interesting. At least under my personal account, I think it does that. I'm pretty sure I can just...

Yeah, so that would be a nice improvement. It does create it automatically for you, but as a public repository with the default configuration. So it depends on what we expect; we are not totally sure. By default, if you expect it to be public with the default settings, it's okay. Otherwise, you must create it beforehand, before the Docker build needs to push.

Okay, that's interesting. Anyway, we don't have private images and we don't have credentials inside Docker images. Yeah.

Now, the question is more about all those Dockerfiles we're building where the Dockerfile sits next to an application. The example that popped into my mind is the account app, where we have the Java code and the Dockerfile in the same repository. So we may have to add the job here manually. What I've been thinking is that instead of adding jobs here based on the git repository name, maybe we could add them based on, let's say, a git repository label or topic.

Yeah, there's a really nice pull request open on the GitHub Branch Source plugin for doing it by topic. Not really efficient, though. So maybe that will help, yeah. I mean, it's just a way to visualize all the builds, but in this case we can clearly see them. It would be nice to have every Docker build in here.

Just to add something: there is an upcoming pull request on the shared library that will allow running some tests between the build and the deploy parts when building a Docker image. These tests will only use a driver named tar, so they can only check for the presence of files and for file contents inside the Docker image. It won't use a Docker engine, because of the security risk that would cause inside Kubernetes, but it will let you document and work test-driven on a Docker image: you say, I expect to have this file, with these properties, and eventually this content, inside my image. So then you have a kind of non-regression testing for Docker images. (A rough sketch of such a test file follows at the end of this topic.)

If we want to go further, we would need a set of agents able to spawn the image on a VM, because if you try to run Docker in Docker, even with rootless available, you still need CAP_SYS_ADMIN rights on the underlying host. So the best solution today, if you want to run some Docker workload, is still a virtual machine dedicated only to that, which would imply stashing and unstashing the tar image before pushing it to the registry. The goal here is to avoid pushing an image to any Docker registry without having run some tests. Ideally, I would like to then add security scanning of the static image, but let's already have some tests.

So you have hadolint, which helps you parse the Dockerfile and find good and bad patterns. Then you build the image with img, so no Docker engine is involved. And then we use Google's container-structure-test, which only looks inside the image. The tests run on each pull request, and if you are on the main branch, it then deploys automatically.

Security scanning shouldn't be too hard to add, I thought; it doesn't need a Docker engine for that. But yeah, that sounds really good; that's another improvement that would be nice.
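To make that concrete, here is a minimal sketch of what such a test file could look like in the container-structure-test configuration format; the file paths and expected contents are invented for illustration:

```yaml
# Hypothetical tests.yaml for container-structure-test.
# Paths and contents below are invented; a real image would assert on its
# own files. Run with something like (exact flags may differ):
#   container-structure-test test --driver tar --image image.tar --config tests.yaml
schemaVersion: '2.0.0'

fileExistenceTests:
  # Fails if the application jar is missing from the image.
  - name: 'application jar is present'
    path: '/app/app.jar'
    shouldExist: true

fileContentTests:
  # Inspects file contents without starting a container, which is
  # exactly what the tar driver allows inside Kubernetes.
  - name: 'entrypoint sets memory options'
    path: '/app/run.sh'
    expectedContents: ['-Xmx']
```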
Any last suggestions or questions? No?

So the next topic is about mirrors. We are starting to need more mirrors. We have had issues with Serverion over the past weeks; it has been unreliable. So now I would like to write a blog post to promote this and ask contributors to provide mirrors. Basically, what I did is improve the Helm chart that we have right now so that it also installs an rsync server on the mirrors. The reason why we need rsync is that mirrorbits uses rsync to gather metadata about the files and to know whether the mirror is up or not. So we just need the rsync server to be reachable from mirrorbits. Everything is in place; I just have one issue that I still have to identify, which is why the mirror is marked as down even when it's up. That's the missing part before promoting that component.

So, Olivier, I've considered using some Oracle infrastructure as a possible mirror, just as an experiment. Would you be willing or interested if I were to offer one? It won't be in what I would call an interesting locale, because if I remember correctly they don't have a data center in India or in Australia, but it could be Phoenix, Arizona, in the US. Is that interesting? Or is it better that we look outside for this initial experiment? Is it already mature enough that you don't need me to test-drive it?

So basically, right now we have a Helm chart. If you have a Kubernetes cluster, it's definitely super simple to install, because the way the chart works is that it starts multiple containers: one container that serves the content of a specific directory, just an Apache daemon; a second container, an rsync daemon, that serves the content of the same directory over rsync; and a third container, a cron job that regularly pulls data from get.jenkins.io, so from one trusted mirror specifically. (A rough sketch of that pull follows at the end of this topic.) The first time you deploy the chart, you need to wait the time it takes to download the packages, so you have a local copy of one of the mirrors, and then it can be used. If you don't have a Kubernetes cluster, if you just have a virtual machine, you can reuse the containers on your machine, but you will have to do some manual operations; I don't have a ready-to-deploy thing for that case.

Well, that's great, because I would want to do it in Kubernetes. So I missed one in my notes: the container with rsync, a container that pulls from get.jenkins.io on a schedule... what was the first container?

So you have rsync, Apache, and a cron job.

Okay, got it.

And the idea is really to have it work independently. The rsync daemon is a read-only server; it just serves data. And in the current configuration it only allows connections from specific locations: the rsync endpoint is really designed to be used from get.jenkins.io.

Oh, okay.

Which means that if you deploy it on your cluster, you won't be able to test it from your machine. And we have dedicated Docker images for this, built on the Jenkins project's infra.ci and published on Docker Hub. So everything should be there.

And did we do anything about the Serverion mirror being unreliable? Do we need to notify them of that? Or have we just given up, since we can't trust it?

I think I have to send them an email to report the issue. I think the problem is that they just have too much traffic: they are not only providing data for Jenkins, they also offer a mirror for Debian and other projects. So my guess is that we simply ask too much of them. I think it will get easier the day we provide more mirrors in the same area.

Yeah, they did send an email once complaining that we turned them off, and I think we got to turn them back on at some point. So they did notice at least once.

The problem is, and it's the same issue that we have on our own infrastructure, that you always have moments in the day with a lot of traffic, typically the overlap between Europe and the United States in the morning, I mean, morning for the United States; you always have small peaks at that time. It's exactly the same thing that happens on the update center. And if you only have one mirror, and in this case I think we only have two or three mirrors in the United States, the problem, which is hard to detect, is that the mirror is there, it's working, and then for 15, 20, 30 minutes the mirror is down. And during that time nobody is able to, let's say, install plugins, for instance.
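As an illustration of the pull that the cron container performs — the rsync module name, destination path, and options here are assumptions, not the actual chart values:

```sh
#!/bin/sh
# Hypothetical mirror-sync job run on a schedule by the cron container.
# Assumptions: get.jenkins.io exposes an rsync module named "jenkins",
# and /srv/mirror is the directory shared with the Apache and rsync containers.
SRC="rsync://get.jenkins.io/jenkins/"
DEST="/srv/mirror/"

# --archive keeps permissions and timestamps; --delete removes files that
# disappeared upstream so the mirror stays an exact copy of the source.
rsync --archive --delete --partial --timeout=300 "$SRC" "$DEST"
```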
And so we just have to be sure that we have enough.

Yeah, so their mirror is actually in the Netherlands, and one of the problems is that it is incorrectly identified as being on the East Coast of the United States, and that causes all sorts of other issues. So for me, leaving it offline for now is fine. It's healthier for our East Coast users: they're not asking for data from the Netherlands that could be served from New York.

Any last question regarding the mirrors infrastructure? No?

So, next topic; we are running over time, so I'm trying to go quickly. We have a few PRs that affect infra.ci and release.ci, so I would like to plan a maintenance window next week with Garrett and Damien, where we merge those PRs manually and upgrade everything, so we are sure that we are running the Helm v3 charts for Jenkins, plus a few other PRs. Basically, what that means is that we need to post an announcement on status.jenkins.io to say that the services may be down, and we also have to announce it on the mailing list. That's one of the things I have to work on with Garrett.

The two next topics: one is about acceptance tests. I would like to use the Jenkins acceptance tests to test the Jenkins Docker images. I found two git repositories, but I'm not sure which one I should use. It's not urgent for today, since we are already running over time, so I propose to move it to the next Jenkins infrastructure meeting.

And the last topic, the highlights for the new year post, is on Lake, but Lake is not here. I don't know, Mark, if you have some input on that topic. Or just share information; you wrote a document, I guess.

So, sorry, here it comes. There is a new year's blog post due to the Continuous Delivery Foundation on the 13th, so tomorrow, and this is the draft of that post. If we have specific things in infrastructure that we would like to highlight in that 2020 summary, we should propose them today. I've put some things about infrastructure in here already.

Like releasing Jenkins core?

Exactly, like the Jenkins core release automation; all sorts of other infra-related things are in here already. But please feel free to review it, and if you think we missed something, this is the day to propose a change, because it's due tomorrow.

OK, I'll try to look at it. So, any last thing you want to discuss? Otherwise, I guess we can finish the call here. I'm going to count to three: one, two, three. Then thanks for your time, everybody. See you on IRC, and see you in one week. Bye-bye.