Hi everybody, let's start this new meeting. We don't have a lot of topics on the agenda, but first a few things that I did last week. The first one was to upgrade JIRA. We had to renew the JIRA license last week, and now we're running the latest version. This is something we should do more systematically, because it's quite easy to upgrade within the same major version. We were affected by security issues, but that's fixed now. Otherwise, I think we can just go through the topics one by one. The first one is the IBM infrastructure. Jim, do you want to explain a little here? Sure. I gave you and Mark access to the Z machine, which I think you just got the other day, and to the POWER machine, which you've had for a couple of days now. Hopefully you can figure out adding SSH keys on the Z box, since my colleague didn't do that; he just gave you a generic SSH key. The major issue we're hitting, I guess, is using Puppet to manage the infrastructure. It looks like Puppet stopped supporting builds for s390x sometime in 2018. They still support POWER, but in a limited capacity: it looks like it only applies to Ubuntu 16.04, which does not help us, since the image you got for POWER is Ubuntu 18.04. So I guess the discussion we need to have is how we're going to proceed with configuration management on those machines. In the email you mentioned maybe wanting to use Ansible, and to address any concerns: Ansible works fine on s390x and PPC; that's what we use to manage all our infrastructure over here, but Puppet does not.
So how do you want to proceed? We could port Puppet, but that would require a lot of additional work and might not be entirely sustainable: if you ever need to update your main Puppet master, the port might fall out of sync. I don't use Puppet that much, so I can't speak to all the ins and outs, but I know it would require a good deal of work. The reason I mentioned Puppet is that we already have the Puppet code to configure the Jenkins agents, but in the end the idea is just to simplify the management of those machines. So if Puppet is not correctly supported, I'm totally fine using Ansible or any other tool. If we decide on Ansible, we just need to create a new Git repository with the Ansible code used to configure those machines. Mark is doing some testing right now with that infrastructure, and since we don't have a lot of configuration for those roles, it should be really easy to put in place. Mark, are you familiar with Ansible at all? I have no experience with Ansible whatsoever, and it shows, because I did all my configuration with shell scripts. You're welcome to do whatever works best, and I trust your choices. Yeah, absolutely. I wrote a lot of Ansible code in the past. So if you can at least put together a script that we can use to configure those machines first, then once I have some time I will write the Ansible code, so that if needed we can redeploy and reconfigure those machines in the future.
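To make the compatibility point concrete, here is a minimal Python sketch of an architecture check that a bootstrap script could run before picking a tool. The support table is a simplified assumption based only on what was said in the meeting (Puppet dropped s390x builds around 2018; Ansible runs fine on s390x and ppc64le); the function name and the table itself are hypothetical, not any tool's real support matrix.

```python
import platform

# Hypothetical support table distilled from the discussion above; real
# platform support should be checked against each vendor's documentation.
SUPPORTED_ARCHES = {
    "puppet": {"x86_64"},
    "ansible": {"x86_64", "s390x", "ppc64le"},
}

def usable_tools(arch=None):
    """Return the configuration-management tools usable on this architecture."""
    arch = arch or platform.machine()
    return sorted(tool for tool, arches in SUPPORTED_ARCHES.items()
                  if arch in arches)
```

On the IBM Z image, for example, `usable_tools("s390x")` would report only Ansible, which matches the conclusion the team reached.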
The main reason I really think it's important to have configuration as code for a virtual machine is that it allows other people to modify it. Say they want to install a different JVM version, or change an SSH key, or whatever, later on. But because the script is not that big for those machines, I think as a first iteration we can just configure the machines with the right SSH keys and then work on the Ansible code later. Yeah, 100%, I'm all for configuration management. I definitely know the benefits, and I applaud you for doing it. And sooner rather than later we'll see the use case for this Ansible code: the POWER server we gave you isn't the long-term solution, it's just a temporary server. I have a call on Friday with legal to get the terms-of-use sheet for the POWER machine for you to sign off; you signed off on the LinuxONE one, but we'll need the same for POWER. Okay. So it would be perfect if we develop some Ansible playbooks and can then, at a snap of our fingers, redeploy them all on the new server; that would be amazing to see. Is there anything more to say on the whole Puppet-versus-Ansible question? No, just that if Puppet is not correctly supported on those architectures, then we shouldn't waste our time. Yes, okay. We have experience getting Puppet to sort of work; a couple of clients were interested in Puppet too, but the big problem is the dependency hole you go down. Puppet is a colossal machine. I'm not saying Ansible isn't, but we looked at it and started working with it.
Porting Puppet would require going to pretty much every open source community under the sun and asking, "hey, can you start producing s390x builds, please?", which is a goal of mine, but I don't really have time for it right now. So I think not wasting our time and hopping over to Ansible would be a good way to go. Okay, so once I have access to a script, say to install OpenJDK or whatever, I can turn it into Ansible code. Yeah, and we might even be able to reuse AdoptOpenJDK's Ansible scripts; I haven't personally looked at them, but I believe they use Ansible to manage their Jenkins infrastructure and reconfigure their workers, so they might have playbooks and roles we could use for installing the JDKs and all that good stuff. Can you have a look at those? Yeah, I'll post a link in this document; I think I actually have the tab open somewhere. I'll do that after the meeting. Okay, perfect. The last thing Mark was just talking about on IRC is compilation on Z being a little slow. Mark, you ran into that issue the other day, right? Yes, and while you've been talking I've been compiling, and I've seen evidence that supports exactly what you said, Jim. It looks like the most compelling JDK to use on Z is OpenJ9: it feels dramatically faster than either AdoptOpenJDK with HotSpot or the JDK that's bundled with Ubuntu on Z. So it's pretty simple in this case: use OpenJ9. I'll run a bunch more tests, but the execution performance is visibly and dramatically different with OpenJ9. Yeah, and there's actually an issue I can link you to on AdoptOpenJDK's GitHub page where someone ran into a similar situation.
They were using plain HotSpot with no JIT (the JIT being the just-in-time compiler). What explains the slowdown you're experiencing is that the Java 8 build for that platform is not shipped with a JIT compiler, so everything runs in interpreter mode, which is a lot, lot slower. I think this was solved upstream in Java 11 and up, so you could run plain OpenJDK from Java 11 onward to get the JIT. But this is the exact problem I originally came to you to solve: switching your Docker images from OpenJDK to AdoptOpenJDK. All my Z friends and IBM coworkers, when they compile your Docker repositories, spin up Jenkins and it works, but it's terribly sluggish and slow. And it's not Jenkins; Jenkins works very well. It's the JDK underneath that's not right: it doesn't have the JIT. So that switch would really help solve some of the issues you ran into. Well, just to be clear, I didn't see visibly different performance between AdoptOpenJDK, Oracle, and the OpenJDK bundled with Ubuntu. I haven't done precise benchmarks, but my perception was that none of them is dramatically faster on s390x, whereas OpenJ9 is dramatically faster. Now, I got some test failures that hint OpenJ9 may have things needing more investigation as well; it's definitely not perfect. Yeah, I know people ran into issues in the past with the bundled JDKs that come with Canonical or even RHEL and the like.
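As a rough illustration of the interpreter-versus-JIT point above, here is a hedged Python sketch that classifies `java -version` banner strings. The substrings it matches (for example "Zero VM" or "interpreted mode" for builds running without a JIT) are illustrative assumptions, not an exhaustive or authoritative detection scheme, and the sample banners in the usage are made up.

```python
def jvm_flavor(version_banner):
    """Roughly classify a `java -version` banner string.

    The matched substrings are illustrative, not exhaustive: real banners
    vary by vendor, version, and platform.
    """
    b = version_banner.lower()
    if "openj9" in b:
        return "openj9"            # J9 VM with its own JIT
    if "zero vm" in b or "interpreted mode" in b:
        return "interpreter-only"  # no JIT: expect the slowdown described
    if "server vm" in b:
        return "hotspot"
    return "unknown"
```

A banner mentioning "Zero VM" or "interpreted mode" would be the red flag for the sluggish compilation Mark observed.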
On the Docker official repo, someone raised this issue, and it's why we don't see Alpine or other base distributions in the AdoptOpenJDK official image right now, and also, I think, why the OpenJDK official image doesn't support s390x or the other architectures anymore: originally they were pulling Canonical's, basically Ubuntu's, built binaries, which the official image team didn't feel comfortable with, because they don't know 100% what Canonical changed, and they don't want to propagate that out to all their users. Right. Yeah. Anyway, that's the problem you ran into; I'm glad you got to see the difference between the two first-hand. I'll be on IRC to help you with any other issues and get you up and running. Great. Okay, I think it would be nice to write a small blog post about this, because it's an interesting scenario for other people as well. Yeah, that'd be really cool to see. So the next topic is: Mark, apparently you want to talk about the ci.jenkins.io upgrade. Yeah. I just need to schedule some of your time, Olivier, and I suspect tomorrow morning my time is the best hope. Are you available tomorrow morning? I'd like you to tutor me through how we do an upgrade of ci.jenkins.io from one LTS to the next. Yes, I'll do that. There are some runbooks regarding this; I'll have to find them. But we can definitely do that tomorrow morning your time, or the day after. Perfect, that's all I needed: your agreement that you're willing to tutor me, and I'll send the invite. I'm pretty sure that Danielle Baker wrote something about that, and it's somewhere in the organization; I'll have to find the link. That's okay, you don't need to do that; I can do that chasing happily, it's a good chance for me to explore.
If I fail in my exploration, you can show me tomorrow morning. Okay. Usually, when we decide to upgrade ci.jenkins.io, it's either because there is a new LTS available or for security reasons. Regarding plugins: if you want to upgrade a plugin, just sync with Danielle, or, if you know that we need ci.jenkins.io for a specific reason, because we want to test something, it's usually best to ask before you do it. But most of the time, if you know you have enough time to fix issues, just upgrade the system. The jenkins.io project runs more on a best-effort basis, so if you have the time to fix something, just fix it. That's usually how I work: if I can plan something, I plan it; otherwise I work on it when I have the time. Yeah, I've been quite comfortable posting on IRC that I'm about to upgrade plugins, waiting 15 or 20 minutes to see if there are any objections, and then just going ahead. Yeah, that's the best way: if nobody complains, just upgrade the system. It's way better to keep everything up to date than to be afraid of breaking things. Next topic: the Packer images. Tim, who is not here today, worked on Packer images for ci.jenkins.io. Right now we are just generating an Ubuntu image. The good thing is that it speeds up the process a lot, because once the virtual machine is started it's already ready to work. We still have to fine-tune the process, though: it seems that each time we change the image, we have to update the ci.jenkins.io configuration, which is not very convenient. I enabled those images last Friday or thereabouts, which is why we had issues over the weekend with ci.jenkins.io. Another change I made regarding ci.jenkins.io was to update the credentials. In the past we had one user with Owner access to the Azure account.
Now I changed that to Contributor access, so we can manage resources but we cannot access the resources themselves. The next time I work on permissions I will reduce them further, so that the credentials configured on ci.jenkins.io only have access to specific things instead of the whole account. But yeah, I did some work on permission management on the Azure account. The other thing I started working on is groups; maybe I can show you. Let me see if I can share my screen. While you're doing that, Olivier, I was more concerned about making sure I understood what happened and what caused it. It sounds like you made a switch to Packer. The switch had positive results, in that things start faster now, and cleaner, because more initialization is done in the virtual machine image, and that's great. There was an error in the Docker Compose installation, a mix-up between docker.com packages and others. Do we have evidence that the issue is resolved now, or are we still in the process of switching to the new images? So, the change I made regarding the Packer images: the way ci.jenkins.io was configured in the past, each time we needed a new virtual machine, we used a default Ubuntu image and ran a script, and that script installed Docker and everything else we needed, Java and so on. Now that script is executed when building the Packer image, so when a machine starts, instead of using the default Ubuntu image, we use the Packer image generated on ci.jenkins.io, and everything is already up and running. The Docker Compose issue may be related to Packer, because we are maybe not using the same base image, but it's not related to the work I did last week.
If I can share... I'm almost there; my computer is really slow. Only one person can share at a time, so I can't share. Oh, sorry. Can you see my screen? Yes. So basically, I created two different groups so far. One has Packer permissions: every person in that group can read the resource group that contains the Packer images. And I created a second group for Kubernetes: a person in that group can access the resource group that contains the Kubernetes cluster. This is something I started working on last week; I could not automate it yet, so it's really for specific use cases. The idea is that, if we need to, we can invite more people into the account, and they will only have access to specific resources. That's something I would like to see more in action in the coming weeks, but I still need more testing to be sure it's actually useful. Any questions regarding this? Nope, that seems reasonable. I had one quick question, though. You mentioned that the initialization script has been moved into the actual VM image. There's still stuff in the initialization script set up in ci.jenkins.io for the agents; does that need to be removed? Normally that should be removed; it was just a workaround. If you look at the script used by Packer, it installs a bunch of things and then removes the jenkins user. At the first iteration, we were not able to use those Packer images because we didn't have the right permissions, so the script I configured last week was just there to ensure the jenkins user exists with the right permissions. Normally it should be fixed now. But because the Packer images are generated from ci.jenkins.io, if ci.jenkins.io cannot provision machines, then we cannot generate new Packer images. You see the problem.
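The group-to-resource-group model just described can be sketched as a small lookup. The group and resource-group names here are hypothetical placeholders (the real Azure group names are not given in the meeting); the point is only that membership in a group grants access to exactly one resource group.

```python
# Hypothetical names mirroring the two groups described: one granting read
# access to the Packer images resource group, one to the Kubernetes cluster.
GROUP_ACCESS = {
    "packer-readers": {"packer-images"},
    "kubernetes-users": {"kubernetes-cluster"},
}

def can_read(member_groups, resource_group):
    """True if any of the member's groups grants access to the resource group."""
    return any(resource_group in GROUP_ACCESS.get(g, set())
               for g in member_groups)
```

So someone invited into only the Packer group could inspect the images but would see nothing of the Kubernetes cluster, which is the scoping the speaker is after.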
Gotcha, so that's why. Yeah, but normally we should be able to remove it; that was fixed last Friday. Okay, cool. Yep, let's continue. As I mentioned regarding the Packer images, it would be nice to have a Windows image as well. The main reason is that when a Windows machine starts, it takes quite a lot of time. So if you have some time to look at it, that would be nice; if you look at the Ubuntu example, you should just be able to insert the script that you are using on ci.jenkins.io. Yeah, that's what I started to do. I should have a pull request later today, hopefully, with the same setup as the current Windows 2019 agents. Okay, awesome. So, next topic: status report from Olivier. Oh, no, not that one, the next one. Okay, I am a bit worried about that next one. I got an email today, and I assume you saw the same thing: effective tomorrow, the SSL certificate on accounts.jenkins.io is being revoked. I can check; the good thing is that if the certificate is not working, it will be regenerated automatically, so I'm not too concerned. I have to check whether it's the right certificate. Basically, we are using a tool called cert-manager to generate the certificates: for each of the websites running on Kubernetes, if we have a valid certificate, it does nothing; otherwise, it generates a new certificate. Okay, right. And I've just confirmed you don't need to check anything more: the certificate issued on accounts.jenkins.io is the "Kubernetes Ingress Controller Fake Certificate", expiring in 2021, so we are fine and can ignore that. Thank you. Yeah, I just have to check whether there are any upgrades we need to do on cert-manager; cert-manager is the tool that requests the certificates.
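The renewal behaviour described ("if the certificate is valid, do nothing; otherwise request a new one") can be sketched as a simple decision function. This is an illustrative approximation, not cert-manager's actual logic: the thirty-day grace period and the issuer-string check are assumptions, though the fake-certificate name is the one mentioned in the meeting (it is the placeholder the ingress controller serves when no real certificate exists).

```python
from datetime import datetime, timedelta

def needs_renewal(not_after, issuer, now, grace_days=30):
    """Decide whether a certificate should be (re)issued.

    Assumed policy: a placeholder ingress certificate always triggers
    issuance; otherwise renew when expiry is within the grace period.
    """
    if issuer == "Kubernetes Ingress Controller Fake Certificate":
        return True  # placeholder cert: a real one must be requested
    return not_after - now < timedelta(days=grace_days)
```

Under this sketch, a revoked or missing certificate falls back to the placeholder and is immediately re-requested, which matches the speaker's "it will be regenerated automatically."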
That's the only thing I have to verify. Where is that... okay, confirmed it has not been issued. Checking on cert-manager; I know the cert comes from cert-manager on Kubernetes, not certbot. Okay. For me, "status report from Olivier and others": those are just notes, in fact. Yeah, everything else in the meeting notes are just notes I took while we were going. So I think we covered all the topics for today. Unless someone wants to bring something new to the discussion, we can finish there. I'm still back on the Packer topic. INFRA-2495, I think, is not quite resolved yet. Is the hope that we'll resolve it today with the change to use the newer image? Yes, that's correct. Okay. And this is not one that's critically dependent on Olivier; Alex, you know how to do this? Very good. Okay, great. Basically, if you have admin access on ci.jenkins.io (I don't know if you do), you look at the build for the Packer images repository, and there is a timestamp in there, and then you need to go into the Azure VM Agents section. I can show you, it will be easier. In the Azure portal, you go to the resource group: just search for "packer", and there is a resource group called "packer images" with multiple images inside. Each image has a timestamp, and you just need to use the latest one; in this case, this one seems to be the latest. It has an ID, so you take the resource ID, then go to ci.jenkins.io (I won't log in now), go to the cloud configuration, and update the ID there. The next time a virtual machine is provisioned, it will use the new Packer image.
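The "use the image with the latest timestamp" step can be sketched like this. The image-naming pattern is a hypothetical assumption, since the real Azure resource IDs are not shown in the transcript; the only idea carried over from the meeting is that each image name ends in a sortable timestamp.

```python
import re

def latest_image(image_ids):
    """Pick the image whose name ends in the newest timestamp.

    Assumes names like 'packer-ubuntu-20210214090000' (hypothetical);
    fixed-width timestamps sort correctly as strings.
    """
    def ts(image_id):
        m = re.search(r"(\d{8,14})$", image_id)
        return m.group(1) if m else ""
    return max(image_ids, key=ts)
```

The selected ID is what would then be pasted into the Azure VM Agents cloud configuration on ci.jenkins.io.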
Something that may also happen is that one virtual machine is totally broken, in which case you can force-delete it; that's something I did last weekend. You go to Virtual Machines, and you have to use the right resource group: either you look in the ci.jenkins.io configuration for it, or, as in my case, you know it and go there directly. This is the agents resource group used for the ci.jenkins.io virtual machines, and here you have the list of machines; as you can see in the status column, we are creating four right now. If you have changed the Packer image and you really want the agents to use it, something I do is look at ci.jenkins.io, and if I see nodes that are not being used, I delete those nodes from here, on the Azure side. That way I'm sure that the next time a job needs a virtual machine, it will provision a new one. That's what I did over the weekend: first, update ci.jenkins.io with the right resource ID, then make sure no old machines are still available, deleting them if necessary, and ci.jenkins.io will create new machines. Does that answer your question? It does, thank you. Awesome. I'll stop sharing. I have to run, so I think we can continue the discussion on IRC. Thanks for your time, and see you on IRC. Thanks. Thanks.
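The cleanup step just described (delete idle agents built from an old image so that fresh provisions pick up the new one) can be sketched as a small selection helper. The node tuple shape here is an assumption for illustration, not the Azure or Jenkins API.

```python
def nodes_to_delete(nodes, current_image):
    """From (name, image, busy) tuples, pick idle nodes built from an old image.

    Busy nodes are left alone; idle nodes already on the current image
    don't need recycling.
    """
    return [name for name, image, busy in nodes
            if not busy and image != current_image]
```

Deleting only the idle, outdated nodes matches the procedure in the transcript: running jobs finish undisturbed, and the next provisioning request forces a machine from the new Packer image.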