All right, welcome to the Jenkins Essentials open planning meeting. I think today's going to be fairly quick. Now I'm hearing some echo on the website. Did your Bluetooth headphones stop? No. For whatever reason, now it's my headset that got disconnected. OK, we could look for some technical person to run this. It's always one of us. I'm going to go ahead and mute you until you reconnect, so I at least don't hear an echo. You'll need to unmute yourself when your headset's reconnected. We won't have Mandy today; I believe she's on vacation. And I don't know if we'll see Oleg or Jesse, so it's probably just you and I. See, that's still echoey. There we go. Oh, it should not be. Anyway, let's just make do with that.

So I was looking at the board just a few minutes ago, and I'm going to start from the right to describe the current status. The SSE Gateway one is basically kind of blocked on the Blue Ocean side, because it's something we adapted for the new feature in Jenkins that lets you put logs elsewhere than in JENKINS_HOME/logs. But I think Vivek is a bit worried that it would break something somewhere. From what I understand of his last comment, I think he's kind of expecting us to bump it using our setup, or incrementals. Well, I don't want to spend too much time setting up incrementals for the SSE Gateway plugin because it doesn't seem a very high priority. So right now we have a skipped test in our code base specifically for that case, which is not great, but we have more pressing things on our plate. If we want to move this forward, we will probably have to find some way to test the SSE Gateway when it fails, or push that thing forward some other way. For the other things, if you come back to the board: about JENKINS-52584, the wrong availability zone in Essentials for AWS.
So that's good news and bad news. I think I fixed this yesterday, actually just before Carlos reported it. The very nice thing is that somebody other than me actually played with it and provisioned the AWS flavor somewhere. Carlos discovered that issue, which I ended up fixing some hours before he reported it, but he had provisioned it the day before. Unfortunately, I had screwed up the place where a field should be, and since it's a string and not a strongly typed thing, it was correctly set by the Configuration as Code plugin for the Jenkins configuration, but would only show up when you tried to explicitly provision a given new agent. I discovered that while testing again, I think Tuesday or Monday morning, and by the time I had fixed it and made sure things worked, Carlos was already testing. Unfortunate, but the rest worked fine. For instance, the F3 plugin that I tested on Carlos's instance was working fine, so that's great. Anyway, it's deemed to be already fixed.

The third one is probably the most interesting these days, and it's in review: the PR is filed and currently building, I think. That one is switching the code base from node-fetch to request-promise, and I'm using a thin wrapper on top of it called promise-request-retry, which basically integrates the retry NPM module, a kind of standard used a lot to retry things, with the request-promise module, also kind of a standard, to offer a higher-level, easier, nicer way to retry things. For now I've used three retries and left the default behavior of the retry module as is, which means it's going to use an exponential backoff strategy: retry first after one second, then two seconds, then four seconds. We'll see if that proves enough, or if we want to retry maybe a bit more so that it goes beyond 10 seconds or something. It could make sense.
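As a minimal sketch of the retry strategy being described (three retries with exponential backoff: 1s, 2s, 4s), here is a hypothetical generic wrapper, not the actual promise-request-retry internals; the helper names are made up for illustration:

```javascript
// Hypothetical illustration of the retry behavior described above:
// three retries with exponential backoff (1s, 2s, 4s). Not the actual
// promise-request-retry implementation.
function backoffDelays(retries, baseMs = 1000, factor = 2) {
  // Delay before each retry: baseMs, baseMs*2, baseMs*4, ...
  return Array.from({ length: retries }, (_, i) => baseMs * factor ** i);
}

async function withRetries(fn, retries = 3, baseMs = 1000) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn(); // e.g. a request-promise call
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      const delay = backoffDelays(retries, baseMs)[attempt];
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

With the defaults above, a request that keeps failing is attempted four times in total, spread over roughly seven seconds; bumping `retries` to four would add an eight-second wait and push the window past ten seconds.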
I don't know if you have any opinion about that as well. I do, but I'll turn it over first. Right. So the other thing in flight right now: I've just filed the PR, so basically it's still in progress because I'm waiting for CI completion, but it's also in review — CI testing for the AWS flavor. I thought I would do that later, but I ended up doing it earlier, because right now there's already a PR filed by someone other than, I would say, the core team of Essentials — filed by Jesse — and we didn't have really easy ways to test these changes locally or in CI. That's why I ended up doing it. And that's mostly it; I then need to get back to the two other ones in progress.

The SSH public key one has seen nothing new from my side. I created AMIs manually, as I said in the associated JIRAs, to go quicker for now instead of using — oh my God, what's that tool called that generates images, be it AMIs or VirtualBox? I know that tool from Azure. Packer. Packer is good. Right, Packer, that's what it is. Packer is not going to be helpful to you anyway. Right, because I thought it would be helpful, because I never used it. Right, good. I don't think your headset's going to work. Anyway, I'm going to mute you so I can actually finish your sentence. Is there not an AMI that includes Docker, already supported and published in the AWS Marketplace, that we could use? It's not impossible I missed it, but I searched for it a bit and didn't find anything obvious. Maybe there's something in the Marketplace, but I wasn't sure it was better to use something from there that's not really official — provided by someone, but not really more official than us — rather than something by AWS itself or by Docker, for instance, and I didn't find anything. So if we do find one, yes, it would be interesting, but anyway, I kind of suspect there will be Docker but not Java, which is... Why do you need Java? No, I can't hear you, your headset. There it is.
So why is Java needed? Why is Java needed? To connect an agent. A Jenkins agent. You hear me? Yes, but I don't understand. So, how it works under the hood: when the EC2 cloud plugin is provisioning an agent, it provisions a VM at the first stage, I would say; then it's going to connect to that agent, to that VM, using an SSH connection, upload the remoting agent, and then run it to connect in the standard way through the SSH pipe. So an EC2-provisioned VM that's going to act as an agent basically needs to have a JRE to be able to connect back to the Jenkins master. Does that clarify?

So the AMI that the Jenkins Essentials instance needs only requires Docker. An agent AMI can be something different, and that's what requires Java, correct? I'm not sure I grokked the first sentence. Can you please rephrase? So the instance that's running Jenkins Essentials... Yeah, the master, right. It's just going to be running a Docker container. That's all it needs. Full stop. Absolutely. However, the agent AMI that the EC2 plugin will provision, that requires Java. Yeah. If you untick "exclude closed" and grep for "AMI", you will find both PRs and both JIRAs I did for both cases, which are actually two different AMIs. And I think I screwed up both.

I would be very surprised — I know that on Azure, and that's where my experience is, Docker maintains a Docker-on-Ubuntu image in the marketplace, and that sort of equivalent would make sense for the master. I would be really surprised if there's not a Linux container — not a Linux container, excuse me, a Linux image — in the AWS Marketplace that just includes the latest OpenJDK or the latest Oracle JDK. I wouldn't be surprised if there isn't one, because Oracle is unlikely to provide something like this; even the JDK itself has been problematic for years. Well, right, we will see; OpenJDK might be different. Then there might not be, or there might — I will have another look — something containing both.
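The provisioning flow just described (the EC2 plugin SSHes into the fresh VM, uploads the remoting jar, then runs it over the SSH pipe) is roughly equivalent to the commands below. This is a hypothetical sketch to make the JRE requirement concrete; the user, paths, and key names are made up, and this is not the EC2 plugin's actual code:

```javascript
// Hypothetical sketch of what the EC2 plugin effectively does once the
// agent VM is up. Host, user, and key path are illustrative values.
function agentBootstrapCommands(host, keyPath) {
  return [
    // 1. Upload the remoting (agent) jar to the freshly provisioned VM.
    `scp -i ${keyPath} remoting.jar ec2-user@${host}:/tmp/remoting.jar`,
    // 2. Run it over the SSH pipe. This step is why the agent AMI needs
    //    a JRE, while the master AMI only needs Docker.
    `ssh -i ${keyPath} ec2-user@${host} java -jar /tmp/remoting.jar`,
  ];
}
```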
So yeah, your point is about the master. Yeah, for the master we could use an official AMI. I agree. I was indeed still thinking about the agent ones. But we don't need to do both — we don't need to use the same AMI for both. That's not what I'm doing right now; there are two AMIs. That's what I'm saying. Okay. There are two AMIs, and I screwed up the same thing in both.

Let's step back from this for a second. It is extremely important to me that we are not building an AMI that is getting baked into a CloudFormation template. We should be pulling something off of the marketplace, because maintaining that base AMI is not work that I want to add to the Jenkins project. Do you know what I mean? You've made an AMI, and at the next patch level for Ubuntu or Amazon Linux, that AMI is now vulnerable and out of patch. Right. But how would this work then? For agents, we will probably provide some way to update the EC2 configuration, and for the master, I guess it would be standard AWS updates. I mean, I see what you mean. Anyway, we don't want to be the ones responsible for publishing the updated AMI. Yeah. Right. Right, right. I will just reopen the two dedicated JIRAs — I created something like "create a dedicated AMI for the master" and "create a dedicated AMI for the agent" — so I'm going to reopen both and add some comments there to search more aggressively.

And I think that's mostly it. I was also about to revive some hack I did one weekend — time goes fast, it was two months ago, I think. I was talking about Squid — using Squid — because the good thing about the PR that is in flight right now is that it's using request-promise, and request-promise has a nice feature which node-fetch didn't: it should respect, out of the box, the standard environment variables, HTTP_PROXY and so on.
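As a rough illustration of that environment-variable behavior, here is a hypothetical helper mirroring the conventional HTTP_PROXY/HTTPS_PROXY/NO_PROXY lookup order, rather than request-promise's exact internals:

```javascript
// Hypothetical helper mirroring the conventional proxy-variable lookup
// that request/request-promise honors out of the box (unlike node-fetch).
function resolveProxy(targetUrl, env = process.env) {
  const { protocol, hostname } = new URL(targetUrl);
  const noProxy = (env.NO_PROXY || env.no_proxy || '')
    .split(',')
    .map((entry) => entry.trim())
    .filter(Boolean);
  if (noProxy.some((entry) => hostname.endsWith(entry))) {
    return null; // bypass the proxy for excluded hosts
  }
  return protocol === 'https:'
    ? env.HTTPS_PROXY || env.https_proxy || null
    : env.HTTP_PROXY || env.http_proxy || null;
}
```

With a local Squid listening on its default port, exporting `HTTP_PROXY=http://localhost:3128` would then be enough for plugin downloads to go through the local cache.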
And I already had some hacking available on my machine that would locally start a Squid proxy, so that we could hack and develop more quickly instead of waiting forever to download plugins and so on when we restart the whole thing. Right now I'm using a low-bandwidth connection and it's very painful. And it will be helpful for you on the bus. Yes, hacking on the bus would definitely be easier with the local proxy.

So the only other thing that I was curious about: you've got a couple things around the auto-configuration JEP stuff. I'm curious if you've had any time to start on that. So this is the third one you see in progress; the JEP is already filed, and I think you commented on that one about the deletion. Right now it's, well, not blocked per se. I just need to spend more time making sure I address every single comment — I actually think I did for some of those, but I didn't hear back. So I guess I just need to be slightly more pushy to actually get answers on the things that I deem addressed. Is that kind of clear? Yeah.

And then there's the top-left one. I'm not really sure what I'm going to write for that one, because for now it's getting more and more moot, I guess — JENKINS-51877. Yeah. For now it doesn't seem to make a lot of sense, given the experience we're gathering right now, to write some kind of generic JEP about how we are going to do that specialization. But at the same time, it might be something we'll be able to write about when Mandy has completed JENKINS-51766, the safe flavor, because indeed we are going to have some way to pass from the client to the server the fact that we are running a given flavor. So, the other question... No, I don't. There are some thoughts that I have that I need to think more about before I ask questions. The stuff that Mandy's been working on is going to be in flight until next week, I believe. I think she's out this whole week.
I started on the migrations container. The plan — and I don't remember if we reviewed this at all — is that for the deployment in the Kubernetes environment, we are going to have a separate container that just runs the Sequelize migrations and seed, and then we can run that. We did discuss that. Yeah. So I started putting that together; it's pretty simple. Great. And then, assuming that we can get trusted CI — which is the Jenkins environment that's building containers — to properly run the Evergreen tests, we'll be able to get those containers published pretty quickly.

If you're referring to the fix you did some two days ago or something, it seems like it's working, because... sorry, what? It did not fix the issue. I think what may have fixed the issue is that Olivier nuked all of the agents that were connected to that environment and allowed Jenkins to re-provision some. Because I see that the backend and the Evergreen image were provisioned many times yesterday already, so it seems to have been working at least a few times. Yeah, that's what Olivier did, not what I did, unfortunately. Because I thought you were referring to the issue from one week ago, when everything was stuck. That was... Yeah, that's not what I'm referring to. I'm saying that it was deploying until yesterday afternoon. All right. But what I'm saying is that the issue of all the agents being stuck in trusted CI — Olivier fixing that seems to have been what got the tests completing properly in our pipeline for trusted CI. Okay. I didn't change anything yesterday. But Olivier did. I think you merged something, didn't you? I reviewed a PR from you about rewriting or refactoring some tests. Was that yesterday? I don't think that was yesterday. I mean, maybe it was two days before, but then it would possibly make sense, because it was starting to deploy around two days ago, so roughly it could kind of have matched. Anyway. Yeah.
And I don't have access to trusted CI, so I'm really wildly guessing. Yeah. Unfortunately, getting that environment configured for safe access control is very, very difficult. Yeah. There are a lot of release keys in that environment. So once that migration container is running properly, we'll just be continuing on some of the Terraform that we need to provision the Azure PostgreSQL environment, and then it's a pretty straightforward Kubernetes configuration from there. Cool.

Do you plan to actually end up doing the squashing we talked about a few times? I've thought about it. Mm-hmm. I don't think I care enough to do it. It's going to be pretty time-consuming, possibly. It takes two seconds to run all those migrations; it doesn't really matter. Right. It's just that, well, it will get messy again in the future anyway, I suppose, but right now it's a bit messy to figure out the model of everything without actually dumping the database and having a look. I know Mandy had to do that some days ago, and yes, looking at those different files, you have to either mentally build that model from the migration files, or dump the DB and have a look at what's currently available. I added a nice convenience make target for that. Yep. Exactly. And I think there was even something somewhere to actually log into the thing. Yeah, that's in the README. Yeah, right. That's it, I remember. I used that one at first. I didn't go on.

There's a hack that I was doing on the weekends that I need to commit, to make it easier to use node and the node modules directly from your shell while actually going through the Docker container. I played around with that so you don't have to say ../tools/node do this thing. So I'm going to wrap that up and get that submitted today. And I also spent some time poking around with TypeScript over the weekend. It actually works fairly well with Feathers; I was kind of surprised.
Someone already did the work, about a year ago, to make Feathers and TypeScript work well together. I talked a bit with Mandy about it last week when we were just chatting one-on-one, and I don't know how useful it would be right now for where the project is. But I think for the models, and for some of the services side of the world, some type checking could be useful. It looks fairly straightforward to incorporate. Maybe some night when I'm awake with nothing to do, I'll sit around and incorporate that. Then you'll be able to have your nice type checking again, which I know is crazy. Yeah. For whatever reason, in my career I ended up doing mostly typed languages — even C; even C is more typed than JavaScript. Yeah. But having to run the thing — for instance, I'm running tests, and the code doesn't even compile, it's not even runnable code, and it will just run happily and say that a test failed. Well, yeah, that test failed because the code isn't even correct. No — not just the test that failed; everything is just broken. When I'm doing that in Java, I'm indeed used to everything breaking before it actually reaches the stage where tests are run. But anyway, I'm starting to feel more and more used to it, so it's nice. Anyway, I guess this is not the place to discuss my ability to move forward with Node.

And I actually filed my first open source PR to a Node library today. Wow. Congratulations. Yeah, I know, that's an achievement. You're officially a Node hacker now. Exactly. I already sent an email to my mom.

Getting back on topic: my meeting load this week actually looks a lot better than it did last week. Last week I was only able to get started on this ticket. Assuming the Azure provisioning of the PostgreSQL database with Terraform goes well — my fingers are crossed that everything just works the way the documentation says —
it should be straightforward to get this up and running. I might need some help from Olivier later in the week to make sure that the migrations container is running correctly, but it's right around the corner. Now, there are these two tickets that Mandy had around getting some database consistency. I had asked her to try to get that incorporated before we go to production. So even if I have evergreen.jenkins.io online by the end of this week, I think we'll still have to call that alpha, and be ready to blow away the database if we need to. Yeah, I guess it makes sense. Anyway, since I've been running it, I will shut down evergreen.jenkins.io.batman.net then, I guess.

By the way, JC, while you are here: I just saw that the PR I filed to test the AWS flavor just completed, and it's green. That's great. I think the basic check doesn't run for me. You mean locally, you already pulled in the PR? No, I mean the basic check — obviously this check doesn't run. Yeah, yeah, absolutely, that's what I commented on the PR, saying it was already failing before I filed this anyway, because I think you're missing some dependency somewhere, from the log. I mean, even after I fixed the plugin issues — if you look in the pull request — even after I fixed the plugin issues, the make target failure doesn't seem to have anything to do with it. I can have another look, a second look. Did you comment there? In the meantime we were in the meeting... okay, 12 minutes ago, indeed. I guess you'll just merge 146 and then we'll see if my stuff runs on CI, but I don't seem to be able to run it locally, and I'll probably try your PR locally to see if I understand something. Okay, gotta go. I think we're wrapped up here, so I'll see you all later. Take care. Yep, bye.