Welcome to the Jenkins Essentials Open Planning Meeting. I think Mandy may not be joining us today, because she's indisposed, so it might just be Baptiste tonight, and this will be pretty quick. So Baptiste, I was in here last week, so I saw some work that y'all had done, but could you just give me an overview of what you and Mandy were up to last week?

Sure. So basically last week, the main thing that was done was to finish and merge the AWS flavor. It's now available and working, using AWS CloudFormation to provision everything. I filed issues around that, and we've been having some discussions and work around Configuration as Code, the Artifact Manager S3 plugin, the EC2 plugin, and other things. For instance, if you look at the PR, you will see that to configure the Artifact Manager S3 plugin out of the box I had to mix Configuration as Code with some init.groovy.d scripts to wrap everything up, because it was not possible to configure everything out of the box. We can have a look at that right now. Yeah, that's the one, and there's a reference somewhere to an ongoing ticket. By the way, I filed a PR this morning, not to fix the issue, because I didn't have time and I'm not sure I would be the one to spend time on making the Artifact Manager S3 plugin and Configuration as Code work together, but at least the PR shows that they don't currently work together. And so this morning I also sent an email to the dev mailing list to gather feedback, and wrote up the JEP around AWS auto-configuration, so that's the JEP summarizing what I actually did in the prototyping PR. The next thing I'm going to jump on is the long-hoped-for one, also one of the things that has made our tests somewhat flaky, which is the retry-on-failure feature.
Hopefully it will make our runs, both locally and in CI, more stable, because sometimes a download just fails and doesn't retry, and you have to restart the whole CI build. And yeah, that's it for me. If you have any questions, I'm happy to get into something specific.

For this ticket in particular, the handling-failures-gracefully one, there are some Node modules that I was looking at a couple of weeks ago which might be interesting for you to look at, so I thought I'd put those in here. I think you put a comment already. Okay, good. Right now the client is using this module called node-fetch, whereas our acceptance tests on the services side are using modules called request and request-promise. I think it might make sense to look at changing the client over to use request-promise rather than node-fetch, because node-fetch has a very simple API; it implements an API supported by browsers just called fetch, which is, I don't want to say primitive, but not as advanced as it could be. It was just the easiest thing to pull in when I started on that code in the client. I imagine the code will look like some kind of lambda using request-promise, a high-level programming model where you can just say, okay, that promise should be retried five times before we abandon, or something like that. One of the things that I think would be worth experimenting with is that there are different types of errors that will go into the catch handler of a promise, and not all of them are download failures. For example, a 404 is not retryable, at least in my opinion, and if you had a syntax error in the function being executed by your promise, that would also not be a retryable error.
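A minimal sketch of what that retry logic could look like, independent of whether node-fetch or request-promise is ultimately chosen. The helper name, the `retryable` flag, and the URL are illustrative assumptions, not anything from the actual client code:

```javascript
// Sketch of the retry-on-failure behaviour discussed above. The `retryable`
// flag is a stand-in for however errors end up being classified; the real
// client could use node-fetch or request-promise underneath.
async function downloadWithRetry(fetchFn, url, attempts = 5) {
  let lastError;
  for (let i = 0; i < attempts; i += 1) {
    try {
      return await fetchFn(url);
    } catch (err) {
      if (!err.retryable) {
        throw err; // e.g. a 404 or a programming error: fail fast, don't retry
      }
      lastError = err; // e.g. a timeout or connection reset: try again
    }
  }
  throw lastError; // retry budget exhausted
}

// Tiny demo: a fake fetch that fails twice with a retryable error, then succeeds.
let calls = 0;
const flakyFetch = async () => {
  calls += 1;
  if (calls < 3) {
    const err = new Error('connection reset');
    err.retryable = true;
    throw err;
  }
  return 'payload';
};

downloadWithRetry(flakyFetch, 'https://example.invalid/jenkins.war')
  .then((body) => console.log(body, calls)); // prints "payload 3"
```

The important design point from the discussion is the classification step: the loop only retries errors explicitly marked as transient, so a 404 or a thrown syntax error surfaces immediately instead of burning five attempts.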
Yeah, for the 404 I tend to agree, but sometimes we know that, because of the mirror infrastructure, some mirrors can be out of date for whatever reason, so retrying a 404 could sometimes actually be legit. But yeah, I kind of agree with the general idea. The way that the mirror network operates, I doubt it's going to be updated within the retry window anyway; the mirror network is going to respond fairly consistently in the same manner to multiple requests. Yeah. Anyway, you're definitely right, and I think that can be a later optimization, like forcing some other mirror, if we can even do that, I'm not sure, but yeah. Okay. And I think that's it. I had something in mind, but I forgot; that's for later anyway.

Do you have a link to the Configuration as Code ticket that you mentioned? It's linked from some Jira ticket; that's another one you just stumble into. The remove-port-50000 one is interesting because it was actually triggered by feedback from someone in the community, Damir Krabov, yeah, on the JEP PR. That's one of the good things about submitting JEPs for the big reviews: they surface issues. For the Configuration as Code issue, the easiest thing is to go to the config-as-code repo and look at the pull requests, the latest ones, I guess; there are only seven open. So here you can see what the Configuration as Code looks like for S3. The part that doesn't currently work is at the end, because the rest is testing our infrastructure. So yes, the part that doesn't work is the commented-out one at the bottom, but the S3 blob store config part is what's already used in the other PR on the Evergreen side; it's actually used in production and works when you're using the CloudFormation template. All right, so I'm going to go ahead and paste this into the Gitter chat.
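For readers without the PR open, a Configuration as Code snippet for S3 artifact storage might look roughly like the following. The key names and values here are assumptions from memory, not copied from the PR being discussed, so treat this as a sketch of the shape rather than the actual configuration:

```yaml
# Sketch only: key names and values are assumptions, not taken from the PR.
aws:
  s3:
    # Credentials would come from the instance profile when the instance
    # is provisioned by the CloudFormation template.
    container: evergreen-artifacts   # assumed bucket name
    prefix: "jenkins/"
unclassified:
  artifactManager:
    # Store build artifacts in S3 instead of on the controller's disk.
    artifactManagerFactories:
      - s3: {}
```

The commented-out, not-yet-working part mentioned above would be the piece of this that Configuration as Code cannot yet apply, which is why the init.groovy.d workaround from earlier in the meeting is still needed.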
All right, I see that. Baptiste, is there anything else, or should we go on to Mandy, since she did join after all? No, that's fine on my side. So Mandy, I was just asking Baptiste to catch me up on what was going on last week and what y'all did in my absence. Yeah, so I have the prototype for the Sentry integration. I wasn't sure what we would consider done with that, whether we wanted anything actually merged back into the code base or whether we just wanted to leave it separate for now. I don't see why we can't merge it. Well, I would refactor it slightly before merging. Right now it's a little bit hacky, and I would want to make it more of a standalone library that we would use, as opposed to having it fully integrated in with the code the way it is right now. If we want to move in that direction, I can have that done probably today or tomorrow. What do you mean by standalone library? Move a lot of the calls out into a separate class so that we don't have the third-party dependencies littered throughout the code. Yeah, that seems reasonable, and I think it makes sense to go ahead and do that so we can merge it. Okay, so I'll move forward with that so I can close out this ticket. And then, after our conversation yesterday, I'm moving forward with the updates we need to have the flavor in, and that'll probably be done either today or tomorrow as well. Just to be sure, yeah, go ahead, Baptiste. Is there something you would like us to review? I'm not sure; I don't think so. I don't find anything for now. My goal is to have at least one pull request ready by the end of the day today, my day. Right, thanks.

One of the things that Mandy and I talked a bit about yesterday was the consistency of our database. Mandy, I don't think you filed the tickets for that yet, but would you mind just giving an overview of what we talked about and what you're going to go forward and do?
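As an illustration of the "standalone library" shape being discussed, a thin wrapper class might look something like this. The class name and the injected client are assumptions for illustration, not the actual Sentry prototype; in production the injected client would be the real Sentry SDK:

```javascript
// Sketch of the refactor discussed above: one small class owns the
// error-reporting dependency, so the rest of the code base never
// imports the third-party SDK directly.
class ErrorReporter {
  constructor(client) {
    this.client = client; // injected: the real Sentry SDK, or a fake in tests
  }

  report(err, context = {}) {
    this.client.captureException(err, { extra: context });
  }
}

// Demo with a fake client standing in for the Sentry SDK.
const captured = [];
const fakeSentry = {
  captureException: (err, opts) => captured.push({ err, opts }),
};
const reporter = new ErrorReporter(fakeSentry);

reporter.report(new Error('boom'), { flavor: 'aws' });
console.log(captured.length); // prints 1
```

Because the dependency is injected rather than imported everywhere, swapping or removing the error-reporting backend later touches one file instead of many, which is the point of the refactor Mandy describes.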
All right, so there were two things that we had talked about. One is that what is in our database does not match our model definitions in the code. The big one that I noticed when I was researching the instance flavor was that an instance belongs to an update, but the database does not have any of the foreign key constraints or the not-null constraints that would have been in place if we had allowed Sequelize to create the database. So one of the things I'm going to do is go through what we have defined in our models and make sure that our database matches the model definitions, including all the relationships that are in there. The other thing was, based on my previous experience with databases, especially with Postgres using sequences: I've had issues in the past with sequences, especially if you have to do database backups and restores, with the numbers getting out of sync. We've had a much better experience using a unique GUID as the primary key, as opposed to using a sequence value. So I was going to go ahead and create a ticket to look into converting our database over to using GUIDs instead of sequential IDs. If that's something that you would be able to tackle this week, that would be ideal. For me, I need to continue on this ticket around deploying the backend services to evergreen.jenkins.io, which means I would be provisioning the Postgres database on Azure to actually run the Evergreen services. I think I should be able to get to that fairly quickly. Okay. If you ping me with those tickets when you create them, I'll block the actual go-live to production on those being changed, so we start from a good data model. Got it.

For me, I was out last week, obviously, and this week isn't all that free and clear as far as my time is concerned, but these two tickets right up at the top, the Pusher service and then deploying evergreen.jenkins.io, those are the top two priorities for me from a Jenkins Essentials standpoint. They're both fairly straightforward.
Where I left off with the Pusher service: with Feathers, which is the framework that powers our back end, we have a fairly strong event system, so it's easy for Feathers to just emit events over a WebSocket to connected clients, like the Evergreen client, for example. I've done the initial research on that; I didn't make a comment in here, so that's my bad. It looks like it will work: whenever we have an update level created in the back-end services, Feathers is already emitting a created event, and we just need to expose that over a socket to the client and make the client actually do something meaningful with it. That's what I intend to continue working on this week and close out, and then I'll get the initial pull request up for some of the back-end services for the Jenkins infrastructure project.

Okay. So for next week, Amanda, or Mandy, sorry, what would be nice is to be prepared to demo some of the Sentry integration; I think that would be cool. And then Baptiste, I don't know how far along we will be, but being able to demo some of the launch on AWS, I think, would be really cool. Yes, there's already a video recording for that, and also, I forgot to say this earlier, but I've also filed a PR on the jenkins.io blog to talk about it. Is the video linked from the blog post? From the PR, yeah. But I mean, in the blog post, you have... It's linked from there. All right, video, sweet. Never mind then, you're ahead of me, Baptiste. Ahead of what? In what sense? You're thinking ahead of me, thinking of things, which is great.

I haven't heard from Raoul in a bit. I'm pretty sure he's working on other stuff at CloudBees, so I'm not sure how soon he's going to be joining us again. I don't have much expectation right now that we'll see Raoul in the next couple of weeks. And what else is there?
Oh, the incrementals publisher, which I know is very, very important to us. Yes. We had another case just a few minutes ago where I would have liked to have it available. It's definitely broken. There was something about Blue Ocean testing where a PR moving Blue Ocean to incrementals would help, but I can't file that. And I've been testing on EC2, but it was actually moot because, yeah, it was broken, so nothing got deployed. I'm not sure it's normal that the deployment is still green and everything looks like it's okay when it's... That is the current behavior. Right, because you don't want to break PRs just because incrementals don't work or something? Correct. So what's going on with that right now is I've gotten a duplicate Azure Function deployed, and I still have the open Azure support request for the current one that's being referenced by the pipeline global library that we use. It really looks like something behind the scenes has completely corrupted the application, so it's nothing that we did. I spent probably two hours on the phone with a support engineer from Microsoft yesterday, and I seem to have stumped her, which is not a great thing; I don't want to be coming up with problems that support engineers can't solve. We went over the same thing with Olivier last week about ci.jenkins.io as well, yeah. Yeah, we're very good at stress-testing Azure. So the plan that I have today, and I consider this to be a higher priority than the other tasks I referenced: we've got the parallel infrastructure, so I'm going to convert the pipeline library to use that parallel infrastructure and continue the support process with Azure. But we should be unblocked within the next hour or so, because I just need to switch some URLs after I verify that they are working correctly. Great.
Yeah, and I'm glad you filed the ticket for this, Baptiste; apparently it was broken for a couple of days before this. Yeah, I think I wrote down the date; I think the last time it worked was around the 26th of June or something. Correct, that is what we found out from Azure support yesterday as well. It's pretty easy to see, because if you look at the incrementals repository you can see that the last deployment was around that date. So we'll see what happens with them on this. I might look at moving this out of Azure Functions. Yeah, maybe. Somehow this is the only function that seems to be unreliable, and it's the most important one. Were you already using Azure Functions somewhere else? I thought so. Yeah, we use Azure Functions within the Jenkins project for some little bits of community automation; it's quite helpful for doing little webhook-based bits of logic. But in this case, the thing that's working against us is that we're using a newer Azure Functions runtime, because we needed Node 8 or later. The other functions run on Node 6, which is super old and doesn't have async/await or any of the good stuff. Battle-tested. Yes, it's like Java 5, it's battle-tested. Yeah, or older. Yeah. And this function also takes a little bit longer to execute; the usual runtime is around one minute, just because we have to download data and upload data and things like that. But if this sort of behavior happens again, I'll have to look into alternative deployment mechanisms for the incrementals publisher. I'll keep you all updated on that, because it is pretty critical to Jenkins Essentials. Somewhat. Yes, somewhat.

All right, I think that's it. Do you have any other topics we should cover real quick? I'm just having a quick look at the PRs we have open; maybe that will trigger some thoughts. No, that's okay. Mandy, anything for you? We talked about everything; nothing additional for me. Okay, cool.
Well then, thanks all for joining and I'll see you in Gitter. Bye-bye. See you next time.