Welcome to the Jenkins Evergreen, or Essentials, Open Planning Meeting. I figure this will be fairly quick because it looks like it'll just be Mandy and I today, since Batiste should be off having a nice relaxing vacation, as the French are wont to do in the summer. So I'm actually pretty excited this morning, Mandy, because we've got things so close. They're just so close to being done, at least as far as Milestone 1 goes. What I wanted to walk through really briefly is informational for you, and probably for Batiste, who will be watching this later, I presume. I merged the code related to the essentials.yaml processing. Disregard this; I've been working on addressing that. So let me open up the essentials.yaml that's in the tree right now. This essentials.yaml now conforms to, I think it was JEP 309, which describes the Bill of Materials for both what we're doing here and some of the work that Oleg and Carlos are also working on. In this Bill of Materials, we're basically just using this YAML file to describe the components that are going into the core environment, and then additional environments, which we refer to as flavors in the back end. And all of that gets "realized," quote unquote, into a fixed list, the actual Bill of Materials. So that's what's actually been implemented in our tree now. This code all lives under services/cli. What I decided to do is, I didn't want to maintain a big giant list of prospective dependencies. Up here in the spec, we have the versions of plugins we consider essential to the Evergreen distribution at a top level. So we're not listing everything here, but we're listing things at a top level, which includes the workflow aggregator just to pull in those plugins, Blue Ocean, the Essentials plugin. We have a couple incremental plugins here.
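The spec-plus-flavors shape described above might look roughly like this. This is an illustrative sketch only, not the actual file contents; the plugin names come from the discussion, but the versions, key names, and structure are assumptions:

```yaml
# Hypothetical sketch of essentials.yaml's spec section.
# Versions and exact key names are illustrative, not the real file.
spec:
  core:
    version: "2.132"
  plugins:
    # Top-level "essential" plugins; transitive dependencies are
    # resolved by the CLI, not listed here.
    - artifactId: workflow-aggregator
      version: "2.5"
    - artifactId: blueocean
      version: "1.7.0"
    - artifactId: essentials
      version: "0.4"
  environments:
    # "Flavors" layered on top of the core set.
    - name: docker-cloud
      plugins:
        - artifactId: docker-plugin
          version: "1.1.4"
```

The `status` section (the realized Bill of Materials with every transitive dependency pinned) is then generated from this by the tooling, rather than maintained by hand.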
And what the CLI will do is read this file and then generate this status, which is the realized version of the Bill of Materials and includes all of the dependencies. Based on Jesse's guidance last week, I changed this so that it doesn't rely on the Update Center, except that there are some cases where we need to take an artifact ID and discover the corresponding group ID, and that information is conveniently located in the Update Center. So we only use the Update Center for artifact-ID-to-group-ID lookups. Other than that, we're actually going to Artifactory. Take, for example, the artifact-manager-s3 plugin: we go to Artifactory and fetch the MANIFEST.MF file for version 1.1 of that plugin. That MANIFEST.MF has the plugin dependencies and the required versions listed, and we compute all of that. And then we take the latest version required. Let's say we have five plugins which depend on varying versions of the Ace Editor; we take the latest version, 1.0.1, that's required by, say, the artifact-manager-s3 plugin. There was a nuance that Daniel Beck had actually helped point out to me. Let's just use these two plugins as an example: AWS Global Configuration has an optional dependency on 1.0.1 of Ace Editor, and artifact-manager-s3 has a non-optional dependency on, say, 0.0.8 of Ace Editor. We need to take the latest version available, whether it's optional or not. And then, if everybody says that a plugin is optional, we filter it out. So that was one of the things that I had fixed last week. The reason that I merged this change is that this is actually fully realized: this is the full Bill of Materials for the current head of master for the Evergreen distribution.
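The merge rules just described (take the highest version any dependent requires, and drop a plugin only when every dependent marks it optional) can be sketched in a few functions. This is a simplified illustration, not the actual services/cli code; the `resolution:=optional` marker is the convention Jenkins plugin manifests use in their `Plugin-Dependencies` header:

```javascript
// Parse a MANIFEST.MF Plugin-Dependencies header, e.g.
//   "ace-editor:1.1;resolution:=optional,workflow-api:2.22"
function parsePluginDependencies(header) {
  return header.split(',').map((entry) => {
    const [spec, ...attrs] = entry.split(';');
    const [artifactId, version] = spec.split(':');
    return {
      artifactId,
      version,
      optional: attrs.includes('resolution:=optional'),
    };
  });
}

// Compare dotted version strings segment by segment, numerically.
function compareVersions(a, b) {
  const as = a.split('.').map(Number);
  const bs = b.split('.').map(Number);
  for (let i = 0; i < Math.max(as.length, bs.length); i++) {
    const diff = (as[i] || 0) - (bs[i] || 0);
    if (diff !== 0) return diff;
  }
  return 0;
}

// Merge the dependency lists of several plugins into one realized set.
function mergeDependencies(dependencyLists) {
  const merged = new Map();
  for (const deps of dependencyLists) {
    for (const dep of deps) {
      const seen = merged.get(dep.artifactId);
      if (!seen) {
        merged.set(dep.artifactId, { ...dep });
      } else {
        // Always keep the latest required version, optional or not.
        if (compareVersions(dep.version, seen.version) > 0) {
          seen.version = dep.version;
        }
        // A dependency stays optional only if *every* dependent says so.
        seen.optional = seen.optional && dep.optional;
      }
    }
  }
  // Filter out plugins that all dependents marked optional.
  return [...merged.values()].filter((dep) => !dep.optional);
}
```

With the Ace Editor example from above, one optional dependency on 1.0.1 plus one non-optional dependency on 0.0.8 yields a single non-optional entry pinned at 1.0.1.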
And there was a bug that I had actually discovered in our previous handling of the essentials.yaml, and I'll point that out as well. In the Docker Cloud environment we say we need the Docker plugin, for example, and so we have to resolve the dependencies that are in that environment. Let's go down. What I had discovered was that the Docker plugin pulls in a newer version of a dependency that's located in the base set of dependencies, and so we need to override that. So anything that's being provisioned with this Docker Cloud environment is actually going to pull in a later version of token-macro than what the base set is going to pull in. And of course, if we wanted to pin token-macro 2.3, we would just need to add it to the spec up at the top. To support this, there are two make targets which have been introduced. One is generate-essentials, which will take this spec and then generate the status section of the Bill of Materials. The other is generate-ingest, which, as you may have seen in my most recent pull request, makes sure that we always have the ingest file that can be uploaded into the Evergreen backend. The big change that happens with this pull request is that the ingest is really the collection of URLs that are going to be given to clients, and the Bill of Materials is something that our tooling can work with independently of that ingest.json, to run tests and do all sorts of other interesting things. But that's all done. It was very difficult to do, because building a dependency resolver typically requires a lot of recursion, and doing recursion when you've got asynchronous code and promises floating around in Node was a little challenging for me.
But if you look through that code, Mandy, you'll see a lot of Promise.all, forcefully resolving some promises before we bounce back up to the next layer of the call stack. That kind of sucks, but it's done now, and hopefully we don't have to touch it for a long time. Does the Bill of Materials make sense to you? Yeah. That's something I know Oleg was waiting for me to provide final feedback on. I think this JEP is now ready to be accepted. So we say this is the official Bill of Materials format, and the Custom WAR Packager, which Oleg has built for some of his testing work, the acceptance test harness, which also relies on this format, and Jenkins Evergreen will all have this same identical format, which will make things very useful for testing a distribution of Jenkins. Which is wunderbar. We haven't had that before, so that's really big. What that leaves for us is really preparing the AWS distribution and then closing out some of the other tasks. So I guess, I saw you've got code for this, the "update unit tests to have messages" one? I mean, that is purely test cleanup. It was something I ran into when I was troubleshooting the tests for the flavor feature, where some of the tests didn't have output when they failed. And when you had multiple assertions in the same test method, it was really hard to tell what was failing. Oh, for sure. If you recall, I had to have you and Batiste help me parse some test output last week. So now every single assert has an explicit log message, so that if we have failures, you can at least tell exactly which line failed. That's fantastic. That'll be very helpful.
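The recursive, Promise.all-heavy shape described above can be sketched like this. This is a minimal illustration of the pattern, not the real resolver; `fetchDependencies` is a hypothetical stand-in for the asynchronous Artifactory/MANIFEST.MF lookup:

```javascript
// Recursively resolve a plugin's transitive dependencies, where each
// dependency lookup is asynchronous (a Promise).
async function resolveTransitively(artifactId, fetchDependencies, resolved = new Map()) {
  // Mark this artifact as seen up front so dependency cycles terminate.
  if (resolved.has(artifactId)) return resolved;
  resolved.set(artifactId, true);

  const deps = await fetchDependencies(artifactId);

  // Force every child resolution to settle before returning to the
  // caller's frame -- this is the Promise.all pattern mentioned above.
  await Promise.all(
    deps.map((dep) => resolveTransitively(dep, fetchDependencies, resolved))
  );
  return resolved;
}
```

In the real code the map would carry versions and optional flags rather than a boolean, but the control flow (recurse, then Promise.all before bouncing back up the call stack) is the part that was awkward to get right.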
Part of why that'll be very helpful, and this is something I'm really excited about now that the dependency resolution work is done, is that if we change the essentials.yaml, for example, and include a plugin that's missing its dependencies or that fails on startup, that gets caught by those smoke tests, because we're only going to know it fails when Jenkins tries to start. So having those clear messages there will make it obvious, or more obvious, when Jenkins has failed to start because we screwed something up. Yep. Excellent. Yeah, and so there was one thing I did add in here. Yeah, I noticed that wasn't there earlier. These were, this one, "provide documentation for the Essentials client and server," and then "write the end-user docs for starting a Docker instance." Those are both things that were in the unassigned bucket in Milestone 1, which I think are important for Milestone 1, and I figure we could knock those off pretty quickly this week just to keep things moving forward. For you, unless there's something that you're really passionate about, it might be wrapping up Milestone 1 soon. Looks that way. I don't know if any of these tickets that are assigned to Batiste are something you can help out with. This one, the "retry client calls" ticket, I thought we did this already. Oh no, those are the API calls. So we retry on downloads, but we don't retry on API calls. Yeah, if you'll take a look at those tickets to see if there's something that we could close out for Batiste since he's on vacation. Actually, this one I think I can close out; this has been incorporated. I'll put in a segment too many. Yeah, this is looking really, really good. I think Batiste will have to write the documentation for how to use AWS when he gets back. I don't know, have you worked with that at all? Not for this product, no. I know Carlos had done some stuff for this.
I just don't know if Batiste has a CloudFormation template floating around that we can use for end users right now. I don't think I even have access to that right now, either. Access to which? Any kind of AWS account. So the Jenkins project doesn't have an AWS account; I think Batiste has been using his CloudBees work account for that testing. But I don't know if there's anything specific to what he's done that would require access to his AWS account, for example. As I saw, he merged, or he had proposed, the change to rely on the Marketplace AMIs recently, and that makes things a bit easier. Yeah, I saw that pull request. Have you worked with CloudFormation before? Not directly. I've mostly been dealing with just the EC2 aspect of it. Okay. Maybe we can rope Carlos or somebody else into helping us out; I'm not familiar with it either. But as far as I see it, the things that are standing between us and completely polishing this off for Milestone 1 and being able to close it out: some of these AWS tickets, where we need a CloudFormation template which we can link to from jenkins.io/download, for example; some documentation to make sure that it's clear how to use this thing; and this "automatically deploy ingest.json" ticket. I thought this morning about how I'm going to do that, and that'll just make sure that whenever we contribute changes to essentials.yaml, the actual evergreen.jenkins.io gets updated. And the only other thing that I was actually about to create a ticket for, well, there are two things, actually. One is just updating the index page for evergreen.jenkins.io to actually list the last five or ten update levels, and, hopefully, I'll have to look at what we have in terms of the data model, to list the clients that are connected, so that we can all just look at that and see what's going on. And then the second thing, and I'm curious if you have any thoughts on how we'll be able to do this.
We need some way to taint an update level, from a developer standpoint. So let's say you update a plugin, put that into the essentials.yaml, and that gets deployed out, and then you start to see a lot of stuff in Sentry. What would the ideal workflow be for you to go back and mark that update level as tainted? We're not tracking the plugins separately, we're tracking the entire update as a whole, right? Correct. So that update, as a whole, that update level, would be burned at that point. I mean, the simple way would be to have a flag to mark something as bad. But you would have to go through and mark every single update level that had that bad plugin in it. In the data model there is a tainted field on the update level. The expected flow that I imagine is, as a developer, you would submit a pull request to update, and, let me, while it's on the screen, I'll just merge this. Yeah, the build wasn't done when I looked at it, so I was waiting for that to finish. Yeah, sorry. I mean, what would you pull request against, then? I would create a... Wouldn't they only live in the database, or are they going to exist as a file every time a change happens? So this essentials.yaml is what will be driving the changes here. But that's driving the first set of updates. Like, after that, are we still maintaining that file, or is it going to be existing in the database? Yes, because what's in the database is different. What's in the database is basically the ingest.json, which is the list of all the URLs to actually distribute these things. So let's say I'm Batiste: I've made a change to the Essentials plugin and I've got a new version. I would create a pull request to the Evergreen repository to update this line. So, you know, 0.4, something, something, something. Our tests would run, someone would say yes, this looks good, and we merge that to master.
Merging that to master will result in a new update level being posted to evergreen.jenkins.io, and then clients would start downloading that. Assuming that Batiste has made a mistake in this scenario, Sentry would start to light up with errors, and Batiste or I or you would need some way to say: update level 17, which contains the commit that changed this, is now tainted; don't give that to anybody else. One option I've thought about is putting a developer dashboard on evergreen.jenkins.io where you would sign in with GitHub, and we would just use a GitHub team to manage that access, so you could just click a button that says this update level is tainted. Another option that comes to mind is a CLI. I'm not sure I like that. I'm personally a fan of the CLI, just because it gives you a lot more options for automation. I mean, there's an HTTP endpoint; I can write you a curl command right now that uses a pre-shared key to mark an update level as tainted. But I don't think that's the most usable across the board. Another option, actually, now that I think about it: if we have a commit that represents this change to essentials.yaml, and I commit a revert of that file change, I can actually set up automation, because we have a lot of GitHub webhooks, or webhook functions, running behind the scenes for the Jenkins project. I could set it up so that if you just did a revert, and in the commit message you said "tainted" or something like that, then when we receive that commit in the GitHub webhook, we could run that change automatically. That curl HTTP call. Yeah, it would have to be a combination of knowing that it's a revert plus a keyword, so that you wouldn't accidentally grab something extra. But I like that idea much better, because automating the steps means nobody accidentally forgets to do a step.
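The revert-plus-keyword check proposed above could look something like this in a webhook function. This is a hypothetical sketch, not anything that exists yet; the field names mirror a GitHub push-event commit payload, and the keyword is just the one suggested in the conversation:

```javascript
// Keyword the developer must include in the revert commit message.
const TAINT_KEYWORD = 'tainted';

// Decide whether a commit from a GitHub push webhook should trigger
// marking the corresponding update level as tainted.
function isTaintRequest(commit) {
  const message = commit.message || '';
  // `git revert` generates messages beginning with: Revert "..."
  const isRevert = message.startsWith('Revert "');
  // Only act when the revert actually touched the spec file.
  const touchesSpec = (commit.modified || []).includes('essentials.yaml');
  // Require the explicit keyword so an unrelated revert is never acted on.
  const hasKeyword = message.toLowerCase().includes(TAINT_KEYWORD);
  return isRevert && touchesSpec && hasKeyword;
}
```

The webhook would then make the same authenticated HTTP call to the backend's taint endpoint that the curl command would, so the manual and automated paths share one mechanism.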
Right. Right. Yeah, I like that. That also means that we don't have to share access with anybody, and I don't have to go build a GitHub application. Yeah. And that keeps things a bit more simple. Okay. So I'll file a ticket for that, and I think I'll be able to take a whack at it pretty quickly, because for this ticket I've got to add some new functions anyway. Let me just show you real quick: we actually have a repository of community functions in jenkins-infra, and all of these are Azure Functions that are auto-deployed whenever there's a merge to master here. You've seen the comment logger; you've no doubt been annoyed by the comment logger with broken tests. That's just a simple webhook function that grabs logs. Fairly straightforward. I won't take it too far. But if there are other automations that you think might be helpful for Evergreen, this is a really great place to put them. Okay. Yeah. Maybe finish that up, and then we'll document a little bit, and then we'll pop some champagne and say that Milestone 1 is done. Sounds good. And the other thing, I've got to file a ticket for this and send information out to the dev list: I'm going to change the name. I think we talked about this a little bit before. Just to make this absolutely clear, we'll call it Jenkins Evergreen, since we're definitely focusing a lot more on the automatically updated part of this right now, and the user behavior is not something we've spent a significant amount of time working on. Evergreen, I think, will just be much more clear for people. Essentials conveys an idea, at least to me, that doesn't represent the value of what we've built here, because I think what's been really valuable is the automatically updated distribution part. But I'll send that out to the mailing list today; I keep forgetting about it. Anything else from you? Any other questions?
The only thing I've thought of, because I keep dealing with the flavor aspect: do we want any of the JEPs to define what valid values are for that, so that there's no potential for confusion there? Or are we fine with the way it's defined? I think it's fine the way that it's defined. I think what might need to be updated is, I think, JEP 307. And will you please take a look after the call? JEP 307 or JEP 303? JEP 303. The status section actually isn't explicitly defined in any of the JEPs. Yeah, that makes sense to me. Yes, because I have looked into that and the possibility of updating it, but we don't actually have it called out as an explicit API endpoint; it's just referenced in passing in a paragraph. I think JEP 303 might need to be updated. But I don't think we need to define in any of the JEPs what flavors are valid, because that's going to change as time goes on, or I hope it changes as time goes on. But I think the relationship of the environments in the essentials.yaml to that flavor should be defined in 303. Okay. Yeah, I think that would be helpful to define. And this reminds me, I was going to write up an explanation of how dependency resolution works for the essentials.yaml. So close to a usable system. Definitely. Well, it sounds like we've got some more stuff to do, but I think it would be reasonable for me to have a blog post ready for jenkins.io by the end of the week, given where we are right now. Which is exciting. That sounds good. Guess we'll get back to work then. Yep. Alright, I'll talk to you later, Mandy. Thanks for joining us.