All right, welcome to the Evergreen open planning meeting. This is the last one before DevOps World | Jenkins World 2018 in San Francisco, where Baptiste and I are going to be talking about and demoing a lot of Jenkins Evergreen. Unlike my usual approach, I'm not sharing Jira right off the bat, so I'll go ahead and start sharing my screen. And we have Baptiste, so welcome, Baptiste. Baptiste, has the snapshot work been merged so far?

So I split that work into many steps, and yes, part of it was merged this morning, I think. Right now, and I tested this in production on my public instance, on each update you will see that commits are created. If you go to evergreen/jenkins/home and type `git log`, there are as many commits as there were updates. The thing is, I don't use it for real things yet, but I liked the idea of merging this because it's fairly isolated, I will need it anyway to be able to revert in the future, and it made sense to merge it and see whether it causes any kind of issue in production.

Meanwhile I'm still working locally on health checking, so maybe I should file separate Jira issues to clarify what I'm working on right now. Basically, I kind of forgot about that, but JAPS 302, the snapshotting issue, also depends on the health checking one, which is the one I'm implementing right now. So I'm writing some kind of health checker, and to be honest, I'm slaloming through my beginner level in Node.js async, the thing they call callback hell, trying to understand what I want to do. But I hope to be done in that area later today or tomorrow morning, hopefully. And I already have some... oh, OK.
Sorry, my daughter is playing with my mouse; I was wondering why the screen was moving. Then I will need, and I will write some code for that, though I still need to think about how to test it, to wire the update process to its failure or success, so that I can trigger a restart or a revert, which, by the way, is not implemented right now. I would trigger a `git revert` using the snapshotter class, and so on and so forth. I think that summarizes the current state.

So what I'm hearing is that I should probably take over the other tasks assigned to you, if I can get to them. Does that make sense?

Well, that depends on whether you already have even more critical tasks on your plate.

My expectation right now is that a lot of the stuff assigned to me is a little smaller in scope than what you and Mandy have, so I can churn through those quickly between meetings and then maybe take the smaller tasks from the two of you if I get through some of mine.

Yeah. I'm not sure yet, but the item in the middle is definitely the one for right now, Mandy, though Mandy is probably busy with the one at the bottom, I think. Basically, what I saw in production this morning, with the backend we thought was updated and wouldn't have this kind of issue, seems to be exactly that issue. It's telling me that I have no Docker cloud available, and so on, probably because an upgrade installed something that broke the Docker plugin from the ground up. We really need to understand that one, because it's going to be breaking for everyone during Jenkins World. The workaround would be to revert the revert of the shim I had pushed some two or three weeks ago.
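The restart-or-revert wiring described above might look roughly like this. It is a sketch under stated assumptions, not the actual Evergreen client code: the shape of the check functions and the `snapshotter.revert()` call are assumptions, and `async`/`await` stands in for the callback nesting mentioned earlier.

```javascript
// Sketch: run a list of health checks after an update; restart on success,
// git-revert the snapshot on failure. All names here are illustrative.

async function isHealthy(checks) {
  for (const check of checks) {
    try {
      if (!(await check())) return false;
    } catch (err) {
      return false; // a check that throws counts as unhealthy
    }
  }
  return true;
}

async function afterUpdate(checks, snapshotter) {
  if (await isHealthy(checks)) {
    return 'restart';           // the update looks good, restart onto it
  }
  await snapshotter.revert();   // roll back to the previous snapshot commit
  return 'reverted';
}
```

Keeping the decision in one `async` function makes the success and failure paths explicit, which is the point of moving away from nested callbacks here.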
You may remember I had pushed a shim that would force the Docker plugin to the same version everywhere, for the core and for all flavors, so that this situation couldn't happen. Because what happens right now is that a plugin gets installed for the docker-cloud flavor, and the Docker plugin, or the Docker Commons plugin, I think, gets installed at 1.5 instead of the 1.9 minimum. Everything then falls apart because the Docker plugin simply can't start, and that's it, everything is broken. So that's probably one of the most critical things to understand and figure out.

Mandy, if I understand correctly, you're working on the client deletes right now, right?

Yeah, I'm focusing on the deletes, because I tried coming up with some tests to reproduce the version thing, but based on all my tracing of the code I haven't been able to reproduce it. It keeps giving back the correct version of the plugin.

OK, so here's what I just took onto my plate; let me bump that. There you go. At a bare minimum, even if we can't reproduce Baptiste's issue, now that we've got some public images I think we'll want an acceptance test in our test suite that pulls the latest image from Docker Hub, runs it, then runs whatever we build in our test environment, and makes sure the instance comes online properly after that. So I'll tackle that, assuming I get to it before Baptiste finishes his stuff.

Yeah, it's going to take time; I don't think I'll wrap everything up in the next 12 or even 24 hours anyway. But I'm very optimistic in general. And that's exactly the point: we should probably have a later stage in our PR builds that uses the public image. But then, how do we test that? Because this case really fails starting from something existing, right?
I mean, my instance was working fine this morning when I provisioned it from scratch, again and again, after we fixed the redirection issue and so on. Then during the day I merged another PR and everything was trashed. So something is wrong with the update, but we're not able to reproduce it locally, it seems.

Yeah, so we'll see whether implementing this test case exposes it and makes it solvable. By the way, a possibly better long-term fix might be to address this not in update.js but at the ingest.json level. Right now, when you generate an ingest.json, you could see, say, Docker Commons at version 1.5 in the core spec. You can see this even in essentials.yaml, before it's rendered into ingest.json; it's the same data. If you look for Docker Commons in the status, you'll see that Docker Commons is needed at 1.9 for the docker-cloud environment, but listed at 1.5 for core, the default set that never gets installed as-is, and overridden to 1.9 for the cloud flavor in production. I don't think that makes sense, because it means that in production we'd have different versions of the same plugin for, say, the AWS and Docker flavors, which is going to create a lot of unnecessary headaches. So possibly, and this would fix the problem even better, we should fix this at the ingest level, I mean at the essentials.yaml or parsing level, so that we never, ever end up in that situation. Basically, we should find the upper bound of each plugin version and force it in the status and the core.
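The "force the upper bound" idea can be sketched as a small resolution pass over the per-environment plugin lists. The data shapes below are illustrative, not the real essentials.yaml or ingest.json schema, and the naive dotted-number comparison stands in for whatever version ordering the project actually uses.

```javascript
// Sketch: when the same plugin appears in core and in one or more flavors
// at different versions, resolve every occurrence to the highest version so
// all flavors of a given update level ship identical plugin versions.

// naive dotted-numeric comparison, e.g. "1.10" > "1.9"
function compareVersions(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] || 0) - (pb[i] || 0);
    if (d !== 0) return d;
  }
  return 0;
}

function resolveUpperBounds(environments) {
  // environments: { core: { plugin: version }, 'docker-cloud': { ... }, ... }
  const upper = {};
  for (const plugins of Object.values(environments)) {
    for (const [name, version] of Object.entries(plugins)) {
      if (!upper[name] || compareVersions(version, upper[name]) > 0) {
        upper[name] = version;
      }
    }
  }
  // rewrite every environment to the shared upper bound
  const resolved = {};
  for (const [env, plugins] of Object.entries(environments)) {
    resolved[env] = Object.fromEntries(
      Object.keys(plugins).map((name) => [name, upper[name]])
    );
  }
  return resolved;
}
```

In the Docker Commons case discussed here, core at 1.5 and docker-cloud at 1.9 would both resolve to 1.9, which is the pinning behavior the shim enforced by hand.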
If we want to keep this meeting short, we can't really get into that topic; I think we can pin it for now. But there are significant problems with taking the upper bound for a dependency that's shared between environments when other things are not going to be shared between environments. So we can pin Docker Commons to 1.9 at the base level to address this issue, but taking the upper bound across every available flavor is, I think, problematic.

I think the contrary, for the single reason that otherwise, for a given update level, depending on the flavor, we might have different versions of the same plugin, which I think is going to create issues and unnecessary headaches again in our support channels, I would say. But I understand what you're saying: when we do the upper-bounding, we'll have to make sure we don't create another unstartable setup for another flavor. Anyway, I can force it. Let's just force it for now; I'm going to revert my revert. OK.

So Mandy, how are the deletes going, while we're talking about busted plugins?

I'm learning all about interacting with the file system in Node. That's fun. My last test never actually finished, so I'm trying to figure out what I did wrong.

It just keeps going and never, ever stops?

Yeah, it's trying to do a thing where it's supposed to fail because it doesn't find the file name, and it just never came back. So I'm not sure how to figure out exactly what I'm doing wrong there.

One behavior I've seen that might be helpful for you to look at: with Jest in particular, if there are unresolved promises that get exposed to the test infrastructure, it tries to wait around until those promises resolve, or at least that's what it looks like.
So there are two ways I think you could try to solve it: either figure out where you might have a promise that's not getting resolved, or try adding the `--forceExit` flag to Jest, which might help. We do that in the backend, for reasons: basically, the Sequelize client doesn't get cleaned up properly. That's the reason, and I don't know how to fix it yet. So that might at least get your test to stop and give you a failure.

I'll keep playing with it and see if I can figure out exactly where the promise is hanging, because it's not obvious.

OK. My schedule after some CloudBees meetings this morning really opens up, so if you want to pair on anything, just let me know.

If I'm still stuck, I'll let you know.

I just wanted to let you know I took a ticket back off your plate, since I figured you'd be busy fighting with deletes today.

I noticed. So, the client...

Yep. Of course, if you get to it before I do, please take it.

OK, I will.

And I would encourage you both to take the same approach: if you find yourself out of tasks, or need a context switch to a smaller task, feel free to take something that hasn't been started from me or from each other, just so we can get things done. I don't really care who does them, not at this point.

So, anything else, Baptiste, that you think we need to discuss before you get on an airplane? Is that a no? It's cutting out; I'm not sure whether you said something just now. Do you have anything else we need to discuss before you get on an airplane?

The airplane is at the end of the week, so maybe before then, but that should be OK. Anyway, I'll shout and cry if something is needed.

OK, that's good, and this sounds good. And Mandy, will you do the same shouting and crying if you need something?

I'll shout; I'll leave off the crying for now. I'll leave that up to Baptiste.

I appreciate that. Cry like a man. With that, I guess let's get back to work. Yeah, see you all later. Bye-bye.
Bye.