Hello everyone, welcome to the Evergreen Open Planning Meeting, early February 2019. We wanted to cover a few things this week. There aren't a lot of directly linkable JIRAs, but there are at least two or three outstanding subjects, and the first one is probably the critical one. Tyler, have you had a look at the ongoing issue around the upload that's not working? Basically, since a bit less than a week ago, when we merge a PR on the core, it doesn't get uploaded to the evergreen.jenkins.io website.

Right, let me double-check on that. The Git plugin changes that had been released and then pulled out of the update center made that API change I mentioned in Gitter yesterday. I deployed a change yesterday, but I didn't have a chance to follow up on it for the Azure function, which handles grabbing the information to perform that upload to evergreen.jenkins.io. If the dashboard would load, that would be great.

I had a look at evergreen.jenkins.io and it's not updated yet. What I'm wondering, do you know off the top of your head, is whether your change needs a new merge or deploy to be exercised, or should it work right away?

It will need to be triggered, so what I can do right now, because I actually have this open from yesterday, is re-trigger that. But this is where I'm feeling not so great about these Azure functions: we don't really have an easy way to know whether something has failed. Long-term, I think this functionality might be better placed in a different web service, like an admin web service for Evergreen. But it should be working properly now, and I'll verify that while we're talking, which means we'll have some big updates coming.

Yeah, it's even more critical because, I didn't double-check, but I think the last time the core got upgraded it was also for security issues.
Unfortunately, it'll take a little while for this dashboard to refresh, but the changes that I put into the function yesterday are backwards compatible with the released version of the API data that the Git plugin provides. And assuming that the build details structure doesn't change much as far as the API goes in Git plugin 3.0, it'll be forward compatible as well: we check for either member in the JSON response.

Yeah, I had a quick look at the code around build details and build data.

So it'll take a little while for this dashboard to update, but I'll mention in the channel whether or not it was successful.

Just so I make sure I understand what you're saying: you're talking about the dashboard inside the Azure management UI or something?

Yeah.

Okay, and that thing is going to process some webhooks that would have been queuing up or something?

It'll show me the logs. The webhook result, I'm actually checking on evergreen.jenkins.io now.

Because in the worst case, or is this the only possibility, would you have to manually force-trigger a master build of Evergreen from inside Trusted or something?

There's the upload. I can manually trigger an upload from my local machine; I'd have to go remember how to do that. But I'm basically waiting for Azure Functions to show me the logs for the request that I just sent, because I'm not seeing the information that I would expect in our dashboard. So I'm just waiting on that. What we can do to manually trigger this is merge changes to the essentials.yml, but I don't think we need to do that right now.

That's a good point. Anyway, I need to bump it to 2.164 anyway, so we don't necessarily need some fake change, because I have an actual one to do. That's what is going to be required when we reach the point, sooner rather than later, where we would like to bump from JDK 8 to JDK 11 inside Evergreen, too.
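The "check for either member in the JSON response" approach mentioned above can be sketched in Go. This is a minimal illustration, not the actual Azure function code: the field names `buildDetails` and `buildData` are assumptions standing in for the old and new payload keys of the Git plugin API data.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildInfo mirrors a hypothetical subset of the git-plugin payload.
// Field names are illustrative assumptions, not the real schema.
type buildInfo struct {
	BuildDetails json.RawMessage `json:"buildDetails,omitempty"` // newer API shape
	BuildData    json.RawMessage `json:"buildData,omitempty"`    // older API shape
}

// pickBuildMember returns whichever member is present in the payload,
// preferring the newer one, so the consumer stays backwards and
// forwards compatible across plugin releases.
func pickBuildMember(payload []byte) (json.RawMessage, error) {
	var b buildInfo
	if err := json.Unmarshal(payload, &b); err != nil {
		return nil, err
	}
	if len(b.BuildDetails) > 0 {
		return b.BuildDetails, nil
	}
	if len(b.BuildData) > 0 {
		return b.BuildData, nil
	}
	return nil, fmt.Errorf("neither buildDetails nor buildData present")
}
```

Accepting either member means a consumer deployed before the plugin upgrade keeps working after it, which is the compatibility property described above.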
And the version of the core that was actually patched recently to allow that without specific switches is 2.164; 2.163 wasn't quite there yet. For quite some time now we've had a few checks in Jenkins that make it crash immediately at startup, saying: you're not using the right, supported version of the JDK, please use another one. That's because we had so many issues in the past with people basically using the JDK that came by default in their distributions, and they would screw up their install, so now it stops. What we did in the last week in the core of Jenkins is that now it will allow you to run on JDK 11. It's a kind of silent launch, so not everyone watching will be aware, but that's not a big deal, because this is the goal of a silent launch: we want some people to be aware so we get a kind of progressive ramp-up. So yeah, we need to at least do that on Evergreen too, use the right core version and be ready. Bottom line, I have something that's actually a good change to make, and merging it when it's ready would serve to trigger the upload.

Well, I've got a little bit of time right now, so I'm going to make sure that this hook works properly, and I've got to roll out some certificates. But after that, that's kind of the extent of the Evergreen work that I'll be able to do today. I can review changes as you've got them, but that's about all I've got.

Yeah, that's fine. If you can focus on the part that I cannot really fix myself, on the infrastructure side, and make the upload work again, that's probably the highest priority right now, because then we can really deliver on the core premise of Evergreen: that we can push upgrades.

Right. A few other subjects. I had two others in mind, but I'm not remembering. So there was one.
So yeah, the first one was the thing I merged last week, around an issue we actually had in the code that processes exceptions, Java exceptions, before sending them to Sentry, the tool we use for log aggregation. Basically, we would receive something like the error and its stack trace. And a month or so ago, I had at some point marked as ignored the expected exception we know about, the smoke test one, which is used to test the pipe from clients to server. That's when it started. But given the way we were handling exceptions, we weren't seeing any exceptions at all in the logs before, because those were all aggregated together and ignored. So, bottom line, now it's fixed, and we started seeing a lot more exceptions and a lot more error messages. That's good.

Sorry, what? I noticed that.

Yeah, right. This shows, by the way, that there are many, many cases where things are failing, and that's not one single issue, it's a general issue. For instance, on the Docker Cloud side, we see that the plugin failing to remove containers after builds have run happens quite often. At least we see that; we can't really know the ratio at which it happens, but it does happen. I didn't take the time to triage those because I wasn't really able to work on it; I only started a bit more this afternoon.

And that was the third thing I wanted to talk about quickly, the third subject. After a discussion we had in the chat some time ago, I started working on providing some kind of CLI for Evergreen to provision, and in the future, I was thinking, interact with Evergreen.
Because right now, it's acceptably complex to provision a Docker flavor instance, for instance, but it's really complicated to provision an AWS one. It's feasible, and we were talking about doing that through the UI at some point, through a button. But even once the thing is working, it's not really easy: if you want to retrieve the password, or to see the logs out of an Evergreen instance, you have to do something that doesn't really match, I feel, the simple experience we wanted to offer to our target users. So what I have in mind, what I started experimenting with, is written in Go, for the reason that...

You don't have to justify the use of Go for a CLI. It's 2019; I think people get it now.

Exactly. But there are two things. The integration with the Docker API locally is pretty sleek and kind of native; that was just very easy to do, and I'm almost done with that part, basically. And the other thing is that I've heard there are a few APIs, I'm not sure, because I thought it was more of a Python thing, for interacting with AWS. And back to what you were saying a few seconds ago, it's 2019, and I expect Go to provide a lot of very up-to-date APIs for interacting with all the major cloud providers, for instance. So I imagine providing something like what I already wrote: `evergreen provision -f docker`, and it would provision things, or `-f aws` in the future, for instance. And then `evergreen show-passwords` or something, the command we already put in the docs, but it would allow us to automatically switch between the two flavors, and possibly others in the future. And `evergreen logs`, like `docker logs`, but more agnostic.
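The command surface described above (`evergreen provision -f docker`, `evergreen show-passwords`, `evergreen logs`) could be sketched as a small dispatcher. The real prototype uses Cobra, as discussed below; this standard-library sketch only illustrates the intended shape, and the output strings are placeholders, not real behavior.

```go
package main

import (
	"flag"
	"fmt"
)

// dispatch models the CLI surface sketched in the meeting. The real
// implementation would hang actual provisioning logic off each command;
// here each branch just reports what it would do.
func dispatch(args []string) (string, error) {
	if len(args) == 0 {
		return "", fmt.Errorf("usage: evergreen <provision|show-passwords|logs>")
	}
	switch args[0] {
	case "provision":
		fs := flag.NewFlagSet("provision", flag.ContinueOnError)
		// -f selects the flavor: docker locally, aws in the future.
		flavor := fs.String("f", "docker", "flavor to provision (docker or aws)")
		if err := fs.Parse(args[1:]); err != nil {
			return "", err
		}
		return "provisioning flavor: " + *flavor, nil
	case "show-passwords":
		// Would retrieve the admin password regardless of flavor.
		return "showing admin password", nil
	case "logs":
		// Would stream instance logs, flavor-agnostically.
		return "streaming instance logs", nil
	default:
		return "", fmt.Errorf("unknown command %q", args[0])
	}
}
```

Keeping flavor selection behind a single flag is what lets the same `show-passwords` or `logs` command work against Docker or AWS later without the user caring which one is underneath.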
Because with AWS, you have to first jump in over SSH and everything. So yeah, I think it would be interesting to offer that, to ease the usage of Evergreen for users, basically.

I like the idea. Depending on where you are between just playing around or actually building something, would you be able to put the scaffolding of the app in place, like: here's the functionality that should go into the CLI?

Yeah, definitely.

What I'm actually most interested in is how that CLI is going to communicate with the Evergreen daemon that's running in the environment, or how it gets its data. How does that CLI know, like, how does it authenticate, or how does it know how to get the data that it needs from the Evergreen instance?

So for now, I didn't experiment with, for instance, connecting to AWS. But what I did, and what I have in mind, is that I basically used the scaffolding provided by a tool called Cobra, which is one of the most used CLI libraries plus scaffolding tools in Go. It also integrates pretty well with another library, because it's the same developer, spf13, Steve Francia, a guy working at Google, I think. There's an integration with another one called Viper, which reads a YAML config file. So what I imagine is that in the future, I would read that file for the cases where I need authentication. In the local case, I can probably do without. But for the cases where I need to go through, let's say, AWS or something, I'm not absolutely sure yet, obviously, but either there is a config file that would also be written by the tool itself, updated once you've provisioned something, or the other way around. But yeah, basically, something like `$HOME/.evergreen.yaml` would be where you store your data.
It seems to me that it's a pretty intuitive and common way to do things nowadays.

Yeah, that's why I'm leaning towards it right now.

Yeah, it sounds reasonable to me. I don't want to add any roadblocks, so just let me know what I can do to help, or bounce ideas off of me in the channel.

Yeah, I'm going to try to finalize something simple, even if it's really slow, because I'm really a newbie in Go. For instance, there are no tests right now; I'm not sure exactly how to do that, though I know it's possible. I did a workshop on it some time ago, but for now it's a bit beyond my reach. I definitely plan to push something in a PR at least somewhere, probably in a separate repo for now, but maybe merged at some point. I imagine that at some point we would merge it as a kind of third top-level subdirectory inside the Evergreen repo, called cli or something, where we would merge the original repo. So right now we already have something that's starting to work: provisioning the instance, but only locally using the Docker API.

This is super cool, what I'm watching right now. The deploy just worked. I'm refreshing evergreen.jenkins.io and watching the total number of instances at update level 176 increment up. When I first loaded the page, it was at one, and now it's at seven.

Let me share my screen. It's super cool. Let's do that. Yeah. So for people watching, or watching the recording afterwards, you can see there are seven instances, and now 12. That's so cool. And you can see here that this number is actually going down, because it's the one counting instances running on the previous update level. And if we refresh, we see, oh, two fewer here, two more here. That's absolutely great.

So that bug is fixed? Yeah, that's very fun. I had actually never done that.
I already knew it would work in theory, but it's indeed very fun to watch. Now, we definitely have to work on the other outstanding issue, which is basically why the heck we have a few instances stuck here.

Right. I thought that with the fix of the unique constraint that would get addressed, or is that not correct?

That's possible. I did a pretty long analysis two or three weeks ago; I spent almost a whole day playing with the Sentry query API, trying to build a better understanding. Using that, I was able to go quite quickly through hundreds of logs and understand which instance was sending what. And I ended up seeing that we basically have a lot of instances that are stuck. We have a table that records the versions an instance has gone through, since I implemented the rollback. And I saw there was basically an instance that kept trying to upgrade, crashing, and rolling back. But the thing is, rolling back could not be done before you removed that constraint, because rolling back would mean trying to insert into the DB the same versions as before, which the unique constraint would reject.

Right.

So I would hope, and that was kind of the design goal, that even if for whatever reason you fail to upgrade to the latest update level and you get rolled back to where you were, then next time you're going to be proposed the next update level again, and at some point it could just fix itself. But what I'm suspecting is that in some cases, either we screwed up, or the users changed things on purpose, some hackers kind of messed things up, so something is failing, or there's a conflict when it's trying to upgrade. And because of that, it fails and rolls back. This is my theory for why we have a few instances that just keep failing on each upgrade, basically.
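The rollback-versus-unique-constraint interaction described above can be modeled in a few lines. This is purely illustrative, a map standing in for the DB table of recorded versions; the type and field names are invented for the sketch, not the real schema.

```go
package main

import "fmt"

// versionHistory models the table of update levels an instance has gone
// through. The old unique constraint rejected re-inserting a version the
// instance had already recorded, which is exactly what a rollback needs
// to do, so crash-looping instances could never roll back cleanly.
type versionHistory struct {
	seen       map[string]bool
	allowDupes bool // true once the unique constraint is relaxed
}

// record attempts to insert a version into the history, mimicking the
// DB insert that a rollback performs.
func (h *versionHistory) record(version string) error {
	if h.seen[version] && !h.allowDupes {
		return fmt.Errorf("unique constraint violation: %s already recorded", version)
	}
	h.seen[version] = true
	return nil
}
```

With the constraint relaxed, the upgrade-crash-rollback loop at least leaves an accurate trail, which is what makes the "try the next update level again later" design goal workable.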
But well, yeah. As we were discussing last week, for anyone watching, we have a JIRA epic for that. But let's refresh. Woo-hoo! So yeah, we have a few instances whose logs we are probably just going to filter out of the Sentry UI, because we cannot really act on them. In that epic, I also discussed the fact that we probably need some way, for instance in the Evergreen Jenkins plugin, to display some kind of administrative monitor in the UI of Jenkins that says: please reach out to us, we detected that you're not able to upgrade anymore, please provide A, B, and C from your logs so that we can help you get back on track, and help us avoid putting you in such a roadblock next time.

Right.

And it's not very hard to do, in the end. When I had a look at it, though I didn't have time to dig into it, we would probably need, and if you can at some point think about this, I think for you it would be very quick because you were the one working most on the service side, a public endpoint, without authentication to make it easier, providing the latest update level, if you think that's acceptable with regard to security, which I think is the case. Because then, in the Evergreen Jenkins plugin, it would be pretty trivial to ping that URL, get the latest update level back, and compare it to your local version.

Yeah, I think that would be doable. If that's something you want to go forward and do, file a request for me to do the service side and I'd be happy to do that.

Yeah, I filed a JIRA under that bigger epic, indeed.

Great. Okay, so we've got this fixed, we've got some updates rolling through. I don't really have much else. The way my time works out, I probably have an hour or two on Mondays and Thursdays, when I'm bouncing along on the bus, to work on small things.
So things like the Evergreen upload function not working, that's an easy task for me to take on. If there are other tasks like that, that would be helpful.

Okay, I will do that. And I would say the upload one is even more special: this is one of the cases where I basically cannot do anything myself, because I would need to understand the infra better, and more importantly, I have no access. So yeah.

Okay, right. Time to stop the recording and say bye-bye. Have a nice day, Tyler, since your day is starting; it's kind of ending for me. 8am for you, 5pm for me. Oh yeah. Thank you and see you next time. Bye everyone.