All right, welcome to the Jenkins Essentials Open Planning Meeting. Looks like we've got only a few of us here today, because Google Hangouts has been especially difficult lately, so I think we can make this quick. And since we just had a discussion with Carlos and Jesse around some of the AWS stuff, maybe you could give us an overview of how the AWS auto-configuration is going, from the start. Yes, so since last week. Will you pull your mic a little bit further away from your mouth? Maybe here. Is this better? Yeah, you can pull it down a little bit. But yeah, that's better. I guess that's better right now, OK. And I just need to try to stop shouting. So since last week, I did an experiment using ECS, and basically it seemed a bit too complex for the purpose, the purpose also being to have something working and to move forward on the AWS story. It needs a lot more setup on the AWS side for the ECS configuration. So I ended up switching back to configuring things with the more historical EC2 cloud plugin, which lets things move a bit more swiftly. Right now I have a lot of moving parts, which I'm starting to reconcile, because I've played on a separate master with the S3 artifact manager plugin. For those not really aware of what it is: historically, Jenkins has been storing artifacts, the pipeline status, everything, back on the master itself. What that plugin does is leverage JEP-202, I think, which rewires the internals of Jenkins to allow people to plug in a separate storage engine. And in that very case, there's already a plugin implemented that lets you store those blobs in S3. So that's why I played with it, and it works out of the box nicely, so that's very cool. Then I played with and made sure I was able to configure the EC2 part with the EC2 cloud plugin. So now I'm able to do that.
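For reference, wiring the S3 artifact manager up through Configuration as Code looks roughly like the following sketch. The exact keys vary between versions of the artifact-manager-s3 and AWS plugins, and the bucket name and prefix here are placeholders, so treat this as illustrative rather than the configuration Baptiste is writing:

```yaml
# Hypothetical JCasC fragment: store artifacts/stashes in S3 via JEP-202
unclassified:
  artifactManager:
    artifactManagerFactories:
      - jclouds:
          provider: "s3"
aws:
  s3:
    container: "example-jenkins-artifacts"   # placeholder bucket name
    prefix: "jenkins/"                       # placeholder key prefix
```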
I'm still getting the Configuration as Code part right, because unfortunately I wasn't lucky enough to find an existing example for that, so I have to write it myself; basically, writing the corresponding YAML to auto-configure that plugin. By the way, about that, I will progressively be more and more annoyed or blocked by the fact that we haven't yet implemented — I sent a comment, I think earlier today or Monday, on some ticket — the ticket about taking into account the flavor of a given instance so that we can serve different plugins depending on that. For now, we're already kind of hard-coding those things. So yes, that's the one. And so I've described kind of all the moving parts. Now the only thing, through the separate POCs I need to do before really finishing the CloudFormation template I've also started, is to remove the AWS secret access key and access key ID I've been using for my POC and switch to an IAM role, to avoid having to pass around secrets, which is going to be a much more secure solution out of the box. For those not very aware of that: basically, it's a way to express, on the AWS side, the set of permissions a given thing has. Like, OK, I have access to read from S3, I have the permission to spawn EC2 instances, and so on. And when starting up an EC2 instance, for example, I can say through the CLI or API, OK, that instance has that profile. That means the thing running on it will be allowed to do whatever the policy says. So that's what I'm going to switch to in the next hours. And then, to wrap it up, as soon as it's working I will start writing a JEP to explain the AWS setup, and then the upper layer — there is, by the way, a ticket for that — explaining how we are generically going to auto-configure things depending on the flavor. Yeah. I hope that was clear.
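As a rough illustration of the instance-profile approach described above, an IAM policy granting a Jenkins master read/write access to an artifacts bucket and the ability to spawn EC2 agents might look like this (the bucket name is a placeholder, and a real policy would scope the EC2 actions more tightly):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-jenkins-artifacts",
        "arn:aws:s3:::example-jenkins-artifacts/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:RunInstances", "ec2:TerminateInstances", "ec2:DescribeInstances"],
      "Resource": "*"
    }
  ]
}
```

Attached to an instance profile, this removes the need to pass an access key ID and secret around: the Jenkins process picks up temporary credentials from the instance metadata automatically.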
From the AWS standpoint, we'll be incorporating artifacts on S3 and then agents on EC2. Is there anything else on your radar for that right now? I'm not sure what you mean. I guess not. Any other integration with AWS services? No, not that I know of. I don't think I forgot anything obvious, but in the future, for the record, maybe — it's not existing yet — we'll probably have some refactoring as things come out for the logging part, but that's really very much work in progress for now, so not yet. Any other questions about work in progress, for you or somebody else? I think for the people working on the lower layers of the architecture — Carlos, Oleg, and Jesse — like sending things through Fluentd and so on, I guess. But it's more upstream or downstream, depending on how you look at it. Before I move to talking with Mandy, I had some questions about this task you showed me late last week, I think. My understanding is that because we are putting logs in a different place, the SSE gateway wasn't respecting that. Is that something that you just discovered in some of your manual testing, or did that come up in an automated fashion in any way? No, it came to be discovered when, for your PR, you added more plugins than we had, I think, so transitively or directly, I'm not sure, we added the SSE gateway plugin, which we didn't have before. And that plugin, for historical reasons, has always hard-coded the Jenkins root `/logs` directory as where the logs should be put. And as that's not true anymore, we have to adapt it, and we didn't. So, wrapping up what your question was, I think: we didn't discover that before because we never installed the SSE gateway plugin before. Is it something — I don't actually remember — were tests failing because of this? It was the test checking that we don't have any logs under JENKINS_HOME, because they should be under JENKINS_VAR. Okay.
Because we opinionatedly segregated data on the instance between `jenkins/var` and `jenkins/home`: the more variable things and the static things. Okay, that makes sense. I just couldn't remember clearly whether this was something we had automatically discovered, and in that case, whether we'll automatically discover a regression or another plugin that has a similar problem. It's kind of... Originally, it's kind of downstream of JEP-302, I think, about the snapshotting system, where to make that snapshotting system more straightforward and avoid having a bunch of things declared in the gitignore file, we ended up saying, okay, I'm going to separate out the things I don't ever want to store in that git workspace. And so, yeah, that's how we ended up changing the core so that the logs could be separated — and many other things, by the way, not only that. Cool, thanks, Baptiste. So Mandy, it sounds like Baptiste has shared some tickets with you. I think the error logging to Sentry was probably one of the big ones I was thinking about. Are there any things from last week that you want to talk about, or do you have questions about these two tasks for this week? Nothing in particular for this week. I'm going through pretty much all of our test cases, which is helping me learn the code, and standardizing a lot of them so that we don't accidentally pass when we should fail. And then I'm going to rework some of the crappy tests. And then I'm starting to learn about Sentry, and I'll be taking on those tickets very soon. Just from my historical standpoint, the reason that there's the assert and expect stuff, and then there are crappy tests as well: when we first started working on this, I was fairly fresh to Node, and I started using Jest based on a recommendation from a friend of mine who has a strong Node background.
And I didn't know, for probably the first two or three weeks, that Jest had a lot more useful features than I was aware of. So you could almost just go in reverse chronological order from when the tests were written, forward in time, to look at the bad tests. I wrote them all. I'm definitely seeing the progression in style and complexity, so that's all good. It's a learning experience. But one of the things that I had resisted for a while — and I'm definitely open to refactoring some of this — is the way that some of those acceptance tests started to play out: it started to feel like I needed almost a sort of dummy client for some of these APIs, to model some of the interactions. And that's where the helpers.js — let me just pull that up real quick — the helpers stuff in the acceptance directories started to come in. I think there's still worthwhile refactoring work we can do to make an actual pretend client that's going to make an authenticate call, a register call, post versions, and that sort of thing. I don't know if there are tools that go along nicely with Jest, or whether Jest does this nicely itself, around defining fixtures, but that's another area that I sort of mentally punted on. Right now there are a lot of acceptance tests in particular that are defining request bodies and expected responses that could easily just be defined in fixtures and reused in a lot of different tests. But I didn't bother looking into that when I first started writing them. That's actually one of the things I'm sort of looking at now while I'm going through all the tests, because a lot of our tests don't actually validate the response beyond a few bare-minimum things. And to have full testing, what we have right now doesn't cover that. Yep. That makes me think that... That's probably an area for improvement. Hold that mic further away from your mouth, dude.
That's great, because you're probably going to raise the coverage. By the way, you will likely want to bump the values in the package.json, I think, too, if you're able to raise that threshold. Yeah. Yeah, we've basically been — whenever I notice that we've raised the coverage to a certain amount, I set that as the new minimum bar. I want us to be getting better, not worse. So yeah, in the services package.json there's a threshold in there, and then in the distribution client package.json there are thresholds defined for Jest as well. If you're improving test coverage, please bump the minimum so we don't get lazy. Anything else, Mandy? Nothing else for me right now. Are there things that you're blocked on, or are you going to need some time from Baptiste? Not right now, no. I'll reach out if I feel like I'm running into blockers, when they happen. Cool. So for me, the big thing that happened last week — or I guess it's Tuesday, so yeah, it was last week — is that Baptiste helped me finally get that damn pull request 105 merged, which includes the update service properly. I'm actually really, really happy that Baptiste submitted — let me find the pull request, because I was really, really happy to see this pull request. So Baptiste just ran this thing — I think the make ingest update center target — and generated and pulled in some updates automatically for all of these plugins. And so for me — I know we're not using incrementals yet; we're using incrementals for Configuration as Code and the essentials plugin and a few other plugins — but just pulling in all of these from the update center automatically, and being able to do that on a daily or weekly cadence, to me, that is a big milestone. Just being able to pull in those updates. Yeah, that's kind of already hinting at the GitOps way we're trying to push forward. Yeah, definitely.
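For context, the threshold being discussed is Jest's `coverageThreshold`, which fails the test run when coverage drops below the configured minimums. It lives under the `jest` key in `package.json`; the numbers below are placeholders, not the project's actual values:

```json
{
  "jest": {
    "collectCoverage": true,
    "coverageThreshold": {
      "global": {
        "branches": 70,
        "functions": 80,
        "lines": 80,
        "statements": 80
      }
    }
  }
}
```

Ratcheting these values up whenever coverage improves, as described above, turns any later regression into a hard test failure.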
So I got that update thing merged, and as we discussed a little bit last week — I think in the pull request, I don't know if you had seen that, Mandy — the client is checking for updates like every five seconds, which is not correct. The reason I'd added that was because we don't have the command-and-control infrastructure in place right now, and to make the acceptance tests work, I wanted the client to check in very frequently so it would get updates. So I'd filed this ticket, the 512-72 — let me just open it real quick. I had removed this code in that pull request — we'd basically removed it from the client almost a month ago now — but this is the long-lived connection for sending commands to the client, to notify the client upon updates. With Feathers, I'm basically going to start re-implementing this now that we know a bit more. This, I anticipate, is going to be a lot simpler, because Feathers has events sort of built into the subsystem. So whenever a record is created or updated or deleted, or any of those verbs — any time something is verbed in Feathers, an event is emitted and can be received. So when we create new update records, or a new update level, we will be able to just automatically dispatch an event over a Socket.io or WebSocket channel to the client to check in. So this should be fairly straightforward. I'm only taking this one task on this week because I have a pretty heavy meeting load with some other CloudBees-related work this week, and I might be disappearing. So any of these other tickets, if anybody wants to take them from me, you're more than welcome to them, but this one I am anticipating getting done in the next couple of days.
By the way, I was thinking the deployment one is — I'm probably going to have to implement it in some hackish way, or maybe not so hackish, because of the work I'm doing right now on AWS. For now, I'm basically passing the right things to standard jenkins/jenkins images, but when I reach the point where I'm starting up a jenkins/evergreen instance, then I'm going to start having issues. So I will switch back to using something like evergreen.jenkins.io.batmat.net, which I've already started doing, but at some point it's likely I will need to redeploy the services regularly or something. Yeah, so I'm just going to make a note of this in case you get to it beforehand. This was my bad: I had done the sort of initial scoping work on this, but had it in my head and in my yellow notebook, and I didn't put it in the ticket, because I was anticipating getting to it sooner than I'm actually going to. So I talked with Olivier a bit about this, and the challenge with this service — it's a little bit different from some of the services we've implemented previously — is that we need to provision, through our Terraform infrastructure, the PostgreSQL environment. And then we need to define — I'm going to just make a bullet of this while I'm talking — the Kubernetes resources in the Jenkins infra Puppet repository, because that's how the Jenkins project deploys: we use Puppet to sort of manage the deployment of our Kubernetes resources. And then we need to define some way of implementing the database migrations.
I had talked with a friend of mine who does a lot of work on Kubernetes, and our options are pretty much: using an init container — which he suggested was not the best idea, because then if you need to run migrations, you have to basically restart your service, and you may not want to do that — or defining a separate migrations container that you deploy just for the migrations. So the latter pattern he described was that, basically from our repository, we would be creating effectively two service containers. We would have the backend services container that we are already creating, and then we'd have a custom container with an entry point that runs the Sequelize migrations. We would only deploy — we would sort of recreate — that container whenever we had a migration, and then we would either submit it as a Job or deploy it as a Deployment in the Kubernetes cluster, managed in the same way with Puppet, but using that as sort of the runner for migrations. It just stops when it's done, and so on. Yeah, yeah. And I'm forgetting what I wanted to say. Okay, and there were some tickets loosely, or maybe not that loosely, related to that, where you were saying that we need to squash the current different migrations that we have right now before we actually start. Yeah, with the migrations, just because we don't have a production database right now, I'd considered basically dumping the schema that we have today into a single migration, so that we would sort of seed and initialize the database with what we have today, as opposed to running through all of these migrations. That's kind of an optimization. I don't think we need to do it, but these migrations are kind of silly, because they accumulated as we evolved the database, and if you run a make clean locally, you basically start from scratch. Yeah, but we don't need to do that to go to production. But it's nice to have tested the migration path.
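The separate-migrations-container pattern described above might look roughly like this as a Kubernetes Job. The image name, command, and secret names are all hypothetical; the point is just that the Job runs the Sequelize migrations once against the database and stops, without touching the long-running backend Deployment:

```yaml
# Hypothetical migrations runner: recreated and re-submitted whenever
# a new migration lands, independently of the backend service itself.
apiVersion: batch/v1
kind: Job
metadata:
  name: evergreen-backend-migrations
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrations
          # Same codebase as the backend container, different entry point
          image: example/evergreen-backend-migrations:latest
          command: ["node_modules/.bin/sequelize", "db:migrate"]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: evergreen-db
                  key: url
```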
Yeah, we don't need to do that. It's just a pattern I had learned at some previous companies, where running Rails migrations would get slower and slower as time went on. So what we would do, sort of every quarter or every half year, is basically squash all of the migrations into one ALTER TABLE statement, and then we'd just run that on our deployments, because we were running migrations with every deployment. We don't need to optimize for that. Yeah. But I think with these details, if you or Mandy find yourselves bored and want to take that on in the next week or two, I think Olivier could definitely help point you in the right direction. I don't think it's going to be that challenging, but it's difficult to test, because the way we sort of test our Kubernetes resources is: we stand up our own Kubernetes cluster for testing, provision those resources against that, and then we check them into the Puppet repository. I don't know if Olivier has a more convoluted test setup than I do, but that's how I test the Kubernetes resources, because I don't have a Puppet master lying around that I can use to drive the actual Puppet resources. Okay. By the way, I'm just checking that right now — well, it's kind of a detail, but we actually don't push the jenkinsciinfra evergreen-backend image. I'm not sure we actually build it per se. We don't push it to hub.docker.com. Oh, really? Yeah, we seem to be only pushing the distribution part. Make publish — I think I had a look at that some time ago, and... I thought we had that. I thought so too, but I think not. When I'm looking at the services Makefile, I don't see any publish anywhere, so... I could have sworn I had set that up. Okay. Let me just file a ticket for me. I'll just create the repository so that we have a place that can be pushed to.
At least it will make my hacking easier, because I can then, on some given random EC2 instance, run the DB and that container, you know, just with docker-compose or something, and be done with it until this is more done. Okay. Yeah, I'll take care of that this weekend — or not this weekend, since it's Tuesday today. If you wait until this weekend, you might not be available anymore. Monday was a lot longer than I anticipated, I'll put it that way. Okay, so... I think that should cover everything for the week, and of course, please feel free to take tickets from me as needed — that especially applies to you, Mandy. And if there are any questions, I'm bouncing in and out of meetings all week, so email, or, I mean, pinging on Gitter will also — that's asynchronous enough for me — but if we need to do a Hangout, just schedule something with me. Right. Good luck finding time. No. Any other topics or blockers or things we needed to discuss? Baptiste, nothing from you? Mandy, anything from you? Nothing from me. Great, I can see the finish line for milestone one from here. So I'm really excited to get these things finished up. Yeah, we finished 80%, so only 80% is left. I think I already made that joke some weeks ago. Yeah, I'm sure you did. We can go back to the archives to check. Exactly. See how original the jest is. I will file a ticket for that. Anyway, I'll see you all on Gitter. Bye-bye.
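The docker-compose setup mentioned for hacking on a throwaway EC2 instance could be as simple as the following sketch. The image name, port, and credentials are placeholders (the backend image doesn't exist on the Docker Hub yet, per the discussion above):

```yaml
# Hypothetical docker-compose.yml: run the database and the backend
# side by side on a single instance, no Kubernetes needed.
version: "3"
services:
  db:
    image: postgres:9.6
    environment:
      POSTGRES_DB: evergreen
      POSTGRES_PASSWORD: changeme   # placeholder, not for real use
  backend:
    image: example/evergreen-backend:latest   # placeholder image name
    environment:
      DATABASE_URL: postgres://postgres:changeme@db:5432/evergreen
    ports:
      - "3030:3030"
    depends_on:
      - db
```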