Welcome, both of you. I guess we might as well get going. A couple of things I wanted to accomplish today, just because I've been a bit out of the loop with some events yesterday. Who was that? I guess that was Baptiste's headset just making noise. So I wanted to go over quickly, really more for my benefit than y'all's since y'all were all around last week when I wasn't, just where some things are. I wanted to catch up on where the Bill of Materials discussion was, and then towards the end, Baptiste, I wanted to talk about that update levels pull request, number 70, if we can get some time. So maybe Raul, do you want to go ahead and start? How's the ATH and PCT pipeline library stuff coming along?

Yes, so the pipeline library is already able to run PCT and ATH without problems. Oleg has done an incredible job creating an essentials test step which basically runs everything. And now I am implementing that for the plugins and having some issues, because some implementations are not yet fully functional. For example, I need to update the Custom WAR Packager to be able to install plugins, which is needed for some of the plugins we want to run, like Git. And I have also found that the Docker PCT image is not honoring the PCT hooks, so Blue Ocean, for example, cannot be run using the Docker image. Those are the kinds of things we are going to find a lot of as we go along trying to finish the implementations. That is my current status.

Where does the Docker PCT image get published from? Is that something automatically being published right now?

Yes, we have automatic builds now. By the way, what is missing? Can you call out the work package? I missed it.

So I have created a ticket; maybe I am wrong, and if I am wrong we can close it, but—oh, sorry. What is the check here? I'm trying to put the Blue Ocean GitHub pipeline plugin into a custom WAR package, and it's unable to find the version because it's trying to get that from GitHub.
That is a multi-module project, so that repo doesn't exist. Yeah, you're probably doing something wrong, because Custom WAR Packager currently takes a version from artifactory if you configure it to take a version; and if you configure it to take a directory path or a GitHub branch, then the plugin will be built and a locally generated version will be used. It should work in both cases.

Okay, so—we can just take this offline after the meeting.

Yeah, that's fine. I will be very happy to close this. And if this is my fault, which is more than probable, then there are still some things we need to fix to finish the full flow—I mean the complete flow with custom WAR packages and everything—because there are some plugins, like GitHub, that don't pass the PCT at this moment. So we need to fix them.

Is that something you plan to fix, or are you going to ask the maintainers and hope they have the time to fix it?

Probably I'm going to do a first run without them, and after that I'm going to go on a plugin-by-plugin basis, because for some of them I already have a very clear idea of what the fix is. I will just work through them one by one, and I will create a follow-up task for every plugin that has to be fixed, to have proper tracking there. That is one thing.

The other thing I would like to discuss regards the ATH. I have created a ticket—thanks a lot for all the comments there. Where is the ticket? It's 1625. And that is regarding how we are going to update things in the flow—and when I say things, I mean the ATH version we are going to use and that kind of thing. I think I fully agree with your last comment, Jesse; I've just put something there. So take a look, comment there, and we can try to reach some consensus.

What was that ticket number again?

1625, that one.

Yeah, I've been active there. I think Jesse and I are never going to agree on some things, but that's normal. Okay.
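The two version-resolution modes described here can be sketched in a hypothetical Custom WAR Packager configuration. The key names below are assumptions made for illustration, not the tool's verified schema:

```yaml
# Hypothetical packager-config.yml sketch: two ways a plugin version
# can be resolved, as described above. Keys are assumed, not verified.
plugins:
  # Mode 1: take a released version from the artifact repository.
  - groupId: "org.jenkins-ci.plugins"
    artifactId: "git"
    source:
      version: "3.9.0"
  # Mode 2: point at a Git branch; the plugin is built locally and the
  # locally generated version is what ends up in the WAR.
  - groupId: "io.jenkins.blueocean"
    artifactId: "blueocean"
    source:
      git: "https://github.com/jenkinsci/blueocean-plugin.git"
      branch: "master"
```

The failure mode in the discussion would correspond to mode 1 being applied to a multi-module repository where no standalone repo exists for the artifact.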
Are there any things, Raul, from Baptiste and I that you need help with, or that we could work on with you?

Sorry, can you repeat that?

Are there any things from Baptiste and I that you need help with, or that we could unblock?

No, not at this moment.

Okay. Oleg, what's going on in your world?

Yeah, so one interesting update is essentials steps. We have a utility method in the pipeline library which allows running Custom WAR Packager, PCT and then the acceptance test harness in a single flow. And I have already created a bunch of pull requests for two different repositories, like Stapler and the Artifact Manager S3 plugin—not this one. The essentials test step is already in the library, so you can just open the Groovy file. What this thing does is run integration tests in a single flow: it runs PCT and the ATH created by Raul, and it consumes the essentials YAML as a single input, so you can define everything in a single file. Just a second, I will provide an example.

So one of our main targets, for example, was Stapler integration testing, and I have created a pull request for that using the essentials steps flow. I have pasted the links in the chat. If you open the file changes, it actually just takes the essentials YAML—yeah, there are some crufty Makefiles—but effectively it's just a single step in the Jenkinsfile, which takes the YAML configuration in a standard format and then just does all the integration tests in a pipeline.

Maybe I'm misunderstanding here, but I don't see this as an incremental test. This says essentials test.

Oh yeah, that's a typo, and that's why there is an INFRA ticket to get permissions, because currently it doesn't build my Jenkinsfile at all.

I see. Yeah, if you send that to me, I'll get that sorted.
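As a rough idea of the single-file input being described, this is a hypothetical shape for the essentials YAML. The key names are invented for illustration and are not taken from the actual repository:

```yaml
# Hypothetical essentials.yml sketch: one file driving the whole
# integration-test flow (names illustrative, not the real schema).
flow:
  buildWithCWP: true   # build the WAR with Custom WAR Packager
  runPCT: true         # plugin compatibility tests
  runATH: true         # acceptance test harness
ath:
  athRevision: "master"
  categories: ["smoke"]
pct:
  plugins: ["git", "blueocean"]
```

The appeal of the approach is that any repository, plugin, or core itself can opt into the same flow just by carrying a file like this.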
Okay, yeah, I've already created the ticket and I'll send it to you. So I've pasted another link; it's a demo of the Jenkins continuous integration flow. If you scroll to the right, in addition to what we had before, it also runs integration tests for a set of plugins. The set is TBD, but the thing is that you just have a YAML file, and then for any repository—whatever plugin or component, or core itself—you can just define this flow.

Cool.

So we will be working with Raul to productize this flow. There are still some glitches, for example if you want to build core, with class loading, but generally it works.

Can we have the ATH run in parallel with PCT here?

No—well, it's something we could do, but the problem is the visualization doesn't display it in a nice way. It would just be one more branch in the same parallel block. And it's not specific to PCT; we would delegate it to the runATH and runPCT steps.

So the question I would rather have answered is: if we think about the testing pyramid, is PCT more important than ATH, or the other way around? Which takes longer, and which is going to give us better results faster?

Currently the ATH is also parallelized, but only the spot-check tests run, which takes something like 20 minutes. If we were able to parallelize the spot checks, the flow would be much more effective. But yeah, currently we can switch PCT and ATH around; it's just a code change. Do you have some opinion on how to run that?

From the—you know, the faster, more readily applicable tests moving to the front of our pipelines, and then going up that testing pyramid to our smaller suite of acceptance tests that take a long time—those should be last, so that if there is going to be a failure, we don't spend so much time running a whole bunch of infrastructure just to run ATH tests when we could have found a failure earlier, for example.
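The "one more branch in the same parallel block" option is a small change in a scripted Jenkinsfile. This is a minimal sketch assuming `runPCT`/`runATH` pipeline-library steps callable with no arguments; the real step signatures may differ:

```groovy
// Minimal sketch of PCT and ATH as parallel, fail-fast branches.
// Step names come from the discussion; their arguments are assumed.
node {
    parallel(
        failFast: true,     // abort the sibling branch on first failure
        pct: { runPCT() },  // plugin compatibility tests
        ath: { runATH() }   // acceptance test harness (spot checks)
    )
}
```

Whether to run them in parallel at all is the cost question raised next: two branches finish sooner but burn agents concurrently, and without `failFast` an early PCT failure would not stop the ATH branch.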
Well, I should say, in the case of—yeah, in the case of core—I'm not sure what you're looking at here, I think this is core—one of the slowest things is actually the functional tests in the core repo. So either way, we're going to have—or we should have—a fail-fast mode, so it stops as soon as you get some test failure. I don't actually care what order anything runs in. I just want to make sure that we're being efficient with our budget on Azure. Basically, the key point was that PCT and ATH are not that efficient.

I think I did file one for core: for core, we don't run the ATH in parallel with unit tests now. But for the rest, maybe we can make it optional.

I'm just asking you all to be thoughtful about this—running everything under the sun in parallel, if there's no fail-fast, or if, like—

Yeah, right here, I know. Yeah.

—we're not made of money. Okay. We may need to spend some money, you know, paying people to spend time making some of these tests faster.

Right. Anyway, continuous delivery requires more resources from infrastructure. So yes, it's clear that for Jenkins, for Stapler, we will be consuming more resources running this thing.

Oleg, I understand that, and I'm fully expecting it. We're rapidly approaching $10,000 a month for just ci.jenkins.io, and I don't believe for a second that we're running things efficiently. So all I'm asking, when you and Raul and Jesse—and Baptiste to a certain extent—are going through this, is to be mindful of what test cases can run when, to get us results: not just results faster for us, but results with the most efficient use of our computing resources. Yeah, stuff we expect to fail due to actual mistakes.

Okay, noted.

But this looks really cool. This was, I think, what you had showed me last week, Oleg, and I didn't understand the pipeline you were sending me, but now I do.
Yeah, the main thing is that we do more integration, and we do it at runtime. Jesse has created the incrementals infrastructure, Custom WAR Packager is already integrated with it, and we will be able to run more and more integration tests as needed.

Yeah, by the way, Surefire has some features to do things like run recently failed tests sooner and so on. I don't think it would work all that well in this case, because we have so many different test suites with different infrastructures, but we might need to spend some time developing—or finding and integrating—some tools that try to predict the most likely points of failure based on which files are changed in a particular pull request. And yeah, as Baptiste says, statistics about things that failed in the past.

Yes, but as it is right now, it still wouldn't fail early, because it's not fail-fast as of today, right?

We're getting off topic for what I want to do in this meeting—we are solutionizing the cost-saving part already. All I'm asking is to think about it, and we can come up with ways to implement solutions outside of a meeting.

Yeah, just for the record, I have some concerns about the fail-fast thing that we can talk about later.

Okay, let's talk about that after this meeting, in the Gitter channel. So let's move forward. Jesse, I saw some messages about the Git plugin changes. It looks like we're about ready to wrap up the incrementals work. Is that correct?

I think so, yeah. If you click the update button, it should just work.

Are there any other things from infrastructure that we need, other than just a plugins update on ci.jenkins.io?

There's that PR on the permissions repo, I guess. It's loosely related to infra bits. The one from Jesse, updating everything in the permissions repo, which is blocking some things right now. Yeah, last I checked, that was still awaiting Daniel.

Daniel commented already, yeah.

Okay.
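The Surefire feature alluded to here is its `runOrder` setting; for example, `failedfirst` reruns previously failing tests before the rest. This is a standard Maven configuration fragment, shown as a sketch:

```xml
<!-- pom.xml fragment: run previously failed tests first -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- other runOrder values include: alphabetical, random, hourly, balanced -->
    <runOrder>failedfirst</runOrder>
  </configuration>
</plugin>
```

As noted in the discussion, this only reorders tests within one Maven module, so it would not help coordinate across the separate PCT, ATH, and core test suites.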
I was wondering if I should file a separate PR, which would conflict with yours, but at least get in some things like essentials and metrics—the really essential things. But yeah, we'll follow up on that. And I'm also drafting a blog post about the infrastructure side of this.

Cool.

I'll try to get that filed soon. Which—I mean, incrementals to me is a really big step forward. So what's the next big thing you're doing, Jesse?

Actually nothing; I'm supposed to be working on other stuff.

Oh no. Okay, that's a bummer. We'll miss you. And all your good code.

Well, the basics are pretty much working. I think the interesting things at this point are going to be around the stuff Raul was talking about, and figuring out exactly what Evergreen is publishing, when, and how we validate those things with tests and so on. That's sort of higher-level work.

What is—sorry, what? You cut off, I guess, or you're on mute. Your turn, buddy.

Okay. So last week I almost, I would say, finalized the service side of the error telemetry thing, and the JEP, I would say, is now ready. I'm going to file the PR so that people outside can look at it. This one is not yet filed as a PR because it was sent to the developers' mailing list for feedback, and now I've got feedback from you, Tyler, already. So we discussed quite a few things, and I think I addressed all of them. The only thing I didn't put in, with regard to our discussion, is the diagram, because I think we agreed that the HTTP verbs and the HTTP status codes I added in the proposal should be enough to understand everything: this is going to be just a push of the log, which may then be rejected for something like too-large data and so on.
So I've been spending the time finalizing that JEP and already fixing the prototype code in Evergreen to adapt to it. That's not wrapped up totally yet, because I also need to stop using the database, as is currently the case, and I need to adapt it to, for instance, reject data if it's more than one megabyte, as we agreed. For those who might have read it: earlier we wrote that we would reject things when they were above something like 10K characters, and now we said, okay, we are going to define some maximum, but a very, very high one, basically just to avoid attacks—so that pushing something like a one-terabyte or one-gigabyte log doesn't break everything in the backend. But that's mostly it. I hope to be able to really wrap that one up soonish—and even remove the "ish"—and switch to something else. And I think we need at some point to also meet to discuss—maybe that's kind of the point today anyway—what is left for me to have something usable, I would say.

You had discussed some of the fluentd work with Olivier. One of the big questions that I had about that—and I don't remember if we discussed this last week—is how developers like Jesse, for example, would get access to those error logs if we were passing them into fluentd. Have you given much thought to that?

Yes and no. We discussed that, but we didn't come to a solution yet, because it seemed a bit too early. For now we mostly wanted to have an understanding, from the Evergreen project side, of what was in place in the Jenkins infrastructure Kubernetes cluster with regard to raw logging. By the way, there are meeting notes merged into the Evergreen repo if you want to open them, but they're pretty short.
I'm not sure there was something about that in there, but basically, if I remember correctly, only Tyler—the guy on the left, for other people—the Jenkins infra account, and Olivier have access to the accounts there. So no, we didn't discuss a way to provide access to people outside. I think we somehow said that it should come later: to try and provide maybe some dedicated app or dedicated access to the Azure Log Analytics UI, or maybe some front end to protect things with regard to privacy. I think we have an epic about metrics and telemetry in general.

Baptiste, what I would like you to think about this week is how the people on this call—Jesse, Oleg and Raul, for example—will get access to these logs in the least-effort way possible, because we're not going to build an application just to pull logs back out; at least right now we have too many other things to do. But Jesse and Oleg, for example, need to have access to exceptions as we pull these through. The same with Carlos and some other people. So we have to find some easy way to get them out that's suitable.

Yeah, I just wrote it down, but I guess you will have to, as the product manager for Essentials, express what you want from the privacy perspective: should we just open it to some people we know and trust, or should it be openable to more people? And, for instance, would it be okay for those people to see some of the existing Jenkins infrastructure logs, or should they really see only, exclusively, the Essentials ones?

Only Essentials.

Okay, right. And just for your understanding, we have LDAP behind the scenes in our project infrastructure, and we also have the ability to define groups. We don't take very strong advantage of either of those things right now. Or we can put the logs somewhere else for the time being; I don't have a strong opinion.

Right.
To me, if we are not able to provide this error telemetry to Oleg, Jesse and Carlos, then it's almost as if we needn't have recorded it at all. If only you and I have it, we don't really achieve what we're trying to achieve with the visibility for developers.

I can send them by email. A weekly report.

Okay. Noted. So is that about it?

Yes. And we'll have to discuss what's next—I have some ideas, but also what you think should be next. I think we should meet, but I think you were planning to discuss how I can help on pull request 70 on Evergreen.

Right, so we might as well switch over to that. Most of the work I was doing last week was not Jenkins Essentials, unfortunately—I was standing and looking pretty at the CloudBees booth at Microsoft Build. In between that, I did get done a lot of the things I thought needed wrapping up, but this pull request has sort of gotten a little unwieldy due to my own time. There is the update service pushing out update levels; the client is storing them; the client is also sending them with its checks for updates, so it's only updating if there are updates available—which, with one hard-coded set of updates, means that after the first run of the client, it's done.

Baptiste, what I really need your help on: I've been on sort of low-bandwidth connections most of last week, and I haven't been able to really run these acceptance tests very well; I've had to rely a bit on Jenkins for that. When I run things manually, things work; when they run in CI, or if I run make check locally, it doesn't seem like all the updates are getting pulled down. The first thing that would be really helpful, to make this easier, is if for the integration tests we were recording—or saving—the logs that are coming off of the Docker container.
So if there's a failure, that log gets written out, because I don't have a strong understanding of what's actually failing here. That would be really helpful for me.

I did try your PR this morning and I see some things failing. So indeed we need to think, at least to triage between what should be working and what isn't supposed to be working yet. For instance, this morning I was seeing the client unable to connect because the server would start just after it—just a bit too late—and the client would never try to reconnect, so everything would be failing. So I'm not sure what I ought to be fixing, or whether this is just supposed to work and maybe, for race-condition reasons, it happens to work on your machine. That's why I wanted to check.

So I think the ordering is "correct"—I'm doing air quotes, but you can't see, because I'm sharing my screen. It's correct, but the server is not coming up fast enough; that's certainly a problem. I expect at this point that these tests should be working, because I've manually validated running the server and the client.

Yep. I do know that some tests are failing because of an issue I reported yesterday, which should have been fixed in the meantime—there was something failing on the Configuration as Code plugin, and it's been fixed since. And also, I've checked, but I think in your case it's already not an issue anymore: we shouldn't need to build anything anymore, because everything is released, or something. And soonish, by the way—as soon as the PR we were talking about earlier, the one from Jesse, lands—we should be able to just use the incremental version releases for metrics, for essentials, for whatever plugin we want to use, and I guess it would be slightly simpler.

So why don't I—so we can work effectively on this—
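The log-capture request could look something like this in the shunit-style shell harness. This is a minimal sketch with placeholder container and command names, not the actual Evergreen test code:

```shell
#!/bin/sh
# Sketch: run a test command and, if it fails, save the named Docker
# container's logs before propagating the failure. Names are placeholders.
run_with_logs() {
  container="$1"; shift
  if "$@"; then
    return 0
  fi
  mkdir -p logs
  # Keep going even if the container is already gone (or docker is absent).
  docker logs "$container" > "logs/${container}.log" 2>&1 || true
  echo "Test failed; logs saved to logs/${container}.log" >&2
  return 1
}
```

A harness would invoke something like `run_with_logs evergreen-client make check`, so a red CI build always leaves the container log behind as an inspectable artifact.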
Why don't I target this branch of mine to a new topic branch on this repository, and then we can open up pull requests and triage and deal with the little issues with this pull request separately. And we can—

I'm not sure—no, I'm sorry, I'm not sure I grok everything you meant.

So instead of targeting master right now, target a different branch on this repository that we both can work against. Like, if you update the Configuration as Code plugin version, for example, just send a pull request to that branch. I'll close this pull request out and we'll just create a topic branch that's not master, so we can land some of these changes until that branch is passing.

You mean the master branch—because you think right now the master branch is not passing anymore?

Yeah, I think that's the case. But I really was in need of help, and—indeed, I don't want to create too many conflicts, because you've already removed everything with regard to creating local versions of plugins, so I would create huge conflicts if I did it that way. But yeah.

I'll chat with you more after the meeting. I think there are three or four things we've talked about that need to be addressed; we'll just assign them to each other and then we can move forward. Let's not bother everybody with our work.

Yeah, little concerns.

Once we have this pulled down—merged—these changes do provide the basics of the full flow. What I've noticed while developing this—and Raul, depending on your time, I would really appreciate your help—is that this is a fairly complex system, because we've got to run a couple of containers and orchestrate tests and everything like that. I'd like to start figuring out a better way to run our acceptance tests. Right now the shunit stuff that Baptiste originally created works okay.
But I think there's a bit more we need to do on reliability testing—like, say, if the server is offline (the issue that you ran into, Baptiste), verifying that the client properly retries and reconnects once the server comes back. And defining those test cases is not something I feel very good about. So Raul, if you have time, either this week or at some point in the future, to help us with that—or at least point us in the right direction—that would be really great.

Absolutely, no problem.

Oh my, look at the time, we've gone quite long. So the other thing that I know is sort of at the top of my list—it's not literally at the top of my list... oh no, there it is: this test that Jesse asked me for, 51250. There seems to be—and this is me segueing into the Bill of Materials discussion—the format of stuff we need to ingest for the update service looks a lot like update-center.json: for example, I just need a list of URLs and artifact IDs, that sort of thing. So I was going to sketch out a file that would make sense for Jesse's tool to output. As far as I can tell, Jesse, that's the only thing I'm blocking on in the Bill of Materials discussion right now. Is that correct or incorrect?

I guess—I'm not really sure who's driving that discussion at this point.

We all are, Jesse, we all are.

Yeah, I mean, this is pretty critical. We need to have some sort of concrete example of how that metadata—to be determined—is fed into Evergreen and consumed.

Right. And there was this other ticket, 5037; I might as well just make this one depend on the other.

I guess, if I understand correctly, your PR 70 is still basically just hard-coding stuff as a stub?

Correct, correct. This is something where I would expect the admin—or an automated process standing in for an administrator—would basically...

Who's the—what admin?
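To make the ingest-format point concrete, here is the kind of file being proposed—a purely hypothetical sketch modeled on update-center.json's plugin entries. Every field name and value is illustrative, not the agreed format:

```json
{
  "updateLevel": 2,
  "plugins": [
    {
      "artifactId": "git",
      "version": "3.9.0",
      "url": "https://updates.jenkins.io/download/plugins/git/3.9.0/git.hpi",
      "checksum": "sha256:..."
    }
  ]
}
```

The idea is that the Bill of Materials tooling would emit something in this shape, and the update service would turn it into the update levels that PR 70 currently hard-codes.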
This would be a Jenkins Essentials admin.

Why—what would you be posting to?

The Evergreen service. The Evergreen service has to know about this change in the file, in some way.

Yeah, I mean, I would expect that to be a Git change, you know?

Well, I think you're thinking about one part of this and I'm thinking about the other. If a change gets made in Git, that's fine and that's expected, but at some point that data needs to get sent—or evergreen.jenkins.io has to be notified that there's a new version—in order for it to generate the right information. That's all this is.

Okay, well, I guess if you replace the word admin with bot, then I would agree.

The headless... bear. Throw me a robot. Yeah, it's going to take your job.

Okay, we are seven minutes late; we are bothering people. So is the number of attendees outside of us non-zero today?

Yes, it is.

But I'm okay bothering people. Wow. Especially you, Baptiste. So for the Bill of Materials discussion, it sounds like maybe Jesse, you, Carlos, Oleg and I should have a follow-up discussion this week.

And Raul.

And Raul, okay. I hope Carlos is done with all his confs—KubeCon seems to have been a big one, and then there was another one. So we need Jesse, Raul, Oleg and Carlos. Okay, are there any other items we should go through or discuss before we let Baptiste get back to his coffee? We let you wake up fully. All right, thanks all for joining. I'm going to go ahead and stop the broadcast, and then we can discuss some of these follow-up things in the Gitter channel. I'll see y'all later. Bye bye.