Welcome, both of you. So this should be fairly straightforward. I know because of Memorial Day last week, we didn't get a chance to have a meeting, so I just wanted to catch up on where some things were from both of y'all's perspective, and I can share some of the stuff that I've got as well. Since we didn't get much of a chance to talk last week, how's the Docker stuff going?

Yes, so I just posted a status on the Gitter channel a few seconds ago, if you can just scroll up. Yes, that's the second one. Anyway, I'm just going to summarize it. Basically, the PR works, but in a non-conditional way. I have something that is almost working locally. I need to finalize the testing and so on, because I started using a Docker Compose override file, basically trying to make it conditional on whether the Docker socket is mounted or not, so that we can test both cases. I'm almost done with it, so I have a test that will also be enabled only when we are in that situation, because, for instance, docker pull or docker run wouldn't work when the socket is not mounted. Then I want to finalize the Jenkinsfile so that we use the simpler way, via make, and we can run both combinations in CI constantly. It also triggered a lot of thoughts, because maybe the Docker case is even more convoluted than the upcoming ones like AWS, since I needed to somehow expose the Docker socket to user space, and Jenkins is not running as root, which made it more complex than it would otherwise have been, because I need to control something like this, which is, you know, system level. I think we won't have to do that for automatically configuring things like AWS or Azure or DigitalOcean or whatever, because we will just need some kind of key.
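The conditional Compose setup described above could be sketched roughly like this. This is a minimal, hedged sketch: the override file name is hypothetical, and the real repository layout may differ.

```shell
# Pick the set of Compose files depending on whether the Docker socket
# is available. The override file would be the one that bind-mounts the
# socket into the container (file name is illustrative).
compose_files() {
  sock="${1:-/var/run/docker.sock}"
  if [ -S "$sock" ]; then
    # Socket present: include the override that mounts it
    echo "-f docker-compose.yml -f docker-compose.docker.yml"
  else
    # Socket absent: base configuration only
    echo "-f docker-compose.yml"
  fi
}

compose_files
# Typical use: docker-compose $(compose_files) up --build
```

A CI job could run both combinations by invoking the stack once with the socket mounted and once without, which matches the "test both cases" goal mentioned above.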
I'm actually somewhat fine with us saying, you know, if you want this to work correctly, some user has to have access to the Docker socket or something like that; we can push the prerequisites to the user here.

I mean, yes. So, about the alternative to what I've been doing: I've been using socat to create a somewhat hackish solution by forwarding the Unix socket to a locally exposed TCP port. The alternative, and I'm not sure it's acceptable, which is why I ended up doing that, is to ask users to basically do a chmod a+rw on /var/run/docker.sock, which I'm not sure is going to be a very successful approach, because maybe people will be afraid. So that's it. I'm not sure what you think about that, but...

Is this written down, with some of the pros and cons?

Not per se as a JEP, but I've put some comments here and there in the code, and I definitely intended to probably start a JEP at some point. But I'm not sure yet; I don't know what you people think here. Either I start a JEP describing the specific case of Docker, or I wait a bit more, implement the AWS case, and then try to write some generic JEP explaining how we would automatically configure things like clouds. So there's, I guess, maybe both, I'm not sure. I definitely think the latter is more important, sort of describing how we auto-configure the clouds.

I'm sorry, Tyler, I really have a hard time understanding what you're saying, so I'm not sure about the others. The sound is really weird. Sorry, carry on.

For everybody, or only for Tyler? For me as well. What's happening here? Can you hear me? Yes, we can. Tyler, can you say again what you just said, slightly slower maybe?

I would suggest doing a JEP for the cloud configuration in general. Yes.
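The socat workaround mentioned above could look roughly like the following. This is a hedged sketch only: the port number and paths are illustrative, and running it has real security implications (it exposes the Docker API unauthenticated on that port).

```shell
# Forward the Docker Unix socket to a local TCP port so that a non-root
# process (e.g. the Jenkins user) can reach the Docker daemon without
# having read/write permission on the socket file itself.
DOCKER_SOCK="${DOCKER_SOCK:-/var/run/docker.sock}"
PORT="${PORT:-2375}"
SOCAT_CMD="socat TCP-LISTEN:${PORT},bind=127.0.0.1,reuseaddr,fork UNIX-CONNECT:${DOCKER_SOCK}"
echo "$SOCAT_CMD"

# Actually run it only when the socket exists, then point clients at it:
#   $SOCAT_CMD &
#   export DOCKER_HOST=tcp://127.0.0.1:${PORT}
#
# The blunter alternative discussed above (world read/write on the socket):
#   chmod a+rw "$DOCKER_SOCK"
```

Either way, the prerequisite pushed to the user is the same: some identity on the host must be allowed to talk to the Docker daemon.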
And just for Docker in particular, I think you should write down in the pull request or in JIRA some of the comparisons, some of the ways that we could configure this. I just want to make sure we understand what the pros and cons are from the user perspective. I don't think we need a JEP for Docker in particular. When I originally put this Docker configuration in for milestone one, it was really to make sure that we had some sort of example. It's not the destination as far as I'm concerned. I thought it was going to be the easier first one to take.

Actually, I thought the same, so I understand where you're coming from. But in the end, I ended up realizing it might be the worst case, the most complex one to set up. Because of the user we are running as, the socket is not accessible, and so on. It kind of made the whole thing more complex.

My hesitation is about the "automatically configure" part. To me, it feels like what we should be providing is a package of Essentials, and if the environment you're installing into is defined as "there is a Docker host, but there is not a public named cloud service" or something like that, then we need to be providing a package in some format, whether that's a Compose file or what have you, that includes all of the configuration of agents as part of the package, rather than necessarily as part of the plugin.

Yeah, I see what you mean. So in our case, right now I'm leaning towards configuring things in a somewhat runtime way, and you're saying we should rather publish many images and do this in a kind of static way. Is that what you're saying?

Yeah, I mean, there might need to be some runtime code to handle the trigger, to handle the actual configuration perhaps.
I'm just saying that the trigger for this, and the actual source of information, should be something static in the image, however you want to define that, that we ship.

Yeah, in my mind, I think that makes a lot of sense. And it makes it slightly more Docker-compliant, in the sense that with Docker we should not be trying to do too many things at runtime, but rather bake things in upstream; it would make sense in that respect.

Yeah, for example, there's some code in the Docker Workflow plugin that does try to auto-detect the host environment, and depending on what it detects, it automatically switches the behavior of the with-container step to use the --volumes-from option to Docker instead of a plain volume mount, to try to share volumes. But it's really tricky. It has to do a lot of magical stuff where it's inspecting the proc filesystem, and we've had like 13 pull requests to change that behavior to account for subtle changes in Docker, etc. It's just...

Yeah, I like that. I like that. So I'm going to try to see how we can adjust the work, or maybe just push what I have and then iterate again, to kind of keep the history for future archaeology. But yes, thanks for the feedback. It's interesting.

And for other things: maybe you can jump to Olivier's topic, Tyler, because I believe Olivier has a tight agenda, so he should be free to go as soon as possible. I'm going to introduce the topic. Yeah, right. Sorry. So, because we are now starting to have the telemetry logging working, I mean, from the code side, nothing is deployed yet, we will soon have a need to actually make it useful for developers. We want to provide access for a handful of selected developers to actually use that data. And we don't really have anything yet for that.
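As an aside on the /proc-inspection heuristics mentioned above: the kind of fragile container detection being described could be sketched roughly as below. This is a hedged illustration of the general technique, not the actual Docker Workflow plugin code; as noted, real detection has needed many revisions to track Docker changes.

```shell
# Guess whether we are running inside a Docker/containerd container by
# grepping the cgroup file of PID 1. This heuristic is known to be
# fragile across container runtime versions and cgroup v2 layouts.
in_docker_container() {
  cgroup_file="${1:-/proc/1/cgroup}"
  [ -f "$cgroup_file" ] && grep -qE '(docker|containerd)' "$cgroup_file"
}

if in_docker_container; then
  echo "looks like we are inside a container"
else
  echo "no container detected (or detection failed)"
fi
```

The fragility of exactly this sort of check is the argument made above for preferring static, baked-in configuration over runtime auto-detection.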
So we are trying to see how we could use the existing setup in the Jenkins project Kubernetes cluster in production, to see how we could push logs there in some way and make only the logs from Essentials accessible to plugin developers, some plugin developers, and not everything from the Jenkins infrastructure. That's what Olivier has been having a look at.

So last week, I had a look at that, and there is no easy way to separate out the logs of the Jenkins infrastructure for Evergreen. But what could easily be done is to deploy a new fluentd next to your service and just send the logs to Log Analytics. It can be quite easy to deploy a new Log Analytics workspace and send the logs there. We need to agree on the schema that you want to send to Log Analytics. So, yeah, I think there is no big issue in doing that, but I wouldn't reuse what we already did for the Jenkins infrastructure project. And another thing that I forgot to mention is that there is no easy way to integrate that with our current setup. What we can do at the moment is add Microsoft accounts as guest users on the dashboard, so we can provide read-only access to that dashboard. So that's it for me.

I want to step back from this approach in particular. Did you evaluate using any external service like Sentry or Bugsnag or anything else, instead of trying to look at these as actual logs, and instead look at these as errors?

For now, from what I understood, we thought it would be needed to try to integrate with the current setup and keep things simple. But if you think it makes sense to basically send logs elsewhere, I can look into that.

What I'm saying is, it sounds like it's not that simple.

Yes, exactly. Initially our hope was that we would be able to send things, with just a specific tag, to the existing Azure Log Analytics, but that seems not to be the case. So now we have that information.
We can try looking for alternatives, like sending it to existing SaaS solutions, or I guess it's the... Is there any budget for that? I mean, what do we need to do to use a SaaS service? What would be the approach?

What has worked fairly successfully in the past is to find the service we like and then ask them for access, because that's typically how we've gotten it. I mean, that's how we got PagerDuty and Datadog integrated. There are a lot of people that are willing to give us a very cheap or free account.

Because in that case, Datadog also provides a way to analyze logs and create dashboards based on your logs. So this is something that we can request?

Yes, definitely. I mean, yeah.

I'm not sure what you exactly mean, Tyler: whether we would just create a freemium account and use it for testing, or whether we would then maybe ask Datadog if they are ready to sponsor the Jenkins project with something non-trivial, a bit more...

So we already used Datadog, but what I was thinking of, when I first sketched out this telemetry service, was using Sentry.

Yeah, or so because... Sorry, go ahead.

There are services which are much more specialized for handling exceptions and errors. So if we stop calling them logs for a second and call them errors, and you search for services that handle errors, there are a number of them which might be better suited to our needs. If it's not easy to integrate with our existing infrastructure, it would be worth spending an hour evaluating a couple of them.

Okay, I will do that then. And, for what it's worth, we already discussed Datadog a bit with Olivier, and he was basically saying that there's nothing... I mean, it's similar to the other existing community solution: Datadog would require creating a kind of new account, because there's no way in the analysis to segregate data depending on who's logged in.
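To make the "call them errors, not logs" idea concrete, here is a hedged sketch of the rough shape of an error event one might send to an error-tracking service such as Sentry. The endpoint, project id, and key below are placeholders; in practice a language SDK would construct and send this for you.

```shell
# Build a minimal error-event payload. The message and the fields shown
# are illustrative; real services expect richer, service-specific schemas.
MESSAGE="ExceptionInInitializerError in the Evergreen client"
PAYLOAD=$(printf '{"message":"%s","level":"error","platform":"other"}' "$MESSAGE")
echo "$PAYLOAD"

# Illustrative submission only (do not run as-is; the DSN values are fake):
#   curl -sf "https://sentry.example.com/api/42/store/" \
#        -H "X-Sentry-Auth: Sentry sentry_version=7, sentry_key=PUBLICKEY" \
#        -H "Content-Type: application/json" \
#        --data "$PAYLOAD"
```

The point of the comparison above is that an error-tracking service deduplicates and groups events like this, which a raw log pipeline such as Log Analytics would not do out of the box.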
At least I don't think I need Olivier for more things. Sorry? Do you need Olivier for more things? No, I don't think so, because if this is now about testing solutions, I just need to create an account and so on. I need Olivier for the existing access to the infrastructure, but I don't think I really need to bother him if it means testing Datadog or, more realistically, testing Sentry. Yeah. Okay. Au revoir, Olivier. Thank you for dropping by. Thank you, Olivier.

What else do we have? But at least there is that kind of solution. So, there was the quality bar analysis ongoing. I guess you'll want to have a look at some point with your kind of PM hat on, Tyler, to see whether there's nothing obvious missing that would be a real showstopper at this stage, because then we can further the analysis, like detailing, for each P1 case, what the risks associated with it are, and so on.

Is that something you need me to get to this week? Sorry, I'm not sure I got that. Is this a thing I should be looking at this week?

It depends. I mean, I'm not blocked anyway; I can do everything else. It depends on your schedule and your level of activity. I mean, I guess at some point we will want to have a look at that and say, yes, milestone one is done for real, and we have tested everything and thought about the risky aspects, but I can do everything else in the meantime. Everything else can be done, and I'm not blocked on this, per se.

Jesse, do you have things that you've been working on related to Jenkins Essentials, or are you just sort of floating around? No, nothing in that area.

Okay. I know we discussed a little bit last week this ingest YAML file. If I can find it. There we go.
And I know we chatted a little bit about a tool that would generate this from some bill of materials. As I think I mentioned last week in the Gitter channel, I don't really need any tool to generate this right now. I can create a hard-coded version that we can work with, just to move forward.

Yeah, I mean, once we have stuff running and are actually actively pushing core and plugin changes into this, then it'll become more urgent to have that running for us. Right. And writing a tool to do that is probably very little work; it's just that there's no defined source of input for it yet. Okay.

From my side of things, you may have seen some of my work on getting the Essentials ingest YAML into the update service. I started reworking and adding to JEP-307 to accommodate how the update levels are created inside of the update service, on Thursday and Friday of last week. I anticipate that that's what I'm going to be spending time on this week. What I don't have a good sense of is whether I should prioritize this, the deployment of the backend services, or whether I should spend some time on these other tasks. I don't know how useful it is for us to have a production environment right now.

That's a good question. Because for the upcoming talks I have, I was wondering if it would make sense to have some, you know, demoable environment: only starting the client, so docker run something, which would automatically connect to evergreen.jenkins.io, or evergreen.beta if you want to make it clear.

Your talk is next week, isn't it? Yes, the actual talk, but I have practice talks before. That doesn't matter, because the other one I'm going to do locally or something, but it could make sense. I'm not sure. I'm saying it's on the 13th, if you're looking for the exact number of hours we have left. Yeah, I've got to start the countdown timer. Okay, I forgot.
If you want to have something where you can talk with people at EclipseCon, one approach to a prototype is getting evergreen.jenkins.io deployed, so that one could actually just pull the image and at least get the initial version of the stuff that's hard-coded. I think that would be cool.

The reason why I was really definitive in my answer a few seconds ago is that anyway, to have a really full pre-alpha demo, I would say what would be awesome is to have the service deployed and a way to update the update level, so that I can actually trigger an update. But we are still a bit far from that. Thank you very much.

The work that I'm doing right now, I think, would allow you to run that through an update cycle locally. Exactly. So that should be done this week for sure. That would be great. Even with duct tape here and there, I mean, I can explain that we are still at an early stage, but that's very okay. But yeah, if I have something to demonstrate, that would be great. I'm sure you've got a duct-taped version of this for EclipseCon. Conference-driven development, as always.

So outside of that, in theory, a couple of weeks after EclipseCon, CloudBees has hired another developer who would be able to work on this with us. So finally we might have somebody else besides Baptiste and Jesse on this call. But it's looking like she's going to start after EclipseCon, so you won't get any of that benefit for EclipseCon, Baptiste. That's okay. I'm playing with duct tape for a week, so that's okay.

So outside of that, I don't know if there's anything else that we need to discuss. It sounds like we're not blocking each other right now, but maybe towards the end of the week we should talk about error-tracking services, Baptiste. Sorry, can you please repeat? Towards the end of the week, I'm going to jump in and hang out and talk about what you've learned about error-tracking services. Yes. Okay.
Well, if that's it for the day, then I'll see you all later. Bye-bye. See you.