Hello everyone. Welcome to the Jenkins Infrastructure weekly SIG meeting. We are the 23rd of May 2023. Today around the table we have myself, Damien Duportal, as usual, Hervé Le Meur, Mark Waite, Stéphane Merle, Bruno Verachten, and Hugo. Welcome.

So, announcements. As far as I understand, today's core weekly release has been delayed, but it's fixed and the job is running. Is that what you are saying? That is, yes. That was what Hervé was reinforcing, based on a correction we had to make earlier today. It should be okay later today, so we'll keep watching the release and package jobs, and then we will defer to the documentation team for the changelog. That should be you, and eventually Alex. It should be okay: no infrastructure action needed. Is that okay? That's my assumption as well. That's what I understand: the infra is working, there's no breakage on our side. The break was actually in the source code, and that break is fixed. Cool, happy to hear. So that means, Stéphane, you should be able to update infra.ci and weekly.ci tomorrow; no rush on doing it today due to that delay. Do you have other announcements, folks? Okay.

Now, the upcoming calendar. The next weekly will be on 30 May 2023. That should be 2.407 if I'm not mistaken. 401? Oh no, 2.407 is the weekly, you're correct. And we will have an LTS, I already forgot... 2.401.1. Nice. Would it be 31 May? Yeah, 31 May. Okay. Alex is the release lead, is that correct? That is correct. Not my fault! Release lead... that just reminds me: we finished the calendar, and Mark, we have to talk about tagging the weekly and permissions. Thank you. Yes. And I have to leave in 21 minutes, so Damien, we may have to put that relatively early on the list. Yeah, that will be the first item then. Last week, on the 16th, we had a security advisory; we already talked about that. And we don't have other announcements on the mailing list. Next major events: none that I'm aware of. Is there any major event you are aware of, folks? No. Okay.

So let's get started with that item. As part of work done in the SIG Platform meeting, but with an impact on this team, we changed the way the official Docker images of the Jenkins controller are built. Until ten days ago we had a script, run on each push to master on our private trusted infrastructure, that was in charge, for the Linux images, of checking the two latest versions of the weekly line and the two latest versions of the LTS line, and verifying on Docker Hub that every definition of those images (the different operating systems, CPUs, and tags) was published. If one was not, the job failed and tried to republish the image. On paper that looks really good, and it has somehow worked for the past years. The issue we saw over the past year is that sometimes, when we introduce a new platform (a new operating system, a major operating system change, a new CPU architecture), the two past releases get overridden: rebuilt and changed, which changes the checksums for end users. That became less and less acceptable. So we changed it recently: now we need to create a tag. The tag is the Jenkins version you want to build the Docker image for. Every five minutes the job watches for new tags, and when it detects one, it builds only the images for that tagged version. The advantage is that we only build the new version.
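For illustration, a minimal sketch of that tag-watching logic in Python (hypothetical function names and a simplified publication check, not the actual job; the repository and image names are assumptions):

```python
import subprocess
import urllib.error
import urllib.request

# Assumptions: the image definitions live in a git repository tagged with
# Jenkins versions, and the published image is jenkins/jenkins on Docker Hub.
REPO = "https://github.com/jenkinsci/docker.git"
HUB_TAG_URL = "https://hub.docker.com/v2/repositories/jenkins/jenkins/tags/{tag}"

def remote_tags(repo: str) -> set[str]:
    """List the tag names of a remote git repository."""
    out = subprocess.run(
        ["git", "ls-remote", "--tags", "--refs", repo],
        check=True, capture_output=True, text=True,
    ).stdout
    return {line.rsplit("refs/tags/", 1)[1] for line in out.splitlines() if "refs/tags/" in line}

def already_published(tag: str) -> bool:
    """True if Docker Hub already serves an image for this version."""
    try:
        urllib.request.urlopen(HUB_TAG_URL.format(tag=tag))
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

def build_and_publish(tag: str) -> None:
    # Placeholder: check out the tag and build/push only that version, so
    # previously published versions are never rebuilt or overridden.
    print(f"building images for Jenkins {tag}")

# The real job is scheduled every five minutes by Jenkins itself.
for tag in sorted(remote_tags(REPO)):
    if not already_published(tag):
        build_and_publish(tag)
```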
We don't have to fear overriding existing images when adding new platforms, so we can deliver much more often. That creates new questions, which will be for the SIG Platform meeting later today: do we now need LTS and weekly branches for the Docker image repository, or master plus an LTS branch? We probably should, but that's the new question; now we have the foundation. For us, the infrastructure team, it means we need to set up the permissions correctly. The request from Alex was: should we add the members of the release team as maintainers of the official Docker image, so they are able to create the tags? That's the question. And if the answer is no, it means we need to build an automation that avoids those members needing the permission: an automation in charge of creating the tag whenever there is a core release. That could be a solution, because the core release system has a token with the permission to create tags, so maybe we could avoid the broad grant. Until then, we have to either build that automation or decide whether it's okay to have a few members as maintainers. An intermediate step could be a selected member such as Alex Brandes. My proposal is that we use that intermediate step and add a few selected people. Tim Jacomb is already able to do it; Mark and I are maintainers, so we have the permission, and we could eventually add infrastructure team members, but I think the three of us are enough. I propose that we only add Alex Brandes, nominatively, until we settle this. Or we accept that every Jenkins release team member is also a co-maintainer of the Docker image. That's the balance to find. So, tell me about the risk that you see there; I'm not sure. Yeah. So we've already got maintainers on the container image, but they're not necessarily release leads, and the idea was: should we add the release leads to the list of people who are allowed to maintain the controller container image? Exactly. The risk is that in order to create a tag, you need a permission that allows you to push code, not only open pull requests: you need write access. We can eventually protect the master branch, but there is still a permission risk in that area, compared to creating the tag automatically as part of the release process. Got it. So the other alternative might be, some future day, to allow the core release process to push a tag to a repository it currently has no permission to write to, whereas it does have permission to write to Jenkins core. Yes. We should be able to grant it permission to write to that repository as well, because automating that part, instead of relying on a human, would avoid a lot of mistakes.
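As a rough illustration of that automation, here is a sketch using the public GitHub REST API (the repository, token variable, and version are placeholders, and this is not the actual release tooling):

```python
import json
import os
import urllib.request

# Hypothetical: push a git tag to jenkinsci/docker when a core release
# happens, using a token the release environment already holds.
REPO = "jenkinsci/docker"
TOKEN = os.environ["GITHUB_TOKEN"]  # token with write access to the repository

def api(path: str, payload: dict | None = None) -> dict:
    """GET (no payload) or POST (with payload) a repository API endpoint."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{REPO}/{path}",
        data=json.dumps(payload).encode() if payload else None,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def tag_release(version: str) -> None:
    """Create the tag <version> pointing at the current tip of master."""
    head = api("git/ref/heads/master")["object"]["sha"]
    api("git/refs", {"ref": f"refs/tags/{version}", "sha": head})

tag_release("2.401.1")  # e.g. right after the corresponding core release
```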
Okay. Now, going even further, we could consider converging the controller container image into the Jenkins core release itself. But then, I guess, the problem is that it locks the container image maintainers into also being core maintainers, and that's probably not healthy. Exactly, that would be a bad thing. And that's also a second topic I personally want to bring and push forward at the SIG Platform meeting: stop using the exact same version string for Jenkins core and the Jenkins image. We need a way to say that this version of the Jenkins image ships that version of Jenkins, but with something like a suffix, like distribution package builds do (for example 2.401.1-1, then 2.401.1-2 for a rebuild of the same core version), because Jenkins core is a dependency of the image; it is not the image. Right, I see your point. And certainly other package-delivery projects are doing something similar: many of the operating system container images have a dash suffix that says "this is version such-and-such, but it's this iteration of it". Thanks.

So why does this concern the infrastructure team? If it's okay, I will open an additional task describing it, because we have a long-view action item, which is automating the creation of the image as part of the core release. I propose that we, the infrastructure team, start on that automation in the upcoming milestone, because it belongs to the release scripts and processes, and we are the people in charge of the credentials running on release.ci; the pipelines and scripts should be able to create the tag on jenkinsci/docker. So: proposal to add Alex, to make the upcoming LTS release safe; proposal to automate the process in the future to avoid these permission issues; issue to open. Is there any volunteer to work on that automation part? No? Okay, I will open the issue anyway; if anyone reads it and changes their mind, everyone is welcome to help on that topic. Let's move on to the tasks we were able to finish last week, unless anyone has a comment, objection, or question on the topic we are about to close. One, two, three... Okay.

Now, what are the tasks we were able to close during the past milestone? Thanks, Stéphane, for the DigitalOcean leftover: we had a leftover persistent volume, and since we don't have a garbage collector on DigitalOcean, it stuck around; that doesn't happen often, so that's okay. Confirmed that the Azure budget should stay under the 10K forecasted for this month. One of the main efforts was on the virtual machine agents of ci.jenkins.io: the change of instance types allowed us to drop creating an SSD for each machine, the new instance types have the same capacity with far more powerful CPUs, and we enabled spot instances, which decreased the price per hour by a factor of 10. All of that led to a drastic decrease in cost, which is really interesting. Now we need to check with the developers of the acceptance test harness and of core whether they are seeing many builds slowed down, which could be caused by the spot instances. We checked that we have a retry mechanism, thanks to Basil, that detects an agent failure and retries; but I saw failures on ci.jenkins.io during the past week that might or might not be related, so better to ask the contributors themselves. If it becomes a problem, we might have to define two kinds of instances for these jobs: the default high-memory instances on spot, and a "highmem-critical" kind where you explicitly request the critical label in your pipeline, with a close review of whether the people using it really need it. Also, there is some work, which we'll discuss a bit later, started by Hervé and Stéphane, to begin using virtual machines on DigitalOcean. That could be the solution: we might use only low-cost spot instances on Azure, and on DigitalOcean use higher-cost instances that won't fail during a build. That could also be a way to extend and spread the load. But right now the cost has decreased, so the issue is closed.
Launchable. Hervé, can you give us a quick report? So, with the help of Basil, Launchable is now available and properly installed on the images we are using for the Windows virtual machine agents of ci.jenkins.io, so we don't need to call python.exe to run the tool anymore. And Basil tuned up the pipelines where it was used, that is, wherever the pipeline library we had put in place before was used: in core, ATH, and several other repositories. So we don't have any of those bits around anymore; it's called directly as a shell command. Cool. So the initial discovery phase is finished, the second phase of setting up the tooling has been done, and now we have finished the third phase of optimization, and it's usable. And as far as I can tell, there is a lot of hidden work by Hervé on automating the updates of Launchable, to keep it up to date on all of our assets, which should allow us to keep up with the new changes shipped by Launchable. Nice job.

So now let's move on to the work in progress. We have a lot of long-running tasks that span multiple milestones; let me check the notes. First: add Azure ARM64 virtual machines for infra.ci. Stéphane, what is the status on infra.ci? I think I just forgot to close this one, because I did manage to add the condition in updatecli to make sure the image version in the Azure gallery is available before launching the update. I finished this morning, so I think I just forgot to close the issue. Okay, I will add one last condition before closing: since you removed any usage of AWS on infra.ci, we have to check that we don't have any AWS credentials left on infra.ci, just to be sure we don't need them, and that also means removing the EC2 plugin and its associated plugins from the controller. Okay. So removing the EC2 plugin really means we won't use EC2 virtual machines, and if we need them, we will put it back. Exactly, but only on infra.ci. Is that okay for you? Yes. Okay. So I propose we keep that issue on the milestone and close it once there is no reference to the credentials or the plugin left in infra.ci itself. Is that okay for you? Perfect, thank you. Thanks. I checked the AWS billing and we saw a difference. It's minor compared to what the BOM builds generate, but still visible: a good impact on the AWS bill. Oh, by the way, the Launchable work done by Basil allowed him to contribute to the BOM a few weeks ago, so it also had an indirect positive impact on the AWS billing.

Next topic: upgrade to Kubernetes 1.25. Our systems now use kubectl 1.25; our clusters are still on 1.24. That update is required on AKS to be able to use Ubuntu 22.04 node pools. Right now I'm checking the deprecated directives that we should update in our Helm charts, and once that is done, it will be changelog reading for each of the providers.
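One way to hunt for those deprecated directives, sketched here under the assumption that the charts have been rendered to plain manifests (for example with `helm template`) into a local directory whose name is a placeholder, is to scan the rendered YAML for apiVersions that Kubernetes 1.25 removed:

```python
import pathlib

# apiVersions removed in Kubernetes 1.25 (PodSecurityPolicy, the beta CronJob,
# HorizontalPodAutoscaler, and PodDisruptionBudget APIs, among others).
REMOVED_IN_1_25 = (
    "policy/v1beta1",       # PodSecurityPolicy, PodDisruptionBudget
    "batch/v1beta1",        # CronJob
    "autoscaling/v2beta1",  # HorizontalPodAutoscaler
)

MANIFESTS = pathlib.Path("rendered-charts")  # placeholder: `helm template` output

for manifest in sorted(MANIFESTS.rglob("*.yaml")):
    for lineno, line in enumerate(manifest.read_text().splitlines(), start=1):
        if line.strip().startswith("apiVersion:"):
            version = line.split(":", 1)[1].strip()
            if version in REMOVED_IN_1_25:
                print(f"{manifest}:{lineno}: {version} is removed in 1.25")
```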
I'm proposing the following plan, folks, and I'm taking that issue. As I said last week, I want to start updating DigitalOcean as soon as possible: it's only used by ci.jenkins.io for plugins, and updates usually go really smoothly on DigitalOcean because we don't have a lot of complex things there. Then I want to continue with AWS, which is a bit more sensitive but still only used by ci.jenkins.io, so the scope of impact if it fails is only ci.jenkins.io. Then we'll have to work on Azure; but since we have the migration from prodpublick8s to publick8s in flight, that one might be blocked before we can update to the new Kubernetes version. So I propose that for the upcoming milestone I only target DigitalOcean for sure, eventually AWS, and we'll do a status report next week. Is that okay for all of you? Did I miss something? Do you have other proposals, ideas, objections? Okay. Eventually, I know AKS is the most important one, but that's also the one that requires the most care, so I would prefer finishing the migration first, to have fewer clusters to upgrade. And if I'm not mistaken, there are two clusters on DigitalOcean. Yes, you're correct.

publick8s migration for AKS. So, the migration to publick8s: Hervé, you handed it over to me. Can you report on what you did during the past milestone, since last week? Your mic is off. Since last week, I tested the Redis connection from the new cluster, and I started the creation of the PostgreSQL server in another place, so we won't be affected by the overlapping IP issue. And I flipped the DNS records to the new Redis. You had already migrated some of the services: the wiki, since last Tuesday. The wiki... it's been more than one week. Sure. Yeah, we migrated them on Friday. True, my bad. It was mostly preparation work. Okay, preparation work. And you handed it over because you have a short week, with some days off around the long weekend; that's why I'm taking over on this one. One of the elements you identified earlier today and shared with me, Hervé, which I wasn't aware of: we still have one stateless application left, the incrementals publisher. That one should be easy to migrate. Is that okay? Yes. Stateless, easy to migrate: to do. And then, on my side, the goal is migrating Keycloak and testing it against the managed database.

Just a word about the PostgreSQL database: it wasn't that easy. We needed to create a new instance, and we realized that this managed instance does not support IPv6 networks, so we had to create a specific network. And now I'm fighting against network peerings and accesses. The instance was created successfully on the new network, and I'm working on accessing it from the new cluster, and also from our management system, so that the databases are created automatically. Right now I get timeouts on both, so I need to find which security group, route, or peering is failing. And the last mile: I discovered that the virtual network peerings created by Terraform are incomplete. That was already the case, Hervé, when you created the network a few months ago; I don't know if you remember, you created a peering from private to public. Terraform and the Azure API report that everything is okay, but when you go to the Azure UI, it says the peering is incomplete: we are missing the symmetric peering. It looks like a recent change, less than one year old, in the way these are created. I tried some peerings manually, and when you create a peering manually from the UI, it now creates both directions, which wasn't the case before. So I'm going to have to work on that part, creating both symmetric peerings, but I need some documentation reading first. You did it right; it just looks like Azure changed the way it handles this. Right now I cannot access the public cluster from the private cluster, and I cannot access the new database from the new cluster; I guess public-to-database is failing for the same reason. So that should be the next priority task for me.
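Since an Azure virtual network peering is really two one-way resources, a fix has to create both directions explicitly. A minimal sketch with the Azure Python SDK follows; every resource name and the subscription ID are placeholders, and this is purely illustrative (the team's real fix belongs in their Terraform code):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION)

def peer(rg: str, vnet: str, remote_vnet_id: str, name: str) -> None:
    """Create one direction of a VNet peering."""
    client.virtual_network_peerings.begin_create_or_update(
        rg, vnet, name,
        {
            "remote_virtual_network": {"id": remote_vnet_id},
            "allow_virtual_network_access": True,
            "allow_forwarded_traffic": True,
        },
    ).result()

# A peering is only complete when BOTH directions exist.
private_id = client.virtual_networks.get("infra-rg", "private-vnet").id
public_id = client.virtual_networks.get("infra-rg", "public-vnet").id
peer("infra-rg", "private-vnet", public_id, "private-to-public")
peer("infra-rg", "public-vnet", private_id, "public-to-private")
```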
Next issue: peak usage cost on the core releases resource group. I plan to close that one; I will wait until the end of May and look at the billing status. We don't know why the cost of that system increased a lot at the beginning of April, and it has suddenly been decreasing for a week now. Multiple theories, multiple analyses on the issue, but the answer is: we don't know why. Is it because of the decreased DNS workload, thanks to the work we did on the Datadog agent on the clusters? Is it because we are migrating to publick8s? Is it something else? My proposal is to wait for the end of the month. In fact, it's not the end of the month that counts: the important part is to check the state of the billing, the usage, and the error rate once we have migrated to Kubernetes 1.25, because it fixes an issue in the Azure File CSI driver, which is the core suspect here, and once we have migrated everything off the overlapping network, because it could be network issues. So: wait for it, and for publick8s. The good news is that the cost decreased drastically, allowing us to go back under the 10K-per-month threshold on Azure. But we have to keep this issue open and check it weekly. Don't hesitate to ask any questions on that topic, on the issue or on the channel.

Migration of trusted.ci.jenkins.io. Same thing: we had a handover because of your long weekend last week. The handover went really well; the work you did was working. We have the virtual machines connected to Puppet, and the Puppet management is finished. There were bootstrap issues, but they weren't related to your work: these issues had existed in the Puppet profile for four years now, because we don't initialize a new controller every day with Puppet. It's fixed for the three VMs. I had to fix up some security groups: everything was locked down, specifically some elements, but now we have SSH access. The next steps are the data migration from AWS: there are two sets of data, the Jenkins home, which will be the quickest, and all the cached data on the agent for the update center. And second, there are still security groups to apply, and fine-tune, for the permanent agents. The reason is that I forgot about this one: everything is done on the controller side, but the permanent agent is on another subnet, so the fine-tuned security groups on the controller subnet are applied to the bounce virtual machine and the controller virtual machine, but not to the permanent agent. We just need to duplicate them, apply them to the correct subnet, and adapt, because the network flows might be a bit different. So everything is going really well; nice job. Stéphane, are you okay to take over the security groups, or to help me on that topic in the upcoming milestone? Would that be okay for you? Yes, of course. So I propose you take the lead and I'll be secondary; we invert the roles for this second task. Is that okay for you? Okay. Any questions? No. Okay.

Use DigitalOcean virtual machines as agents, instead of container agents, for ci.jenkins.io; we mentioned it earlier. Hervé, Stéphane, what's the status on this one? For my part, I did manage to build an image on DigitalOcean through Packer. It's not fully automated, but it's built by Packer, and I handed it over to Hervé to try the DigitalOcean plugin on a controller, to spawn VMs from that image. We can only have Intel Ubuntu 22.04; there is no ARM on DigitalOcean. Okay. I didn't have time to work on that yet. No problem. No Windows either, but at least normal Linux VMs. Hervé, I believe you have a short milestone: is it something you can work on, or should we defer that issue by two weeks?
I think it's better to... I won't have time; I don't think I will. Okay, no problem. Install and configure the Datadog plugin on ci.jenkins.io. Hervé, can you report on this one? I still... I didn't have time to work on the communication issue between the Datadog agent, which is running on the host, and the Jenkins controller, which is running inside a Docker container. Okay: ci.jenkins.io, controller container to host Datadog agent, noted. Do you need help, or do you want to work alone on this one? I'd like to work on it. Okay. You are still here tomorrow, but Thursday you are off, is that correct? No, I'm off Friday. Oh, even better, okay. So is it okay if we keep that one on the new milestone, and I will try to spend some time with you? Today might be complicated, but tomorrow, is that okay for you, up to Thursday, to work on it? Yes. Okay. Thanks, Hervé.
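On that host-agent-versus-container question, one common pattern (an assumption on my part, not a decision from the meeting) is to let the containerized controller reach the host's Datadog agent through the Docker bridge gateway, which requires `dogstatsd_non_local_traffic: true` in the agent's `datadog.yaml`. A tiny Python check, runnable from inside the container, with the gateway IP as a placeholder:

```python
import socket

# Placeholder: 172.17.0.1 is the default docker0 bridge gateway, i.e. the host
# as seen from a container on the default bridge network.
HOST_AGENT = ("172.17.0.1", 8125)  # DogStatsD listens on UDP port 8125

# Emit a test counter in the StatsD wire format. UDP is fire-and-forget, so
# "no exception" only proves the packet left the container; confirm reception
# in Datadog or in the agent's status output.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(b"jenkins.infra.connectivity_check:1|c", HOST_AGENT)
print("test metric sent to", HOST_AGENT)
```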
Clean up, import, and manage the Datadog monitors in Terraform. What's the status on this one? I've left it open. I've got two old monitors that are no longer applicable, because they were watching jobs that are now on trusted.ci. Okay. We would need to put in place a way to retrieve information from the trusted.ci jobs, via public files or something like that. Okay. If I'm not mistaken, for the scope of that issue, that means deleting those two monitors now: they are manually managed, they are not used, and they have no data, so they are good candidates for deletion. And then either reuse an existing issue (we might already have one from Daniel, but I'm not sure) or create a new one saying "we need to monitor trusted.ci". Does that sound good to you? Yes. And you're saying that, apart from these two old ones, all the other monitors are now managed as code thanks to the work you did. Is that correct? Yes. That's a good job, because everything used to be done manually, so that helps a lot, and it will allow us to create monitors whenever needed. As a demonstration: during the incident we had yesterday on infra.ci, in less than one hour we were able to fix the incident and put a monitor in place to catch any recurrence in the future. That shows this work is really useful. Thanks. Did I miss something on that topic, or do you want to add something else? No, that's good. Delete, then create an issue stating the problem to solve. Okay. So can I let you finish this one? Yes. You can go ahead with deleting and opening the issue, and then this one should be closable.

ci.jenkins.io new VM instance type: the virtual machine is created, and it's waiting for Puppet and for security groups. I was waiting to finish trusted.ci before applying the same approach to the new ci.jenkins.io instance. The main issue here is the VPN: the private VPN should allow access to the public network where the new instance is created, and with the publick8s work I discovered that the peerings are not working as expected, so I was stuck on that part. I wasn't able to SSH to the instance or test my security groups. That's why the publick8s migration is the most important: we need to fix the network and migrate the clusters before continuing. So, if it's okay, for this milestone I will destroy the virtual machine, because we are paying for it and we don't need it; I will comment it out so Terraform destroys the resources, and then defer by two weeks before finishing the rest. Is that okay for all of you? Destroy the temporary resources, then wait for the publick8s and trusted.ci tasks to be finished, because this ci.jenkins.io task will benefit from them. Defer by two weeks.

Artifact caching proxy is unreliable. I propose that we pair together on that one. What I did since last week: I manually tested, on ci.jenkins.io, a new inbound mode for virtual machine agents. The background for the artifact caching proxy reliability work is that on Azure it's still unreliable. On DigitalOcean we don't have the issue; on AWS, where the BOM builds run, occurrences have decreased, so we still have issues there but not a workload that justifies spending time on diagnosis. On Azure, however, we still have issues that are easily reproducible with the acceptance test harness. And is that the network problem? Could be: there is still the overlapping-network issue on Azure, which might or might not be the cause. So I tested a new kind of inbound agent on ci.jenkins.io, and it works very well. The goal is to switch the ephemeral Azure virtual machines from SSH to inbound, so we can migrate the agents to a new subnet, already created on the new network, which has no issue. The agent starts and connects back to ci.jenkins.io, so we don't need network access into the agent; that's what inbound means. Exactly. Next step: migrate the VM agents of ci.jenkins.io to the new network. So the next step, Stéphane, is you and I updating the init script you created in the agent definition: we need to adapt some of its content to start the inbound process, building on the work from Tim Jacomb. It's not cloud-init; those are details we'll work out in that area. You built it, and I've tested it with Tim's help, so the goal now is to share the knowledge with you and see how we split the work. The goal is to migrate these agents, and that needs multiple tiny iterations: first moving to inbound, then moving to the new network, and then checking that everything is still okay. Good for you? Cool.

So, for the next milestone, let's check the triage of new incoming issues, if that's okay for you. Can you read my screen? Is that okay? I see on your faces that you are mocking me, so that means it's readable. Or buy yourselves glasses. Add the pod garbage collector to the Jenkins agents Kubernetes clusters. That one... let me switch to the right session, sorry. Here we are. I don't think we will be able to work on this one now. The goal is to add a Kubernetes CronJob to our Helm chart, so that we have a cron job run by Kubernetes as a pod that takes care of deleting leftover agent pods. I propose we defer this one for later. Is that okay for everyone? I'm removing the triage label, since we have covered at least the "why", and no milestone for this one.
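For context, a minimal sketch of the cleanup such a CronJob could run, using the official Kubernetes Python client (the namespace, label selector, and age threshold are placeholders; the real implementation would live in the team's Helm chart):

```python
from datetime import datetime, timedelta, timezone

from kubernetes import client, config

# Placeholders: adjust to the real agent namespace and labels.
NAMESPACE = "jenkins-agents"
LABEL_SELECTOR = "jenkins/jenkins-agent=true"
MAX_AGE = timedelta(hours=4)  # agent pods should never legitimately live this long

config.load_incluster_config()  # the CronJob pod runs inside the cluster
v1 = client.CoreV1Api()

now = datetime.now(timezone.utc)
for pod in v1.list_namespaced_pod(NAMESPACE, label_selector=LABEL_SELECTOR).items:
    age = now - pod.metadata.creation_timestamp
    if age > MAX_AGE:
        print(f"deleting leftover agent pod {pod.metadata.name} (age {age})")
        v1.delete_namespaced_pod(pod.metadata.name, NAMESPACE)
```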
Then, what do we have? Create an ARM64 node pool on publick8s, to start using ARM64 pods. Stéphane: first, do we agree that the goal is to start trying, on the new publick8s cluster, to have workloads running on ARM? The example of Javadoc is a good one, because it only runs NGINX, so it should be a good candidate to work on the new node pool. Do you think you'll be able to work on that topic in the upcoming milestone? I can try to. Do you think we can start before the end of the migration? Yes, no impact, now that Javadoc has been migrated. The warning I gave you last week was that we had to wait for the service migrations from two weeks ago; but since then, the stateless services have been migrated, so now you can start working on the new cluster. Is that okay for you, or did I miss something? So I'm removing the triage label, unless someone has an objection; it looks like no one objects to the goal and the impact it could have. Can you assign me, please? Okay. And so, Stéphane, I'm adding it to the upcoming milestone. Is that okay? I will try. Thanks.

Back up helpdesk issues as Markdown. Hervé? Yes. This one is not that important at all. It's about backing up the helpdesk issues as Markdown files, in a folder in a repository, so that when someone runs a search across the repositories, they get more results. We currently discuss a lot of things about multiple services in helpdesk issues, but when I want to search for something, "LITUS" for example, I want to search it in every repository; right now I'm grepping all the previous meeting notes, all the previous discussions, all the logs that live in the repositories. Having the issues as Markdown in a repository would allow me, and maybe others, to grep for a name or for anything in the issues too. Okay, that makes sense. Is it a problem if the repository where the content is stored is something like jenkins-infra/archives, or a data repository, to avoid generating too many events? The thing is that "documentation" is already taken, and it's not documentation to me, it's raw data; the life cycle is different. If it's okay with you to use another repository, you can proceed, unless someone has an objection. So I'm removing the triage label, because the goal is clear. As you said, it's not urgent, but if you want to work on it, don't hesitate. Do you want me to put it on a milestone, or is it okay if we leave it as is? Leave it as is; the important thing for me is that the goal is understood. Okay.
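As a sketch of how such a backup could work, using the public GitHub REST API (the output folder and file naming are assumptions, not an agreed design, and an unauthenticated client will hit rate limits on a large issue tracker):

```python
import json
import pathlib
import urllib.request

REPO = "jenkins-infra/helpdesk"
OUT = pathlib.Path("helpdesk-issues")  # placeholder backup folder
OUT.mkdir(exist_ok=True)

page = 1
while True:
    url = f"https://api.github.com/repos/{REPO}/issues?state=all&per_page=100&page={page}"
    with urllib.request.urlopen(url) as resp:  # add an Authorization header for real use
        issues = json.load(resp)
    if not issues:
        break
    for issue in issues:
        if "pull_request" in issue:  # the issues API also returns pull requests
            continue
        # One Markdown file per issue: grep-able title and body.
        (OUT / f"{issue['number']}.md").write_text(
            f"# {issue['title']}\n\n{issue['body'] or ''}\n"
        )
    page += 1

print(f"backed up issues into {OUT}/")
```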
Then, what do we have? "Agent experience lacks the polish of GitHub Actions", opened by Basil. Basil is requesting to have the docker command and the Docker engine available inside the Kubernetes agent containers that run on ci.jenkins.io. Technologies such as Firecracker and Kata Containers allow the underlying machine, instead of running plain containers, to run really specific lightweight, fast-starting virtual machines, so you can run a Docker engine within: nested Docker without an issue. Technically, Firecracker comes from AWS, so on paper at least it shouldn't be an issue to add it to an EKS cluster; for DigitalOcean I don't know if we can change the container runtime, and on Azure we can use both. I also looked at Sysbox in the past. That could be a great idea, but it's not our priority, so we'll add a message for Basil: it's absolutely okay, but given that we have different Kubernetes providers, it could put a constraint on the technology and cloud providers we use. Maybe we could have a node pool on a single provider just for the people who need Docker; that could be an alternative to virtual machines, and it could help with cost control as well, since a single virtual machine for a single build always costs a bit more than a pod. Maybe we could use DigitalOcean for that kind of thing too, you know. Yes, also for the cost. But what Basil is pointing at, in terms of developer experience, is that developers need the agent to start really quickly, and that depends on how much time the virtual machine takes to start and connect to Jenkins. So I'm removing the triage label, because we have discussed this one. I need to add a message for Basil here, to say it's a nice idea, I have a few links to share, but right now we cannot work on it: that's an optimization, we have other priorities, and we don't have the material time to work on this one. Anyway, it's a good idea, and if you are interested, Firecracker and Kata Containers are really interesting pieces of technology for running Docker in Docker.

The peak of usage: we have covered this one. And a new one, "Forgot my username": okay, that one will automatically go on the milestone; I will remove the triage label afterwards. One last thing: do you have other issues from the infra team backlog that you want to work on in the upcoming milestone? There's one I want to bring up, because it might be helpful for the Google Summer of Code. We have the Ubuntu 22.04 upgrade campaign, which is back automatically because we are working on the updates of trusted.ci, and "support Linux containers when running on Windows virtual machines". That one I want to add to the new milestone, for the following reason. Not only have James Nord and Jesse Glick mentioned use cases on some plugins (for the pipeline plugins, for instance, they really want to test the case where a Jenkins agent runs on a Windows machine that hosts a Linux Docker engine, and spin up a Docker agent using the Docker plugin or a workflow plugin), but Bruno also brought to me a Google Summer of Code project whose goal is to automate the testing of the technical elements of some of the Docker tutorials in the Jenkins documentation. For instance: "Get started with Jenkins: download that file, install Docker, run docker compose up, go to localhost, do this, this, and this, and you have a working Jenkins instance." The technical elements provided as part of that documentation should be tested. The idea is to test them on our infrastructure, on ci.jenkins.io, at least once a week: a repository with a job that spins up these elements and checks, with a few requests, that the compose file and the images are working, so that at any moment, if there is an issue with the tutorial, we can identify it. It could also be a way to check that the latest LTS that was just released hasn't broken the tutorial, that no plugin is breaking the test, and so on. As part of that Google Summer of Code project, one of the optional elements would be the test scenario of a user on a Windows machine, Windows 10 or 11 with Docker Desktop and WSL, which is the most common and default setup now: you want the Linux tutorial to work even from a Windows host, and that implies a lot of extra elements. It's easier on macOS, even though macOS could be interesting to test too. So that case, a Windows host with a Linux Docker engine, is another case to add to that issue. I remember, Hervé, you said you were able to manually install Docker Desktop on Windows Server, even though it's not officially supported. That could be an easy path: since we have Chocolatey, maybe our Windows Packer templates could install Docker Desktop. The advantage is that we would benefit from the commands that let a developer switch between Windows containers and Linux containers. So the idea is to start working on that topic and see whether Docker Desktop can be installed through Chocolatey on our templates; and if it doesn't work, it's a drop-in replacement for what we already use: instead of installing Docker CE, you install Docker Desktop, that's all. So the goal is to try working on this one. Is that clear for everyone? Is there any question or objection on this one? Nope. Okay.
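A minimal sketch of what such a weekly tutorial check could look like (the compose file path and readiness URL are assumptions based on the tutorial shape described above, not the actual GSoC project):

```python
import subprocess
import time
import urllib.error
import urllib.request

COMPOSE_FILE = "docker-compose.yaml"  # placeholder: the file the tutorial ships
URL = "http://localhost:8080/login"   # Jenkins serves this page once it is up

subprocess.run(["docker", "compose", "-f", COMPOSE_FILE, "up", "-d"], check=True)
try:
    deadline = time.time() + 300  # give Jenkins five minutes to come up
    while True:
        try:
            with urllib.request.urlopen(URL) as resp:
                assert resp.status == 200
                print("tutorial stack is up and serving the login page")
                break
        except (urllib.error.URLError, ConnectionError):
            if time.time() > deadline:
                raise SystemExit("tutorial stack never became ready")
            time.sleep(5)
finally:
    # Always clean up, so a weekly run leaves nothing behind.
    subprocess.run(["docker", "compose", "-f", COMPOSE_FILE, "down", "-v"], check=True)
```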
And then I'm adding the Ubuntu 22.04 campaign to the milestone, because we are back at it. Any questions, anything you want to add? Nope. Okay, I'm stopping the screen sharing and stopping the recording. So, for people watching this recording: see you next week!