And here we are. Hello everyone, welcome to the weekly Jenkins infrastructure meeting. Today we are six, yay, more people on the infra team, I'm so happy. We have Mark Waite, Basil Crow, Bruno Verachten, Stéphane Merle, Hervé Le Meur, and myself, Damien Duportal. Let's get started with the announcements first. The weekly has been released as usual. There is still the Docker image to be published, we'll talk about that later; it should be done in a few hours. Most of the checklist is complete. I see, Mark, that you're adding a note. What about the LTS baseline? Can I let you explain? Sure, yeah. Tim Jacomb has raised the question on the Jenkins developer list: the LTS baseline has not yet been selected for what would usually be the June LTS. It's being discussed on the developer list. I had proposed about a week ago, or maybe it was two weeks ago now, that we intentionally slip the selection of the LTS baseline by two or four weeks to allow us more time to resolve regressions. A number of regressions have been resolved, a very nice set, but there are still a few that I feel like, hey, we should really consider fixing these before we choose the weekly baseline. But Tim is the release officer, and that really is his decision to make. We defer to him; we try to persuade, et cetera, but as the release officer he gets to make that decision. So right now we're one week off cycle from the LTS baseline: it would have been selected last week per the usual calendar. Now the question is, what do we do next? Do we choose 2.345, which was delivered today; 2.344, which was delivered last week; or wait one more week and try to get a few more fixes in? That was all I had, Damien. Thanks, that's clear. Any questions? One, two, okay. I don't have any other announcements. Do you have some, folks? Nope. Okay, so let's get started.
First of all, let's cover, as usual, the tasks that were closed or done during the past milestone, during the last week. We had a bunch of minor tasks that we won't cover in detail, the usual operational work: people wanting to opt in or out of plugin maintenance, DNS updates on some records. Just a few words: you have the direct link to the milestone with the closed issues if you want to check the details; I'm going to just cover some major elements. We were able to fix issues in the Packer image generation. As a reminder, this is the process used to generate the templates of all the virtual machines we use as agents on ci.jenkins.io. It's a centralized repository that, for the Ubuntu and Windows lines, tries to provide the same templates whether we are using Azure or AWS machines, and it might also build Docker containers in the future, to be sure we have the same set of tooling whatever kind of agent we have. Right now it covers virtual machines. That process was failing due to a GPG issue that has been fixed; that was a minor but blocking inconvenience. That allowed us to deliver new template versions: two minor versions were deployed in the past two days. So now we have the latest Git and Git LFS patches. The Git LFS patch was very important for the Windows image because it was fixing a CVE. And the latest JDK 17 from Adoptium has been deployed on these images earlier today. So if there is an issue, don't hesitate, as usual, to open a helpdesk issue; we can roll back quite quickly on this one. Is there any question, remark, or anything unclear on that topic? Okay. A word of thanks, everyone — Stéphane, for contributing on the VPN tasks. As usual, every six months we have a regular task to update the CRL, the certificate revocation list, of our infrastructure VPN. That was a good opportunity for knowledge sharing and improvements: the goal was to let two members of the team who are not Olivier, Mark, or I, who usually do that task, handle it.
So they did it in complete autonomy. Thanks for taking care of that and for the improvements to the documentation. It was also a good opportunity to take care of Wadeck's renewal; I mean, it's not as if Wadeck had no need to access the VPN. He confirmed that he was able to go forward. So thanks, folks. A word on issues.jenkins.io, which has been upgraded, without further notice, to the latest LTS. It sounds like we had a misunderstanding with the Linux Foundation folks. I've written it on the issue, but I'm saying it out loud for everyone here: when they propose a date and time, it's PST, West Coast. Good to know, because by the time we realized which time zone it was, their team had already proceeded, and the person in charge of communication was off the day we should have delayed. Better to know next time. For once it's not Europe-centered, so it's not that bad. So they updated our Jira instance, issues.jenkins.io, to the latest LTS. There were about ten minutes of unavailability, so sorry to our users, because we weren't able to let them know. I hope no one lost a comment; if so, I'm really sorry. That's something that happens yearly, the LTS update, and it was triggered by a check inside the instance. So far so good based on the team's analysis: the Linux Foundation team was able to finish the update and the UI administration, and I haven't seen any user complaint. So either no one can post to Jira, or it's working as expected, or we haven't seen the problem yet. Questions so far on these two topics? None for me. The DigitalOcean cluster has been recreated and is now used by jenkins.io. Thanks a lot, Stéphane, for the support in that area. It was an opportunity to go to Kubernetes 1.21, starting the upgrade campaign. We applied what has been described in the associated issue: the credentials have been rotated everywhere as expected, that has been documented, and we have restricted access to the area.
So if we have another, let's say, suspicious activity, it will be way easier to track next time. On the pipeline library, we also worked on some improvements. Thanks, Basil, for starting a big cleanup of that Groovy code; that's a good learning opportunity too. We spent some time with Stéphane last week and today on the Groovy string interpolation fixes. I won't go into details, but we are available to help; just think about that element in the future. I learned a lot from your pull requests as well, Basil, so thanks again. Also thanks to Daniel Beck, who helped us track an issue with tags of Docker images not being built. We were creating normal, lightweight Git tags, and a lightweight tag has, by default, the same timestamp as the commit it points to. So we had some versions on hold in the repository where we weren't upgrading the principal branch. Now, thanks to Daniel's analysis and Hervé's help implementing that part, we are creating annotated tags that carry a timestamp of the day the tag is created, leading to more deterministic build behavior. Finally, the infra reports generation, which is used by the Repository Permission Updater — the regular job that takes care of updating authorizations, accounts, and permissions for plugin developers on GitHub repositories and Artifactory — has been fully migrated to infra.ci. It was running on trusted.ci before, creating a lot of chaos there and costing a lot of money. Now it's running on one-CPU, one-gigabyte pods that cost almost nothing, and so far so good. It took some time to compare the results, with the help of some contributors. If anything is wrong related to your permissions as a plugin developer or contributor, please reach out to the infra team on the helpdesk; we might have broken something. So far the reports have been exactly the same during the past five days. The rest of the issues are, let's say, minor, so I propose that we move on to the work in progress.
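The lightweight-versus-annotated-tag difference behind that fix can be shown in a few commands. This is a minimal sketch in a throwaway repository, not our actual release tooling; names and messages are illustrative:

```shell
#!/bin/sh
# Minimal sketch: lightweight tags are just refs to a commit and reuse its
# timestamp, while annotated tags are full objects with their own tagger
# date, which makes downstream "is this tag new?" checks deterministic.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "release commit"

# Lightweight tag: no date of its own.
git tag lightweight-1.0

# Annotated tag: a real tag object carrying its own creation timestamp.
git -c user.name=demo -c user.email=demo@example.com \
  tag -a annotated-1.0 -m "weekly 1.0"

# Lightweight tags resolve to a "commit" object, annotated ones to a "tag" object.
git cat-file -t lightweight-1.0   # prints: commit
git cat-file -t annotated-1.0     # prints: tag
```

The object type is the quickest way to tell the two apart when debugging a publication job.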
The first one is a big one: we had an outage on ci.jenkins.io last week. Initially I had invited Basil to do the post-mortem today; I'm really sorry, because I messed up my agenda during the weekend. My proposal is that we do a separate session all together, publicly hosted, to focus on the post-mortem. Because we are not mature enough as a team on the post-mortem exercise, we could totally improve and learn from Basil. And there is a set of technical elements, short term, medium term, and long term, that should benefit the whole Jenkins core, or at least some plugins, so it's interesting to discuss and share that knowledge all together. That will be the goal of that session. What happened, briefly, is that we had a wave of builds, a big bunch, more than 1,500 builds waiting in the queue on ci.jenkins.io. The root cause was that we reached the Docker rate limit again. Each of these builds was trying to start a container, but while trying to pull the associated image from Docker Hub, we were rate limited by the API because we had reached our authenticated account limit, which works on a six-hour window that resets the count. We are in discussion with Docker; they are okay to let all of our organizations and accounts be part of their open source program. That would allow us more than 5,000 pulls per hour instead of 200 per six hours today. That would be a great improvement and should prevent this from happening again. There are other solutions, of course: using a local proxy, using another registry. But right now that was the root cause. Then ci.jenkins.io was completely lagging. The builds were taking over, but it seems the UI and the logs did not reflect the reality of what was happening: builds were taking place while the UI said the agents were stopped, paused, or suspended. And absolutely no error logs, no warning logs, no messages around this behavior.
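For reference, Docker Hub reports the authenticated pull quota in response headers on manifest requests. A live check needs a token from auth.docker.io and a HEAD request to registry-1.docker.io, which is omitted here; this sketch only shows parsing a sample header value, so no network access is assumed:

```shell
#!/bin/sh
# Sketch: parse the rate-limit header value Docker Hub attaches to manifest
# responses. Header value format: "<remaining-pulls>;w=<window-in-seconds>".
parse_remaining() { printf '%s' "$1" | cut -d';' -f1; }
parse_window()    { printf '%s' "$1" | sed 's/.*;w=//'; }

sample='175;w=21600'
echo "remaining pulls: $(parse_remaining "$sample")"   # prints: remaining pulls: 175
echo "window seconds: $(parse_window "$sample")"       # prints: window seconds: 21600
```

A 21600-second window is the six-hour reset cycle mentioned above; graphing the remaining count over time is one way to see a build wave approaching the limit.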
That was really weird to understand. I think the team has reached its maximum knowledge of Jenkins; that's why we need, let's say, professional experts like Basil to step in there to help us, because we need that help to fully understand what is going on, to be able to decide what to do in the short term if it happens again in the next days, and what to do in the long term to fix that behavior. Because if we have that behavior, other users of Jenkins will surely have it too. We are not specific; we just have a big Jenkins instance, and not even the biggest. So let's eat our own dog food and go forward. I really want to thank Stéphane and Basil for the help you provided; that helped me a lot. Many, many thanks; it was really nice to have you on board. Damien, one of my concerns, and you didn't mention it, so I'm wondering if maybe I'm thinking of a different area: we've had an ongoing issue for the last 18 or 24 months of cloud agents that disconnect at random. Did that have any impact on this, as far as you could tell? The random disconnects of agents at unpredictable times — is that unrelated to this? I cannot say; I'm not sure, but we haven't seen those messages in the logs of the impacted builds. I see, okay, thank you. The logs were about agents not being allocated. Okay, all right. So these were agent allocation failures, not job failures due to disconnects of the agents. Okay, yeah. To be more precise, I think both issues do exist, but they aren't caused by each other; they are both real but separate issues. Got it, okay, thank you, thanks for the clarity. Excellent. Better to clarify; my English might not always be the best, so don't hesitate to help. That's all for today on that area; it's still a work in progress.
That issue is still open because we need that post-mortem to happen, and we need to decide on tasks to improve the situation. Is that okay for everyone? Does everyone agree? Just raise your hands; I see you on video if you are okay. I already wrote a quick document, and I think there are five different issues that I saw during the outage. I have a draft of what I think the short-, medium-, and long-term action items are for each of those five issues, and I'd be happy to discuss that at the meeting you're proposing. I could share it ahead of time if you'd like as well, however you prefer; if you want to polish your draft before sharing it publicly, no problem. Ideally, if we could share the draft before the meeting, everyone can read it and come to the meeting with their brain already wired in. Thanks a lot, many thanks for that. Where should I post it? If you want it to be public and you don't mind, I recommend the associated issue; let me add a link there. That's the issue you opened on the helpdesk. Sure, okay. Sounds good? Yes, thank you. Anything else on that topic? So I'm moving on to the other topics that are work in progress, which we should keep going for next week or decide to block in favor of other tasks. There is an ongoing issue with the Docker image tags — lts and the lts-something variants — that are not published for platforms other than Intel. It's almost fixed. We found the root cause: a chaotic branch was being built in production on trusted.ci that should not have been, built with an old Docker version that was overwriting the modern manifest versions with old ones. So it was indirectly making the ARM images unavailable. The team took care of the root cause.
And we have a set of shell-script madness to make sure that now, for each version over the last few weeks, we check that the tags are published at the platform level and at every tag level. So that should be almost there. It's blocking the availability of today's weekly Docker image, of course, because that publication script must be fixed and fully working, so it continues into the next milestone. If anyone is interested in contributing and helping me, don't hesitate to manifest yourself on the associated issue that you can find on the milestone. So we've got an LTS that I believe is scheduled for next week. Damien, are you comfortable that we'll be okay delivering that — that we'll be ready for, for instance, the next weekly on Tuesday, May 3rd? Yes. Okay, great. We have the Kubernetes upgrade campaign to 1.21, managed by Hervé and Stéphane, I understand. We decided two weeks ago that Hervé is leading and Stéphane is shadowing him, as a matter of knowledge transfer. Initially it should have been all the clusters without me; due to last week's chaos, sorry, I stole your task, folks, but now there is the whole AKS upgrade that should happen tomorrow morning. Is that correct, Hervé? Yes. Cool, so I'll let you folks manage that part, thanks a lot. We keep that for the next milestone, of course. Stéphane and I are working on the Docker Hub credentials. We have multiple areas there: documenting the different accounts, securing them, ensuring the open source program is applied for the rate limiting, and fixing the pipeline library so that each controller of the infrastructure uses one credential for pull and another credential for push, to separate concerns and rights. That's an ongoing task that should continue this week. Did I forget anything, Stéphane? You can specify that we may use read-only credentials for pull, to avoid any stolen credential pushing a wrong image. Yeah, good catch.
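The per-tag platform check described above can be sketched like this. A real version would fetch the manifest list with "docker manifest inspect" against the registry; this sketch greps a trimmed, illustrative stand-in for that JSON output instead:

```shell
#!/bin/sh
# Sketch: verify that a multi-arch manifest list covers every platform we
# publish for. The JSON is a trimmed sample of "docker manifest inspect"
# output, not a real registry response.
manifest='{"manifests":[
  {"platform":{"architecture":"amd64","os":"linux"}},
  {"platform":{"architecture":"arm64","os":"linux"}}
]}'

check_platform() {
  if printf '%s' "$manifest" | grep -q "\"architecture\":\"$1\""; then
    echo "OK: $1"
  else
    echo "MISSING: $1"
  fi
}

check_platform amd64   # prints: OK: amd64
check_platform arm64   # prints: OK: arm64
check_platform s390x   # prints: MISSING: s390x
```

Running such a check per tag, per platform, over the last few weekly versions is exactly the loop the publication script needs.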
That's a recent Docker Hub feature, released two weeks ago: now free accounts are able to have read-only tokens. That feature was only for enterprise or paid accounts last month, and now we can use it for free, even without the open source program. So if you have a free Docker Hub account, you can do that on your own; that's good news. There is read-write and read-write-delete; they have four or five sets of permissions. That's really useful. Good catch, Stéphane, thanks. So we continue working on that. Stéphane, on your own, you have to continue working on migrating the Jenkins application from its current AWS VM to Kubernetes. Sorry for the delay. No problem; it's a non-priority task, but a good knowledge transfer, so we delay it to this milestone. You were ill and off most of the past two weeks, so that makes sense. The JDK 17 campaign: it has been added to the tools for ci.jenkins.io and the virtual machines. Now I have two points. First, the container agents, which are still partly built by ourselves and partly by the community. That's related to another task, Hervé, about building our own Windows containers inside infra.ci. Do you think it's possible to start working on that one in the next milestone? Yes? Yes. So the JDK 17 campaign runs parallel to that task in the same area. I don't know if someone volunteers to track all the Docker images we have running; the Linux ones are easy, and the Windows ones might be a bit slower because they involve external requests. But we have to track them. Is anyone volunteering for that task, or should I take it? Okay, I'll take it then. And the last one is to add an email alias for press; we'll put that one on an upcoming milestone. Sounds good, everyone? Yes. Okay. If you don't mind, I had two small points to add to this agenda; I want to give an update on two things I was doing.
One is memory analysis of core and core PR builds. I've recently checked in some changes to add observability to the JVM memory options, and I'm planning on doing some more analysis to understand more clearly how much memory we're using in each build, across all of our JVMs. My suspicion is that there may be some areas where we can optimize our memory usage; for example, depending on how much memory we're using, we might be able to turn it down with some explicit configuration. The other thing I'm trying to determine, once we understand how much memory we're using, is: are we giving ourselves containers that are too small or too big for our actual workload? I don't know the answer yet, because I haven't quantified every single JVM's heap size, so I don't know how much memory we're using. But once I do, I want to determine: this 8-gigabyte container or VM, is it too large? Is it too small? If it's too large, can we save money by shrinking it to a smaller size? If it's too small, could that be one of the reasons why these builds are failing? These are the questions I'm seeking to answer, and I'll probably have more of an update in the next week or so, once I'm able to do some more analysis. But I think now I finally have all the information I need to start digging into this. So that's a small update; I'll pause if you have any questions about it. Okay, I don't think there are any questions. So the other thing... Yeah, just one question: how are you measuring it? Are you using OpenTelemetry or your own tooling? Much more primitive than that. I'm just having the JVM print what it's decided to use, with a simple command-line flag. And then I have a piece of paper and I'm adding up all of the numbers. So it's really nothing advanced or sophisticated.
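That paper-and-pencil tally could also be scripted. A minimal sketch; the heap sizes below are made-up examples, not our real build configuration:

```shell
#!/bin/sh
# Sketch: total up per-JVM max-heap settings (as printed by the JVM or set
# via -Xmx) instead of tallying them on paper. Values are illustrative.
sum_heaps_mb() {
  # Accepts sizes like 512m or 2g; prints the total in megabytes.
  total=0
  for size in "$@"; do
    case "$size" in
      *g) total=$((total + ${size%g} * 1024)) ;;
      *m) total=$((total + ${size%m})) ;;
    esac
  done
  echo "$total"
}

sum_heaps_mb 512m 2g 1024m   # prints: 3584
```

Comparing that total against the container's memory request is the "too large or too small" question in one number.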
All I'm doing is looking at the heap size for each JVM and adding them all up, and in my experience that's pretty close to the actual usage. I'm not going to dig deeper until I know I need to; for now, a very high-level overview is sufficient for me. Just before you jump to the next point: I proposed to the whole team, given the help you did and the work you are doing, that we open access for you to two areas. We'd add you to the administrators of ci.jenkins.io, if that's okay for you, and to SSH access on the machine. I will open a helpdesk issue and mention you, because there might be some tasks on your side to be done, but I will describe them. That should allow you to work on the controller itself. And since you're working on the JVM metrics, I propose that we add you as well to the Datadog organization, so you can see the metrics that are sent to Datadog, especially the memory metrics; that should help you a lot, I guess. Oh yes, that would be great. I didn't even know we had those, but I'm very excited; I would be very interested in looking at those. That's much better than my primitive command line. No problem. Just one thing: if you need anything else that could help you with the analysis, please shoot and ask; we can give a lot of access if needed. We don't have an exhaustive list — that's an improvement for the documentation team — but if you feel completely locked out of the system, please ask us; we can open access, even temporary ones, if that's easier for you. That will help everyone. Thank you. Damien, before Basil continues, can I ask for a vote there? Hervé, Stéphane, Bruno, any objections from you to granting access to ci.jenkins.io? You have my agreement. Okay, great.
Thank you, I'll just record that then. Thanks, Mark. Great. So the other thing that I've been doing: you might have seen that I've been refactoring a lot of things, and I'm not just doing that for fun. There's actually a change I'm working on that has required some refactoring as a prerequisite, and that is enabling the use of the Maven wrapper, which is something that many people have requested in Jenkins core. Instead of running the Maven that's installed in the virtual machine or container, many developers would like to run the Maven wrapper instead, which downloads the version of Maven that's defined in the repository of the code under test. For example, that would have helped us in the recent past when we upgraded Maven and found a regression: we had to roll that back at the infrastructure level, but if we had been using the Maven wrapper, we would have been able to roll it back at the application level. And the reason that's important for new contributors is that it also applies to local development as well as CI builds. In order to update the pipeline shared library to support the Maven wrapper, I've been refactoring some of the interfaces in infra.groovy, which has a lot of methods that are used when doing checkouts and running Java and running Maven. I'm hoping to put out a pull request soon that changes some of the semantics of these functions to allow them to be used with the Maven wrapper instead. I still have a little more testing to do, and doing it in a way that preserves backward compatibility is a little tricky, so I'm still working on that change. But I hope to finish it in the next week or so, and that will hopefully make it easier for new contributors to start working on Jenkins without having to set up the right version of Maven ahead of time, even when it is difficult to obtain.
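For context, the Maven Wrapper pins the Maven version via the distributionUrl key in .mvn/wrapper/maven-wrapper.properties. This sketch extracts the pinned version from a sample file; the URL and version here are illustrative, not taken from the Jenkins repositories:

```shell
#!/bin/sh
# Sketch: read the Maven version pinned by the wrapper from a sample
# maven-wrapper.properties file.
set -e
props="$(mktemp)"
cat > "$props" <<'EOF'
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.8.5/apache-maven-3.8.5-bin.zip
EOF

pinned_maven_version() {
  # Pull the version number out of the distribution archive name.
  sed -n 's#.*/apache-maven-\([0-9.]*\)-bin\.zip#\1#p' "$1"
}

pinned_maven_version "$props"   # prints: 3.8.5
```

Bumping Maven then becomes a one-line change in the repository under test, rolled back at the application level rather than in the infrastructure images.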
We can also see if we can change our use of it to allow that modification; we are the main user of this pipeline library. Right, right, we have to be backward compatible. Absolutely, we can see what we can do around it. Yeah, I've been searching for usages, and if I think they're easy to update, then I'll simply file a pull request and update the usages if necessary. Now, this, Basil, would allow me, as a plugin maintainer, to also eventually use the Maven wrapper, so that not just Jenkins core but plugins would get the benefit. Yeah, exactly. There are two consumers of this runMaven function — well, not just two, but two primary consumers: there's buildPlugin.groovy, which is basically the Jenkinsfile for all plugins, and then there's the core Jenkinsfile. I think those are the two primary consumers; there are many other smaller consumers as well. But the idea behind my change is that it would be a generic change to this runMaven function, such that it would apply to all consumers, including plugins. So that's interesting, just for the knowledge transfer: we were targeting more and more to focus on only the Packer process to generate the tooling, and eventually using the Jenkins tool installers. The reasoning was to avoid downloading Maven on each build most of the time, but that goes against the case of "yes, but I don't have that exact version," so having that feature is really useful. We were also trying to ensure that the tooling used to build comes from a trusted source. I guess that's completely in the same area, because the pipeline library is something that is trusted: people committing code there need reviews, approvals, tests. It's not perfect, of course, there are always flaws, but it's clearly a trusted area. So that's interesting, because it's still compliant with that.
Let's say the security concern is to be sure that, since it's a public-facing system, anyone able to jump in and attempt a man-in-the-middle on the tooling downloads cannot propagate suspicious binaries there. I can look into it further and confirm that there isn't any concern, but what I would hope to be the case is that when the Maven wrapper downloads Maven, it compares the download against a checksum that is checked into the Git repository. That is what I have experienced in the past with the Gradle wrapper, and I would expect or hope that the same is true for the Maven one. So I can look into that further and confirm what kind of validation they do on the downloaded version of Maven. Okay. There is no need to add one at once, because we don't do that on the Packer images as of today; that's a medium-term objective we had, trying to generalize. The point is that as soon as we know the Maven version has to be changed, if there is a CVE or an issue, there are only two locations: the template for the agents, which should be centralized in the coming weeks, and the pipeline library. That's more than fine for me; that's enough, because we just need to know where the usages are, and there you already did the job. So that's really useful. Yeah, I mean, there could be a concern if this ends up being adopted broadly by plugin maintainers who decide to use the Maven wrapper but then decline to keep it up to date. If that happens, then it might be a concern for us, and we might want to discourage them from using it if we don't think they're responsible about updating it. But to be fair, that same concern applies to any dependency pulled in by a plugin, so it's not unique to the Maven wrapper; it's just maybe one example of it. Yes.
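The checksum validation Basil hopes the wrapper performs looks roughly like this. A minimal sketch with a stand-in file, not the wrapper's actual implementation:

```shell
#!/bin/sh
# Sketch: compare a downloaded artifact against a checksum committed
# alongside it in the repository. The file is a stand-in, not a real
# Maven distribution.
set -e
work="$(mktemp -d)"
printf 'pretend this is apache-maven-bin.zip\n' > "$work/maven.zip"

# The "committed" checksum, as it would live in the Git repository.
sha256sum "$work/maven.zip" | awk '{print $1}' > "$work/maven.zip.sha256"

verify_download() {
  actual="$(sha256sum "$1" | awk '{print $1}')"
  expected="$(cat "$1.sha256")"
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
  else
    echo "checksum MISMATCH"
  fi
}

verify_download "$work/maven.zip"   # prints: checksum OK
```

Because the expected checksum travels through the reviewed Git repository rather than the download channel, a man-in-the-middle on the download alone cannot substitute a tampered binary unnoticed.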
And that's a good point you're making; it's a fine balance, compared to the number of times we broke things for end users by wanting to be up to date. I mean, we have to find a balance, and that direction is nice if it fits the users. Many thanks for that. Is it okay if we add helpdesk issues tracking this information after the meeting and link them back, just for us to have an idea and be able to track it on the helpdesk? Is it okay for you, Basil? Yes, that's fine. Many thanks. And Damien, I had one item as well that I tucked into the upper section — are you okay if I bring up a topic? So, earlier, in the completed topics: we've now added a CNAME record to the DNS, crowdin.jenkins.io, to help us do a more effective job translating Jenkins components into various local languages — French, German, Italian, Japanese, et cetera. Alexander Brandes, known as NotMyFault on GitHub, is helping us go through this exercise. We'll do a Jenkins Online Meetup to show just how much better it is to use this tool to translate than the current process of: edit property files manually, suffer terribly, and after great and enormous pain finally get your change merged. This is a much, much better way of doing things. Yes. No need to show anything; I just wanted to be sure that the infra team is aware that we're doing this effort. It's not done yet; it's still very much in the prototype phase, right? We've had a few plugins that we've translated, and it's working very, very well, but we've got to get the authentication worked out correctly, and there are more changes that need to be done. It just looks very promising. There is an item — thanks, that's a good point, because I was alone that day. So yeah, we added the DNS record to make it better; there is that ongoing work, and it's really important to share it. And I know that there had been a related request from Alexander.
He was asking if we had some kind of SSO to centralize authentication, which we don't really have as of today, so the answer is no. Well, and I like Tim's answer on that: let's just use GitHub auth, like we do in other places. Let's delegate that problem to somebody else who has whole teams who, I'm sure, worry about making that thing work. As long as GitHub doesn't leak the OAuth tokens again. No, no — even they are imperfect; that was only a joke, no worry. So, to everybody's question: will it replace the translation assistance plugin, easily and with a great improvement in experience? Yeah. Now we want testers to tell us, because the number of languages I speak fluently is one, and it's already the native language of Jenkins, therefore it doesn't help. Whereas for those of you who natively speak French or other languages, it'll be a great help, and I think that's one of the heaviest parts of the test. Yeah, I noticed it on the dummy plugin, the "help us localize this page" pop-up coming in; I never saw it before, and I wondered whether it was the Crowdin integration. I don't know where that pop-up is from. You know, it's the old translation plugin, right? When you are in dark mode it's completely white, and there are several other issues with it; I'm not sure. Right. So there will certainly be plenty of work to make that more visible. But given the success of the Hacktoberfest 2021 French translation effort led by Angélique Jard and Duchess France, I think we've got good candidates to help us do a really good job of making the translation experience much better for Jenkins contributors. Thanks a lot, folks.
And I have the action item to schedule a Jenkins Online Meetup where we'll invite Alexander and others, and we'll have a conversation with developers and contributors about how this works and why it's so much better than using property files directly from your editor. That's it for me, thanks. Thanks a lot. Just a few new items to add to this milestone. We had a request from someone proposing to add a Jenkins mirror in Singapore — thanks for adding the issue. That's something that will be treated; if there is someone motivated, I will add it to the milestone. If you are interested, assign yourself to the issue and I will pair with you. And did we ever get any traction on the Alibaba mirror in China? We never had any answer; they never replied. Okay. Singapore is another very attractive one because it's close to India: it would reduce some of the load on the Tsinghua University servers, which are taking a terrible load from all the downloads in India. So Singapore is very interesting for sharing the load and helping the people in India who use Jenkins. Okay. But it's a nice thing to remind us about Alibaba; it had left my mind. We should ask them what the status of the mirror is. There has been a request about replacing the default display URL in the Jenkins checks and notifications for ci.jenkins.io, so that developers are not redirected to Blue Ocean but to the usual UI instead. It's not a request to remove Blue Ocean from ci.jenkins.io. I'm not sure if it's technically possible; I assume there is a GitHub checks setup to be done somewhere, so that's something we should investigate. There's a Java property that you could set, if I remember from reading the ticket, that would determine which UI you get by default. Oh, cool. Okay. So, anyone interested? Is it okay if we add it to the next milestone? Yes.
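If it is the Display URL API plugin that drives those links, the switch is likely a system property naming the provider class. This is an untested assumption sketched from memory of the ticket, not a verified configuration for ci.jenkins.io:

```shell
# Assumption: the Display URL API plugin honors the system property
# "jenkins.displayurl.provider"; pointing it at the classic provider class
# should make notifications link to the regular UI instead of Blue Ocean.
JAVA_OPTS="$JAVA_OPTS -Djenkins.displayurl.provider=org.jenkinsci.plugins.displayurlapi.ClassicDisplayURLProvider"
echo "$JAVA_OPTS"
```

If that property holds, the change really is a one-line JVM option on the controller, easy to test and to roll back.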
I mean, if it's only setting a Java property, that should be quite easy to test and apply. Yeah, it's been three or four weeks since that was asked for, and it makes sense given the direction Blue Ocean is taking. What else do we have? Then we have the build-our-own-Windows-images task; Hervé, you opted for that one. And just changing the Blue Ocean link. I think that's already a pretty packed milestone. If it's okay for everyone — are there other topics you want to add that we could have forgotten, or that you want to work on? One, two, three. Okay. I'm adding one for myself, the mirrors, because that one is causing some trouble. And there's one I started: I need to write the blog post. That will be hard for me, so I need you folks to review my terrible language, please; I need that. Is that all okay for everyone? Any other questions? Let me stop the recording. Okay.