Hey! Oh, you're right, there we go. Yeah, I was saying that shirt is one of my favorite shirts. Yep, I can hear you. Yeah, I was saying that's one of my favorite shirts. Nice.

You're breaking up. Would it be better if I go to 5G? That sounds nice. Okay, good, I think I'm in. I'm in Maloja, Switzerland this week.

Oh, nice. I feel like I'm in a hotel where they might have filmed The Shining or something trippy like that. It's called the Maloja Palace, and it's a really old wooden hotel.

I'm good. I'm hunting for the email that Amy sent out yesterday that has all the details for today.

I thought that was a couple of days ago.

Maybe it was. You know, the days just start running together after a while. There was one from yesterday; it's probably on the mailing list.

That'd be entirely too sensible.

I know, look for the places where things are expected to be.

Yes, you'll never find them there. Let's try it. Ricardo and Daniel are with us. Hello, hello folks. I don't see many people yet; it's still trying to fill up. There it goes. Video got stuck this morning; probably something new, hooray. We'll wait a few more minutes, some folks keep joining us, so great. We were just appreciating the fact that you're very good about sending your mails in a place where we can find them. So that was very nice.

Yeah, and I actually did something yesterday that I should have done many, many years ago, and now all of the forwards just go to the right place. This particular address is now, forever and ever, going to go to the right place. Google Forms is not, in fact, your friend when you're copying and pasting things over; who knew. It was a Monday, but I fixed it.

Great. And I've actually put the slides here as well, so folks know where to go for that.

Well, that's kind of how we'll wrap up today, exactly. Good fun for everybody. I expect lots of wonderful nominations, please.

As do I. Do we have enough folks to get started? Seeing some TOC members; I'm just quickly running through. We don't have quite enough TOC members, but we also don't need quorum today. Okay, let's go ahead.

All right, welcome everyone. It's September 19th, and this is the TOC public meeting. Thanks for joining us. Today we're going to talk about sandbox projects, sandbox annual reviews specifically, and how some of the changes to the process that were proposed over the summer have gone. By yesterday, the TAGs should have completed as many of the annual reviews as they could, based on the Conducting Sandbox Annual Reviews document that was worked on by a small cohort of individuals in the ecosystem. The intent was to ensure that these didn't take more than an hour or two per project per contributor from a TAG doing the reviews. We do know that a lot were submitted; I think we ended up with over 30 at one point, and as you can see on the slide, they were not equally distributed among the TAGs, which is often reflective of our ecosystem. So today we wanted to understand a little more about how everybody felt the process went, and whether or not it's going to be sustainable moving forward.
There were a lot of great ideas from the cohort that put together the process. However, as with most things in engineering, we don't really know what works until we try it out. So I'm interested to hear from the TAG chairs, technical leads, and other participants in this process how they felt it went, what feedback they have, and any challenges. Were they able to complete them all? I suspect the answer to that one is no. And with that, I'll turn it over to you all.

I can probably go first for TAG Runtime. Hi, I'm Rajas. From a TAG Runtime perspective, we had 14 reviews to go over, and we did have a couple of challenges, mostly contributor bandwidth and a lack of volunteers from TAG Runtime to go through 14 reviews. As of today, out of the 14 reviews, we have around two which we could complete end to end, and by complete end to end I mean a review comment posted on the PR. We still have four pending projects where we've not started the reviews yet, mostly because we didn't have contributors. For some of them we reached out to TAG Environmental Sustainability, and some of the contributors from there were kind enough to help us out, but those are still in progress and yet to be reviewed. Following up on that number, we have another five projects where we're still waiting on follow-ups from the contributors who reviewed them, then a review from the TAG chairs or leads, then a review from the TOC liaison, and then getting back to the PR. And we have three other projects where the contributors reviewed the project, but we're still waiting for a final ack, remediation, or review from the TOC liaison. So that's where we are from a TAG Runtime perspective. All of what I've said is collated in this sheet.

Yeah, thanks for putting that together, Ricardo.

The other thing I wanted to call out was that we faced a couple of challenges reviewing projects whose annual reviews were written for 2022 but were being reviewed now, so the data wasn't consistent. For example, Karmada has applied for incubation, and some of the data put out in the 2022 annual review was not consistent with the incubation proposal as of now. Which makes sense, but it made it difficult to go back and forth between the current state and the state from 2022, and things like that. So this is where we are. I think we could have done better in terms of reaching out to other TAGs and distributing the load from TAG Runtime; that's what I feel. But those are some of my thoughts, based on the experience we've had.

That's fantastic. I want to dive a little more into the meat of that. In addition to the contributor bandwidth and availability issues that drove a lot of the completion challenges and coordination issues: do you think the TAG, either the chairs or the contributors, got a better understanding of the projects themselves and where they were at? Was it a good exposure and learning opportunity between TAG members and projects?
Absolutely. One thing I really appreciated was how meticulously the process document was laid out for conducting the review. That was really good and helped a lot, so thanks for doing that. And going through the reviews definitely helped get more contributors into TAG Runtime, in terms of getting them acquainted with all the projects. So yes, that was very fulfilling.

Hey, what about others? I'm curious to hear how the other TAGs' experience was.

Well, I can talk briefly about TAG App Delivery. We only had four, and I know three of them have progress but haven't been completed. I really like the TAG Runtime sheet, I just wanted to say that really fast. Ricardo, thanks for sharing that; just quickly looking at the review documents, it's awesome being able to see that, and it'd be nice to have a template going forward for each of those. Anyway, for me, I was looking at cdk8s, and there were a couple of process things as I was going through it. For example, like what you were just saying about Karmada, there were some things that stood out that do need remediation, or need the TOC to look at, so I put that in the issue in TAG App Delivery, but I wasn't sure if I also needed to put it on the issue in the TOC repo. So, just little process things that can be ironed out, and then going forward they'll go a lot faster. But I can see that the next time we meet to discuss the process, and what worked and what didn't, we'll have a lot of feedback to streamline it and make a checklist.

Mm-hmm. Did you feel, for the ones you started, that you were getting a good connection back with the projects, getting them more engaged with TAG App Delivery, a better understanding of how the project works, better exposure within the TAG?

Yes, and that's where I also wasn't sure, and we weren't sure, how engaged we should be with the project: whether to walk them through what the thought process is, I mean, if it's supposed to only take an hour or two. The engineers and the maintainers were really responsive, so that was really good to see. But how much engagement before the review is complete, and then inviting them to come participate in various ways? I guess those would be some outstanding questions.

Okay. Other TAGs, or even maintainers on the call that participated in this? I'd be curious to hear from them as well.

I can give a very quick summary from the storage side. It's not a large number, as you can see, but I think it has probably been positive, in that it's helped us engage with projects that we had, to a certain extent, lost touch with; it sort of forced the contact points, which was good. I think we're going to need to think about how we scale this if the numbers grow. We're going to need to, as much as possible, process-ize and template-ize this, and whatever else, to make it low touch. But it also got me thinking about some of the things we can take out of this. As we go through the first batch, I feel it would be a good idea to collect some stats on the common outcomes: what do we want to do, what was the recommendation?
For example, whether the projects are actually on a good track or not, and then use that as input into the selection process for sandbox up front. Say 80 or 90 percent of sandbox projects are not going to plan, or maybe they are; I have no idea what the numbers actually are, but we can make those calls then. Do we need to tighten the criteria earlier in the process, do we need to loosen them, or do we need to think of them as something different? I think we should use this as a guidance point, because the sandbox process is a bit of an experiment in itself, and we've grown the portfolio hugely in the last year, way beyond our initial expectations. We should use this as an opportunity to level set. Also, by level setting we can probably publish some guidance to projects to say: look, these are the things that 80% of projects get called out on in their annual reviews, so these are the things you should be working toward in your first year as a sandbox project, to improve the pass rate, if you wish.

This is great feedback, so I'm going to summarize a little of what I'm hearing. We still have bandwidth and availability issues within the TAGs, and with the volume of sandbox, and even incubating and graduated projects (we've had discussions within the community around annual reviews for those too, to make sure we have touch points, that they're healthy, and that they're achieving the things they want), it doesn't sound like this is sustainable at the scale we have. However, the engagement and the enrichment between the TAG members and the projects is potentially a positive here: being able to engage with them, understand more about where the projects are at, and proactively help them. So, identifying issues earlier on; this is like the whole concept of shift left in security, except we're talking about shifting left for project maturity. That makes a lot of sense to me, but we need to do this in an automated fashion. Leveraging the information and the learnings we have here about where projects are getting stuck is beneficial. How does that align with the guideposts? I can see this being a valuable input to furthering some of the discussions on the moving levels process, and some of the criteria changes and template changes that can happen there. What else came out of going through this exercise?
Does it sound like having a function to check in with projects, maybe not necessarily as an annual review mechanism, but something that's more automated and low friction, would work? Something that puts more of the interactivity on the projects that are already doing good things to self-manage and self-sustain, the way we allow self-governance to occur in those projects; and for those that don't have the same amount of activity, or may not be as responsive, having that be the forcing function to come back to the TAGs, have these discussions, and get them back on the right path?

So sorry to speak up again, but a few of these things could be automated. For example, where we give guidance to say, make your community meetings public, and have a Slack channel or group somewhere, and things like that: those are the sorts of things that can be templated. The projects can register those things in a simple place in GitHub somewhere, and then they can be checked automatically, so it's almost self-checking. For example, if they do have a Slack group, they can register it somewhere, and a bot can check the number of members and the number of messages, and gauge engagement that way. And also make sure that public meetings are registered somewhere, in a Google Doc or whatever, and track updates. It would be nice to template these things. And by the way, a lot of the things we're measuring are probably the same sorts of things the projects need to do to build a community in the first place, so this is kind of a little nudge: if they want to grow the number of members, they probably need to do some of this stuff anyway.

Yep, that makes sense.
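As a minimal sketch of that self-checking idea: suppose each project commits a metadata file registering its Slack channel and public meeting doc, and a bot reads it and reports basic engagement numbers via the Slack Web API. The metadata.yaml name, its fields, the token, and the 30-day window are all hypothetical placeholders, not an existing CNCF mechanism.

```python
# Sketch of the "self-checking" registration idea discussed above.
# metadata.yaml, its field names, and the window are hypothetical.
import time

import yaml
from slack_sdk import WebClient  # pip install slack_sdk pyyaml

client = WebClient(token="xoxb-...")  # bot token with channels:read/history scopes

def check_project(metadata_path: str, days: int = 30) -> dict:
    """Read a project's registered touch points and report basic activity."""
    with open(metadata_path) as f:
        meta = yaml.safe_load(f)
    # e.g. meta == {"slack_channel": "C0123ABCD", "meeting_doc": "https://..."}

    channel = meta["slack_channel"]
    info = client.conversations_info(channel=channel, include_num_members=True)
    members = info["channel"]["num_members"]

    # Count messages posted in the channel over the last `days` days.
    oldest = time.time() - days * 86400
    history = client.conversations_history(channel=channel,
                                           oldest=str(oldest), limit=200)

    return {
        "slack_members": members,
        "slack_messages_recent": len(history["messages"]),
        "public_meeting_registered": bool(meta.get("meeting_doc")),
    }
```

The appeal of this design is the point made above: everything the bot measures is something a project building a community would want to do anyway, so the check doubles as a nudge.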
TOC members or liaisons that participated in this, or had the opportunity to, from annual review completion: do you have any feedback?

I think just iterating on "let's automate this," because this is not sustainable at all. It also took a lot of time, more than what I expected, because it depended on the community member who was reviewing the PR. If someone is completely new to the community and they just wanted to help out, that was nice; you love to see their interest. But feedback from such community members required a lot of back and forth, which is fine, but it also takes time. So it also depends on the kind of volunteers we get. So yes, please, let's automate this.

Great. And if I could just add something, Emily: from the CNCF staff side, we actually had a great meeting on Friday to discuss how the larger LFX platform can potentially help with automation and data gathering. We've given them details on current pain points and the list of things in the GitHub issue from Krishna, and they're looking at ways to proactively gather some of this information, call things out, and make it available. We're recording this meeting and sharing it with the LFX team, so hopefully we'll have something here to make this easier for the TOC, the TAGs, and the projects themselves.

That's fantastic to hear. All right, so we've gone through this activity and gotten some feedback. The automation need is definitely there. We do still have projects submitting annual reviews, and the CLOMonitor bot is actually going out and pinging them. Leo had a question about how much less work this is for the TOC now, and it sounds like there's still a lot of work for the liaisons and the TAG chairs. So I definitely agree that there's still a lot of work that needs to happen here, and still a lot of coordination. We may not have reduced the amount of effort for all individuals involved; we might have just distributed it a little more. And given that we're already strapped for contributor resources and time, there might be something we need to do here. Ricardo, I'll let you speak.

I was going to say something about automation. There are several things that could be automated, but obviously it takes a lot of work. There's the aspect of pinging the reviewers. Maybe the suggestion of having something on Slack: TAG Runtime has a Slack channel that we created for the reviews, and we could have a bot that pings the assignees, a reminder to complete the review or to show some progress. Additionally, there could be some GitHub way to assign the review to a person and have a bot to remind them as well. The other aspect is the request for information from the projects. I'm not really sure what can be done there, and some folks might have more suggestions, but it would be nice to have an automated way to request information from a project when someone is reviewing it. So those are some of the aspects of automation I can think of, but I think we can distill all the different ones in a follow-up. I was just thinking maybe a sort of post-mortem type of exercise after all of these reviews are done.

Okay.
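A rough sketch of that reminder-bot idea, assuming review assignments live as labeled GitHub issues and the TAG has a Slack incoming webhook. The repo name, the annual-review label, the webhook URL, and the 14-day window below are placeholder choices, not the actual setup.

```python
# Sketch of a reminder bot for stale review assignments; the repo, label,
# webhook URL, and staleness window are hypothetical.
from datetime import datetime, timedelta, timezone

import requests
from github import Github  # pip install PyGithub requests

gh = Github("ghp_...")  # token with read access to the review repo
repo = gh.get_repo("example-org/reviews")  # wherever review issues are tracked
STALE_AFTER = timedelta(days=14)
WEBHOOK = "https://hooks.slack.com/services/..."  # the TAG's review channel

now = datetime.now(timezone.utc)  # PyGithub 2.x returns timezone-aware datetimes
for issue in repo.get_issues(state="open", labels=["annual-review"]):
    if not issue.assignees:
        continue  # unassigned items need triage, not a nudge
    if now - issue.updated_at < STALE_AFTER:
        continue  # recently active, leave it alone
    names = ", ".join(a.login for a in issue.assignees)
    requests.post(WEBHOOK, json={
        "text": f"Reminder: '{issue.title}' ({issue.html_url}) is assigned to "
                f"{names} and has had no update in {STALE_AFTER.days}+ days."
    })
```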
What about the content in the instructions? We got feedback that the instructions provided were pretty clear for the most part, though there were still some areas of confusion. But the overall content, reviewing the project scope and goals, community development, project governance, long-term planning, and collaboration (how is the project integrating with other projects in the ecosystem): do we feel that content from the Conducting Sandbox Annual Reviews document was worthwhile in engaging with these projects and ascertaining where they're at, or where they may be getting stuck? Was there something missing? Go ahead.

It did feel like the content in the doc was very helpful in conducting the review. My question is whether the template for the annual review will reflect the instructions in that document. If that's something we'd be doing, that would be great; it would make it easier for us to automate as well.

Yep, completely agree. We didn't have the time to get to that, with annual reviews already being submitted, but definitely. Karina, you're muted still.

Sorry, I was just looking for which section it is. I thought the instructions were good. Where I was having a hang-up is when it wasn't a quick checkbox of yes, they're doing great, everything's wonderful, the TAG recommends, the review is complete. I think it's section five; I'm looking, maybe I'll find it later. But if you have problems, or if it looks like the project is having issues that need to be remediated, what is the process there? Is it more: go ahead and submit some issues to the project, hey, look at these things? Or is it purely: hey, TOC liaisons, this is what we saw, can you help and do something about it? What are the boundaries, and how much interaction with the project when it comes to that? Those are my questions.

Yeah. Based on how we had it written, and I believe section five is the right one from the document, the TOC liaisons would be primarily responsible for engaging with the project on some of those issues, but I don't think we ever fully explored what those are. I can see some issues being just guidance from the TAG to the project if they're domain-specific, but I can also see other ones that are more around governance, where it's not necessarily TAG Runtime's responsibility to take them on, but maybe to direct them back to TAG Contributor Strategy; their TOC liaison may be a mechanism to do that, or the TAG chairs. So it sounds like we need some resolution for when problems are uncovered with projects, or even just challenges; they don't all have to be problems, maybe just indicators of problems to come. But I definitely agree with simplifying the easier path: if a project looks like they're on the right track, the feedback has been positive, we've not seen anything untoward or questionable in their practices, and they seem to be doing fairly well, that should be a much simpler path. But as we start uncovering some of this stuff, whether that's lack of engagement, unclear indicators, missing documentation or governance, or lack of activity on the project, that should warrant a more in-depth engagement, either by the TAG, by a TOC member, or even just a general community member looking to help out. So how do others feel about moving this more toward a whole mechanism like that? Alex came off mute.

I was just wondering what the grace periods are. For example, can sandbox projects still be embryonic after a year, and is that bad? Or maybe after two or three years it's really bad and needs remediation? You know what I mean: where is the line?

Yeah, that's actually not something we have, and TOC members, I'd be curious to hear your thoughts on this. We don't define time frames for how long you can or can't stay at a particular maturity level of the foundation. Each project matures at a different rate, and their concept of maturity is going to vary. You're not going to have every project look like Kubernetes, or SPIFFE, or Falco, or any of the other projects within the ecosystem; they're all kind of unique. There are some characteristics that are similar, and we can probably pattern them out, but what works for one project won't necessarily work for another.
I mean, we've had projects apply to sandbox and then say: you should really be looking at incubation in three months; you're not quite there yet, but you're going to be real soon. And then we have other ones that are just so new, and they don't have a lot of community; they're still exploring their use cases, but they have a good concept for experimentation. Those ones might be there for a couple of years, and that's okay too.

Right, but what I'm trying to say, in an indirect way, is: do we want to have a line where we say community growth not going in the right direction, and indicators not going in the right direction, for two annual reviews means you automatically go into archive, or something like that? To have an automatic filter?

That's a good question that I don't know we have an answer to.

I think a clear line there is maybe: after two years you see no activity on a project. No commits, no activity on GitHub. That's a clear line for archival. I'm just saying two years, but it could be a year and a half, or a year, depending on consensus. But in terms of a project being in sandbox, I think we already talked about it: projects can remain in sandbox indefinitely, and they can remain there to experiment as long as they have some sort of activity, looking for new things, trying new ideas. So it's hard to tell what the line is, but if there's some sort of activity, the project should still be in sandbox.

Yep. Automation of inactivity detection is definitely an area, and I believe there's work being done on that. Daniel, or Amy, I think this was one of the topics the TOC recently talked about: we do have indicators; it's a matter of how we automate discovery of an inactive project so that we can engage with them and understand whether they truly are inactive or something else is going on.

I don't know where I saw this, it might be something like a link Chris Aniszczyk shared, but there is a DevStats dashboard ordered by inactivity, and it's fairly obvious, honestly. If we apply that to sandbox, it would be easy to look at.

Yep, I agree. I think I've seen the exact same one; I just can't place it right now. So automation of discovery of inactive projects definitely needs to happen. That's something we can work on with the foundation, to ensure it's one less thing on our plates and we're not spending cycles hunting for those projects. If we've already got the data collected, we should be able to automate that discovery and engagement.
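The "no commits in two years" line suggested a moment ago is mechanically checkable from the GitHub API alone; here is a sketch, with a placeholder project list and threshold rather than the real sandbox roster.

```python
# Sketch of automated inactivity discovery: flag repos with no commits
# inside a threshold. The repo list and two-year window are placeholders.
from datetime import datetime, timedelta, timezone

import requests

SANDBOX_REPOS = ["example-org/project-a", "example-org/project-b"]  # hypothetical
THRESHOLD = timedelta(days=730)  # the "two years, no commits" line discussed above

def last_commit_date(repo: str) -> datetime:
    """Return the committer date of the most recent commit on the default branch."""
    r = requests.get(
        f"https://api.github.com/repos/{repo}/commits",
        params={"per_page": 1},
        headers={"Accept": "application/vnd.github+json"},
    )
    r.raise_for_status()
    iso = r.json()[0]["commit"]["committer"]["date"]  # e.g. "2023-01-15T10:30:00Z"
    return datetime.fromisoformat(iso.replace("Z", "+00:00"))

now = datetime.now(timezone.utc)
for repo in SANDBOX_REPOS:
    idle = now - last_commit_date(repo)
    if idle > THRESHOLD:
        print(f"{repo}: no commits in {idle.days} days; archival-review candidate")
```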
As far as annual reviews go, I want to reduce the amount of work we've created as a result of the annual review process, without compromising the level of engagement and exposure that the TAGs and the projects get to one another, because I think that is incredibly valuable, as is the opportunity for getting new contributors into these TAGs. It's a matter of how we keep that level of engagement, and how we ensure there's as much technical support within the process, through automation mechanisms, to reduce the level of effort. Because if a project is doing all the right things, having them take time out of their workflow to write an annual review isn't necessarily a great use of their time, particularly if they're already pursuing incubation. How do others feel about that?

Makes sense.

Okay, seeing some head nods. All right, so I'm going to throw out a wild and crazy idea: how about we disable the CLOMonitor bot from requesting annual reviews from projects for right now? It's generating an excessive amount of work, and we currently don't have the contributors to go through everything. I'd like us to finish up the annual reviews we currently have, except maybe for projects that are already applying to move levels, because the TOC is going to look at those regardless; we're going to know what's happening, and they've already applied, so they're probably in a decent state, maybe a few things off. And then from there we can figure out how the annual review process works moving forward. I like the idea of automation as much as possible, and I like the idea that we're engaging with the projects that actually need our time and attention, and allowing us to facilitate that. Seeing a lot of plus-ones.

Well, I think we need to revisit what value we think annual reviews actually provide. Given we've done them long enough, I feel like we can be objective about that. If there's something where we can establish health through automation, awesome; but otherwise I think it's not providing direct value today, because we're not really doing anything with that information. We're not archiving, we're not moving levels, we're not engaging. It's just a report. So I fully support pausing that and figuring out a new path forward, and I'd love to hear from all the TAG leadership about possible different options we should consider.

Karina, and then Alex.

Thank you. I will say that I did highlight areas; we haven't completely finished it, because there are areas of concern, but also areas of integration, where they can have touch points with the TAG and with other projects they could integrate with. So for next steps that I would see after this one: they have a lot of Helm work, so talk with the Helm community, and maybe that's within TAG App Delivery; and I know they could use more maintainers, or even contributions between both of them; and then a couple of other areas. So I saw going through the review as a good thing. Even if a lot of it gets automated, at least keep some touch points; I do see value in that project talking to other projects within the ecosystem. I just wanted to highlight that.

Completely agree. I'm wondering if there are elements of that that could also be automated. Alex?

Maybe this is a bit dramatic, but it kind of depends on what the TOC envisages the sandbox for. If we want to put time frames on things, and the idea for the sandbox is that projects eventually do go to incubation, because that's what the foundation is about, do we want it to be self-selecting by the maintainers?
A review would be done at the point where the maintainers say they're on an incubation track. If they don't say they're on an incubation track within some given period of time, two years, three years, whatever that number is, then the project automatically goes into an archival process. And if they do say they're on an incubation track, it goes for an incubation review, and that either resets the clock, because something's missing, or it actually goes to incubation.

I think there are a lot of good ideas in that. Just on the concept of an incubation review: one of the areas the TOC has encountered with projects that do apply, once they file the PR on our repo, is doing that initial cursory check of whether they're actually ready. Having the TAGs step in to do that would be beneficial. And to Ricardo's point, the annual review doesn't need to be an annual review; it can just be a review, and that moving-levels function is a great checkpoint to re-engage with the project and make sure they're on the right track. And there's still the opportunity for them to request assistance from TAGs even if they're not there yet: how do I get on the incubation track?

Yeah. What are other thoughts? This is excellent feedback, and I think we have enough information that we can start to take action and make this a little more meaningful. There's still the project moving levels task force, and I think that has very similar concepts to some of what's being discussed here, which is great; I want to capitalize on that back within the task force with the recommendations that come out of this. Okay, so: Leo.

Yeah, one thought about the handover process. In TAG Environmental Sustainability we don't have any reviews assigned to us, but we helped TAG Runtime with two reviews, and I think we can make this process a little slimmer. I'm not sure it's the best idea to hand over the entire review. Maybe it's easier to just bring up the topic in the TAG and say, you can contribute to TAG Runtime, so that TAG Runtime still owns the review and its TAG chairs don't lose the entire review. Just observing the two reviews we helped with: it put us a little bit in a middleman position, which is a little strange, because TAG Runtime was requesting help, and I was requesting help, so I think it's unnecessary for the supporting TAG to also be hovering over the entire process. I think we can reduce the process load a little bit on this side too.

Yep, I agree; that's a good call. It also promotes cross-TAG collaboration and partnerships, and gives contributors a little more exposure to things beyond the particular domain they might usually operate within. It's a great suggestion. Anything else?
Okay, so I heard: definitely turn off the CLOMonitor bot. That's item number one. What else do we do with all the reviews that are out there? I want to make sure we're clear on that part. For projects that are currently in the middle of an annual review by a TAG member: if you're getting responses from those projects, let's close those out and wrap them up, because I still think the projects are getting value from that. But anything that hasn't been started, I don't think we should take on right now. For the annual reviews that are currently outstanding and submitted on the TOC repo, TOC members will need to decide what to do with those, whether we just quickly do a cursory review of them and then accept them based on the discussions from this call. And then the expectation is that no projects moving forward would be submitting annual reviews until we figure out how we want to do a review process, and it sounds like we've got some great suggestions here as input to the moving levels task force, as well as engagements back with the TAGs for those projects, and integrations among the projects themselves; I don't want to forget about that one. Did I summarize that well for everyone? Did I miss anything, Alex?

No, I was just going to say yes.

Yep, and reducing load for TOC liaisons. Yes. Okay. Amy, did you get everything you needed? Daniel, did you get everything? Passing to you.

Yeah, I think so; those are the key ones. So for the ones that haven't started: how many of those are open, do we know?

Use more words; I'm not sure I got the question.

Oh, sorry. So there's closing out and wrapping up the in-flight annual reviews, and then there are the ones that haven't started. How many of those are there, and do you just want to close those out?

Off the top of my head, there are about six or so that have come in through July and August that haven't been touched at all. I think we should probably add those back in and at least do a cursory review on the TOC side.

Yep. Okay, Emily does not disagree with me; anyone else, feel free to disagree. I do want to make sure that we're actually recognizing the projects' work, but I also don't want to give them more work. All right, awesome. I really appreciate the TAG leadership and the TAG contributors focusing on this, because re-evaluating these annual reviews is something that I personally feel has been long overdue. We had indicators of the value they provide, but maybe that wasn't aligned with the outcome and how we were actually using them. So this is excellent information for us to take into consideration moving forward, particularly as we have access to so much information about what our projects are doing and how they're doing it. And if we have the opportunity to automate that as much as possible, we should be doing so. This has been fabulous, and I really appreciate everyone's time.

One question on the current annual reviews: the TAGs should just continue to finish those up, right? That's the goal?
If you've already got a contributor assigned to an annual review, or an annual review that they have already started, let's finish those up. If there is an annual review that has not yet been started, the TOC will take those on.

Okay. And I would imagine the same would apply to the reviews waiting for a TOC liaison to approve, right?

Yeah, the TOC liaison will go in and do those approvals.

Sounds good. Do we have a timeline for when we're targeting to complete everything? Amy, how does our schedule look for the TOC?

It's kind of blocked up, honestly, into the end of the year.

Okay, that works for me. It's an annual review; let's close them out by the end of the year. Yep. Ricardo?

I'd just say people will get more excited after KubeCon.

Well, hopefully we'll have something more to announce about it at KubeCon, so the sooner we get these closed out, the more we can focus on making these changes effective. Awesome. All right, we've got 14 minutes left. Any other questions?

Okay: community award nominations are open. If you're subscribed to the TOC mailing list, great, you've already got this message. We do have a new award, though, that I want to call out as particularly of interest to this group of folks: the Taggy. This award is designed to recognize a TAG contributor who has gone above and beyond, with broad reach and significant impact in growing the TAG ecosystem. So think about who in your circle, within your TAG or even in other TAGs, has been super beneficial, where you've seen that impact. Nomination links are in the slide deck as well as on the mailing list, and if you're not on the mailing list, I do recommend subscribing.

Is the Taggy a single person, or is it like Chop Wood Carry Water?

This year we're doing one, because I'm trying not to overload things. If we get a phenomenal number of nominations, we can consider expanding it, but for this year we're going to do one.

Sounds good. All right, thanks so much everyone, and we'll see you next time. Thanks, bye. Bye.