Cool. All right, you're good to start. Okay. Welcome back to the second half of the first ever Helm Contributors Summit. Thank you very much for your patience during the first half as we worked through a couple of technical issues with the platform. I think the nice thing about being first is I could test everything before I got back on. But yeah, I think we're hopefully over the main hurdles with the bugs here. So I am going to kick off this afternoon session with a discussion of Helm Governance. Now, the objective for the second half of this day is to cover what core maintainership in Helm looks like and what project maintainership in Helm looks like. I happen to be very passionate about the Helm Governance model. It took a lot of work to get it where it is today, and we've made some good choices and some bad choices, but at this point we have a system that we feel has been serving us quite well. So I'm very excited to get to share that today. After me, we will turn the attention more toward the coding process of Helm, as Matt Fisher joins us to talk about the Helm improvement proposals, and Adam Reese joins us to talk about the PR reviewing process and what that workflow looks like. So I'm pretty excited about this. This will be a fun afternoon. So let's kick it off and talk about Helm Governance. When I opened things up earlier today, I talked about where we were as a project and gave a quick overview, but in this particular session, I want to return to some of that information and then go a little bit deeper. The purpose of this session is to provide information on the roles and responsibilities of Helm maintainers, including the project maintainers, remember, the ones who are working particularly on Helm or the GitHub Actions and so on. But we'll also talk about the security team, and we're going to talk about the org maintainers, those who are tasked with making some of the higher-order decisions for all of the Helm projects.
We'll also talk a little bit about why we belong to CNCF and what belonging to CNCF has provided for us in the past. But our goal here is to help people who aspire to become maintainers understand what it means to be a maintainer. So I'm very excited, and if this piques your interest and you decide that becoming a core maintainer is something that interests you, there will be some information at the end about how to take some next steps on that front. So again, this is the same slide I had at first, but I'm going to go into a little more detail. We talked about CNCF, we talked about the Helm organization and the Helm project and how those were different. We talked about other projects; Scott jumped in and did an overview of all the different projects, highlighting some of the ones that I won't talk about too much. You should have gotten a good high-level view at that point of what we were talking about. But I wanted to talk about each of these individual pieces of the greater organization of Helm. So Helm is a CNCF project. Helm started as a project by a small company named Deis. We donated it originally to the Kubernetes project. Kubernetes was a foundational project in CNCF, and Helm split out and became a full-fledged member of the Cloud Native Computing Foundation. Now, the Cloud Native Computing Foundation provides a lot of resources for Helm, and has been absolutely a bulwark for us as far as providing the kinds of things we need to grow a community, to grow high-quality software, and also to interact with all these other great projects under the CNCF umbrella. For example, they provide some of our technical support and our helpdesk, they host our DNS, and they host our mailing lists. Of course, many of you are familiar with the conferences, including this one, that fall under the CNCF umbrella.
Probably the biggest, I assume, is KubeCon, but there are many, many others, some of which, like the Helm Summit and the Helm Contributor Summit, are devoted to Helm. But there are many others that we participate in, from day-long events to big mega-events like KubeCon. And those are all organized and resourced by CNCF. Probably the most important thing, in my opinion, that we get as part of CNCF is that CNCF provides a legal status and a legal umbrella. There is somebody to assign the copyright to. There is some structure in place to make sure that when contributors contribute their code, all the ownership and legal stuff is lined up correctly, because none of us wants to be embroiled in lawsuits about who owns what particular piece of code, right? And CNCF has a very astute set of lawyers and resources on that side that take care of all of that for us. That is great, because I am not a lawyer, nor do I aspire to be one. I think many of us feel that same way. But also, CNCF provides a common code of conduct and a way of evenly and consistently enforcing it. That's very helpful when you're trying to build a community, because as we all know, a handful of bad actors can really destroy your ability to build anything big and meaningful. But that's kind of where CNCF stops. They don't necessarily, per se, work on the code. There are people who are part of CNCF who contribute. Michelle Noorali, who's on the TOC for CNCF, has of course contributed much, much code to Helm, but not as part of her role as a CNCF representative; she's done that as an independent contributor. So that's how CNCF functions for us. So we are part of CNCF. Underneath CNCF, we have this Helm organization, which is a sort of collective group of all the different Helm projects. When you think about it in GitHub land, github.com/helm has all of the Helm organization repositories under it, and github.com/helm/helm is the Helm project.
helm/community is the org project, and helm/helm-www is the documentation project. But anyway, you saw all of that in Scott's presentation. The org is that sort of umbrella that goes over all the different projects under github.com/helm. We work directly with CNCF on a lot of things. So there are some org maintainers who were nominated from the various Helm projects to serve in a kind of special role inside of the Helm org. I am one of those. Many of you have discussed things with Scott and with Matt Farina and many others who are also org maintainers. We are the ones who officially deal with CNCF. We have a meeting with CNCF once every couple of weeks in which we talk about making sure everything is moving smoothly and making sure we're prepared for the next KubeCon. There is a wealth of information about what's going on in the ecosystem around us, and the org maintainers typically participate in that kind of stuff. The org maintainers' other responsibilities include things like helping a new Helm project get started. So if somebody pitches a new idea and says, underneath the Helm org, can we add a new repository (the helm-2to3 plugin was one example), the org maintainers are the ones who are responsible for getting that all set up and making sure that the appropriate people have permissions, that the appropriate code of conduct stuff is put in place, and that the appropriate legal stuff is put in place. The same goes for anytime an external project becomes part of Helm. There are many cases where we've adopted external projects: the maintainers of those projects say, can we become part of the Helm org? And we go through this process of helping them transfer everything appropriately. While we have never actually had to do this, officially we're supposed to resolve disputes that arise between projects inside of the Helm org.
I honestly can't think of any case where the Helm org maintainers have had to do this, but at least nominally, should there ever arise a dispute between two projects in Helm, the org maintainers would be the ones to resolve it. And enforcing the code of conduct is the third one. We have a written policy for how we enforce the code of conduct so that it is done evenly and consistently. We have policies for when we escalate it, and we have policies for when we handle it locally. It is honestly the least fun part of the job, but it is something that we felt we needed a procedure for. So for the org maintainers, you're probably thinking, okay, this sounds like a lot of administrative stuff. And that is true. The org maintainers handle a lot of the administrative stuff for all the different Helm projects. So I jokingly call it sort of the boring stuff, right? Because the project maintainers are the ones who really do all of the work directly with the code and the documentation and things like that. So we'll spend a lot of time talking about project maintainers, but I did want to return very quickly and talk about the security team. In any project that has a substantial amount of traffic, a good number of users, a flourishing community, and a popular piece of software, it's a very good idea to also have a security team and a security mailing list. So if you ever should happen to find a security bug in Helm, you can go to pretty much any of the Helm repos, click on the security documents, and you will find instructions on how to confidentially report those security issues to the security team. The idea is that you can disclose it to a small group of people who can then quickly scramble to fix it and close the hole. And then we will disclose it and credit the original finder for their diligent work and things like that. Fortunately, it is a rarely used process. Unfortunately, we have definitely used it, as many of you are aware.
We also handle things like the security audits that happen every couple of years via CNCF. We are the ones who review the security audit reports. We work with the security auditors, help them discover things and do their job, and then receive the report and so on. And oftentimes the security team is actually the team that goes and fixes the code. GitHub provides some excellent tools for this. So here's kind of what happens, right? A new email comes in on the security mailing list. There are about eight of us on the list. Typically the first one to read the email starts DMing everybody else, because we really want to get going as soon as somebody notices it. And then the people who seem to have the most in common with the particular bug being reported are the ones who typically try to get together right away, virtually, and figure out what we're going to do to address the problem. So if it comes in as a Monocular bug, the team that's working on Monocular is typically the one that will start working together on a security fix. If it's the helm/helm repository, then typically the part of the security team that works on Helm will be the ones who take that on. We then use GitHub to clone a branch privately, work on it together, and peer review it together. On occasion, we do have to pull in non-security-team members to help us evaluate potential solutions and figure out the impact of things. But we get it all fixed up. We write a vulnerability disclosure and a CVE, prepare everything and get all of this lined up, and then we scramble to do a release: we build the binaries, post them, release the release notes, and publish the CVE, which gets reviewed by GitHub itself, and then we start trying to broadcast to everyone, this is what's going on. So the security team, because it has a high bar of trust, typically its members are project maintainers who have been here for a while, at least, right?
And who have some interest in, and some deeper knowledge of, how security works. And then the CVEs (I think I can answer this question here that just came up in chat) get published via GitHub, and GitHub distributes them to the common CVE databases. All of that now is handled by GitHub. We used to have to do all of that ourselves, so we love the new GitHub features for security. They have really, really cut down the stress of cutting security releases. Okay, so that's the security team. Project maintainers, to be honest, are the ones who do the most essential work for each of the Helm projects. They're the ones that make Helm and all of the projects work, and I could never sing their praises enough. What a project maintainer does is say, okay, for my particular project, I am happy to take on a number of responsibilities. Among these responsibilities is helping to manage the issue queues, so that as new issues come in, at least they get some response in a timely manner. Reviewing PRs is a huge part of the project maintainer workload. As community members submit pull requests, the project maintainers attempt to divvy these up and manage them, which means commenting on them, providing helpful feedback, on occasion explaining why we can't do something, and on occasion suggesting what needs to change before we can accept it. Also, flagging these things for additional security reviews. There's all kinds of work involved in this, and Adam's going to cover it in about an hour. Project maintainers also make many of the architectural decisions. Helm is large, Helm is complex, and it can sometimes be tricky to make sure that things are lined up appropriately. And project maintainers are the ones who are responsible for saying, okay, this PR came in and it changes this particular thing; let's step back and take a bird's-eye view of this and see if this is going to work well.
The project maintainers also maintain the roadmap for when releases will be cut and what features will make it into those releases, and the maintainers themselves also cut the releases. Some of the maintainers, like Matt Fisher, who's going to talk next, are absolute experts at cutting releases; others of us, like me, have cut several releases and pretty much consistently done it wrong every time. So some of us are better than others, but it's part of what we have to do as we manage Helm. But I did want to put this last part at the end: optionally, maintainers write code. Many of us do, but it is a misconception that project maintainers are the only ones who write Helm code. Occasionally, we will get people writing in and saying, when are you going to write this thing for me? And it's hard to explain to people that maintainers are very, very busy, often just going through that checklist there. The idea with Helm is that the community itself largely drives the changes to the code, and so we rely very heavily on pull requests that come in from around the community. Again, many project maintainers do write code for Helm here and there; particularly during Helm 3, lots and lots of core maintainers were involved in writing code. But on a day-to-day basis, probably very few of us have the time to actually write up brand-new PRs. We're usually spending that time reviewing other people's PRs, issues, HIPs, and so on. Another misconception I wanted to dispense with is that project maintainers have to work for a particular company or in a particular industry or so on. You don't have to be from IBM or Microsoft. You don't have to be from a technology company. You don't even actually have to work for anybody to become a project maintainer. It so happens that here and there, the people who have expressed interest have worked for places like Google and Microsoft and IBM.
But really, there's no criterion anywhere in the project maintainership guidelines that has anything to do with where you work. It has far more to do with who you are and how you want to help. And then finally, not all project maintainers are even software developers. Many of our project maintainers were brought in originally not to write or review PRs, but to help out in community building, or to help out in documentation, or to help out on Slack and things like that. And those people all play very, very valuable roles. It is true that some people who review PRs need to understand the mechanics of Go or of the Helm template language or charts or whatever in order to review those PRs. However, that is not the only thing that we need done, and so we need maintainers along a variety of different axes. So, becoming a project maintainer. We have been changing around the process for becoming a project maintainer. When we were a small project, it was fairly easy for existing project maintainers to just pick other project maintainers from the community and say, you know, this person's on Slack a lot, so we should see if they're interested in becoming a project maintainer. But there's just so much activity around Helm that oftentimes we are no longer the best judges of which people out there have a good, deep knowledge of Helm. So we are always looking for reliable project maintainers. And here's the process that we've come to more recently. If you're interested, you can nominate yourself. If you have somebody that you think would be a great maintainer, you can nominate that person. But the people that we're looking for should have at least some contributions to Helm to show. That might be, I did some documentation; that might be, I've been very helpful in the issue queue; or, I've opened some PRs or something.
But we need some information to be able to make judicious decisions about who will be a reliable project maintainer. From there, you can find a maintainer and let them know that you're interested. The simplest way to do it is to say, hey, I am so-and-so, I'm interested in being a maintainer of this particular project, and here's what I would like to do. For example: I'd like to work on the docs team helping with coordinating translations, or I'd like to work on the helm/helm team reviewing PRs and answering issues, things like that. And you should be prepared to commit some time, because project maintainers do have to commit a lot of time, but that's really the gist of the qualifications. From there, you'll submit this to a maintainer, who will nominate you if they believe that you're a good fit, and then the nominator will bring it up with the rest of the project maintainers. And there's a private process; we don't do this in public. We never want to have any of these kinds of things aired publicly, for the sake of people's privacy and so on. So we discuss it privately and say, so-and-so has nominated this person for this particular role. This person is well-known for doing this in the community, well-known for doing that in the community, but you might also be surprised to learn that they've done this other thing, or something like that. We discuss this for often a period of two weeks, and then at the end of that, a private vote is held, where only project maintainers see who's being nominated and only project maintainers vote. When the person is successful, they begin the onboarding process. And there's a document in the community repo that basically describes the onboarding process. So that's how it works behind the scenes. People might be surprised to learn what we look for, and I wanted to say it out loud, because if you're interested in being a maintainer, I think this is both encouraging and informative, right?
The big stuff we look for is honestly more about a maintainer's ability to be helpful to the people in the community. So our big checklist is: are they friendly and approachable, right? When they answer questions in the issue queue, do they do it with respect and kindness and gentleness? Those are all good attributes that we look for. Can they commit time? We have in the past had people say, hey, I'd like to be a maintainer, but I probably won't be able to do anything except one hour or maybe two hours a week. And we look and say, we have a limited number of people we can make maintainers; we're going to have to pick somebody who maybe has three or four hours a week, right? So time commitment is a big deal for us in that evaluation phase. And then we do want to make sure that we match the right skills. If we need technical maintainers, we're going to focus a little on: can they resolve PRs? Can they help debug issues with how the template rendering process is going? In other cases we might say, on documentation PRs, do they have fairly good grammar and an ability to convey ideas clearly, stuff like that, right? And then there's the social aspect to it. Are they good at discussing issues with people? Are they good at asking for feedback? Are they good at gently turning someone down when that person says, hey, I really need this feature in Helm 3, and they have to say, ah, we can't put that in Helm 3 because it would break a whole bunch of things for a whole bunch of people? Those are the kinds of skills that we look for. And then finally, the last bit before we vote is asking, okay, will this potential maintainer fit into one of our areas of need? We have, in the very distant past now, had a case where the right person came along just at the wrong time, and we couldn't have them be a maintainer because their skill set was not one that fit the needs at the time.
Later on, that person did become a maintainer when we had an opening in another area. So sometimes it's just a matter of fit. However, please don't let that stop you from applying, because that's the kind of thing where we can say, well, right now we don't need any X, but we will keep you in mind, or, are you interested in Y, that kind of thing. Finally, I debated long and hard whether I wanted to talk about this, but decided I should. There are things that we look for that are what we call red flags: reasons why we would not accept a nomination for someone and take them on to voting, right? And all of these are the kinds of things you would expect. People who are hostile to others, people who have violated the code of conduct in the past, people who just show up and say, I've never really worked on Helm, but I want to be a core maintainer. Those are all big warning signs for us. Also people who show up and say, I just want to be a core maintainer so I can get my particular feature in, and then I'm going to bail back out again. That is a warning sign for us primarily because we don't feel that person reflects the best interests of the community. That's a person who really reflects their own interests, or the interests of their employer, or whatever else. So those are the big things we look for. Again, not a big long list, and a fairly common-sense list, but after long debate over this slide I decided I really wanted to put this in, just so people understand what we're looking for on that side. So that's the organization as a whole. And in review, we have the CNCF, which we interact with quite a bit and which is an excellent organization.
We've got the Helm org beneath that, which tries to work on all the cross-cutting things that happen between the different Helm projects: Monocular, the GitHub Actions repo, the testing tools, the Helm client itself. The org tries to hold all those together, particularly in an administrative sense. We talked a little about the security team, whose job is to respond to security issues in the Helm ecosystem with responsibility, privacy, sensitivity, and promptness, definitely promptness. And then finally, we went into a little more detail about how project maintainers work, what the responsibilities are, what we look for, and how you, if you're interested, can become one of those. So I'm going to pass it on now to Matt Fisher, who's going to talk about the Helm improvement proposal process, how we use that to build out new features in Helm, why we choose to do so, and why we're really happy with that process. Thank you very much. I'm going to hand it over to Matt now. Go ahead, Matt. It looks like my webcam video is not working, but I think I can still share my screen, so let's give that an attempt. Can you see my presentation at the moment? Can you see it now with full screen? Yep. Sweet, okay. Awesome. So hi, everyone. I'm here to talk about the Helm improvement proposal process. HIPs don't lie. Shakira wrote that song specifically for Helm, so just letting you know. So today we're basically going to talk about the Helm project. We're not going to talk about just the helm/helm repository, though that is quite relevant here; we're talking about the whole organization. When we talk about the Helm project and where we started from, we started from a very small repository for Helm version one. It was known back then as Helm Classic, and we had a very modest number of issues: people who were coming in and asking for features, who were interested in contributing to the project, and we were able to handle the incoming queue quite reasonably.
A small team working on a small project, and so it was easy and hunky-dory to handle. Nowadays, with Helm the project and the vast amount of success that we've seen, we have a lot of interested people who want to file an issue, who are working on a feature that they want to have merged into Helm, or who have a support question or something along those lines. On top of that, we also have open pull requests from people who have actually written a feature and want to get it merged into Helm. So you can see here, we had only about 46 issues on the first one, for Helm Classic. In helm/helm now, we're looking at something kind of insane, which is 316 open issues currently and about 120 open pull requests, and we have about the same number of maintainers that we did before, with people moving in and out. And so, yay. You're looking at this and saying, how are we possibly going to maintain this entire project? So the challenge that I wanted to propose to everyone, and the challenge that we set out to achieve, was: as maintainers, how do we best spend our time here? When we're looking at the issue queue, it's not going to be easy to spend a couple of hours going through all 300 issues and then say, okay, I'm done for the day. We need to actually focus on where to best spend our time: where is the high-quality content that we can review and get ready to merge? So today I'm focusing particularly on proposals. There's also interest in things like support questions and all that kind of stuff, but I just want to focus today on proposals that are being made to the helm/helm CLI, but also to any other Helm project. So inside of Helm, there are about three different types of proposals. This comes from experience looking at the different proposals over time, and it categorizes the majority of them.
I know a couple of them are a little bit different, but for the most part, when we look at them, we categorize them into three different buckets. First, there's the IETF RFC. The Internet Engineering Task Force has an RFC process that has been very well thought out, and the proposals that go through the IETF are very, very well thought out by the people who are offering them. They provide very clear, concise use cases. They justify their need and usefulness to the general broader community, and the scope is very clearly defined: this is where something in the IETF starts, and this is where it finishes. Same thing for Helm. We find a lot of proposals where the scope is justified and everything else is all there. It's very clear: this is the use case that I want to use it for, and this is where I want it to stop; it's not going to handle anything further than that. That's great for handling scope, and backwards compatibility can also be addressed, or at least carefully considered. So for example, things like the DNS subdomain name proposal, I think it was RFC 1123. There's that one, and then there are a couple of other RFCs that expand upon it. In the process of writing out a new RFC, they talk about prior RFCs, and they also talk about backwards compatibility for the client, which is fantastic for people who are writing new clients against that RFC. They want to understand the backwards compatibility concerns: if I implement this RFC, or this Helm improvement proposal, what happens? Do I break compatibility for certain people? The other thing is that it can be very well tested.
It can be tested by the community, as in, it has been battle-hardened and people have used it in the industry for many, many years, or it has a very robust set of test infrastructure to ensure that when it's been implemented, it has been implemented according to the specification. So there are two different aspects to being well tested. And then many use cases are considered in its design. It's not just, I need to implement this and get this done. Take, for example, the output format flag we have in Helm. One approach could be, I just want to write an output formatter for YAML. But if a proposal instead says, I want to write an extensible output formatter, that shows that many use cases are being considered. It's saying, I need to implement a particular output format, but others might want to do something else, so I'll leave the door open for them to write their own output formatter in the future. So many, many use cases can be considered in that design. Then we have another type of proposal that I call "the solution." When we talk about the solution, we're talking about proposals that provide very, very clear use cases, and may solve one user's problem, but may not solve it for everyone else. What I mean by that in particular is that someone may come up with an issue, or want to implement a very particular feature set to be contributed to either Helm or to another project, but it doesn't necessarily consider other people's use cases. So for example, a hard-coded --output json flag solves a very concrete user problem, but it does not consider how it can be extended or how it can be used by other people. It's just a solution to the problem. It can be very well intentioned: they are trying to solve a problem, and they're trying to see how they can help out others as well.
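To make that contrast concrete, here is a minimal Go sketch of the "extensible formatter" idea described above. This is purely illustrative and not Helm's actual implementation; the Formatter interface and the registry names here are invented for the example:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
	"strings"
)

// Formatter is a hypothetical interface illustrating the extensible
// output formatter idea: each output format implements one method,
// and new formats can be registered without touching existing code.
type Formatter interface {
	Format(v map[string]string) (string, error)
}

// jsonFormatter handles an --output json style flag value.
type jsonFormatter struct{}

func (jsonFormatter) Format(v map[string]string) (string, error) {
	b, err := json.Marshal(v)
	return string(b), err
}

// tableFormatter handles a simple key=value listing; a contributor
// could later add, say, a yamlFormatter without changing any callers.
type tableFormatter struct{}

func (tableFormatter) Format(v map[string]string) (string, error) {
	keys := make([]string, 0, len(v))
	for k := range v {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic output order
	lines := make([]string, 0, len(keys))
	for _, k := range keys {
		lines = append(lines, k+"="+v[k])
	}
	return strings.Join(lines, "\n"), nil
}

// registry maps a flag value like "json" to its formatter.
var registry = map[string]Formatter{
	"json":  jsonFormatter{},
	"table": tableFormatter{},
}

func main() {
	for _, name := range []string{"json", "table"} {
		out, err := registry[name].Format(map[string]string{"name": "demo"})
		if err != nil {
			panic(err)
		}
		fmt.Println(out)
	}
}
```

The hard-coded version would bake `json.Marshal` directly into the command; the registry version is what lets other people's use cases in later.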
But there are also cases where the proposal may or may not include tests. They may just be trying to fix a problem for themselves, and in that case, they just implement it and send it up on GitHub to say, here's the PR that actually implements this; I haven't tested it other than, it works on my machine. The other part is that if it's a hack or a fix, it may actually fix a problem for them, but it doesn't necessarily conform to the same things that we have inside of Helm, like the backwards compatibility guarantees. It's very, very important for users, especially with the number of downloads that we get per month for Helm, that we keep backwards compatibility in mind when we're implementing new features. If we go and implement everything in the world but we break backwards compatibility, people who are upgrading from one version to the next are going to have a very hard time determining whether the next version will break for them, and they won't have that dependable assurance that if it works the first time, they can upgrade to the next version and it'll continue to work. So we need to incorporate those concerns into the proposal. The last one that I want to mention here is called the paper napkin. These proposals are literally written on something like a paper napkin and just thrown up there. It's more a thought process than actually thinking through the entire design. What that means is: I want to implement something, but I haven't really considered Helm's overall architecture and design. For example, Helm is a package manager, but if I want to write a particular feature that does not fit into the mindset of a package manager and I just want to throw it into Helm, that's not putting any thought into the architecture and how it fits into everything.
Another aspect of these paper napkin proposals is that the scope is usually undefined; it doesn't have a very clear start and stop. It can be a very quirky design, and it can often be very buggy. If there is any code that has been written at that time, it usually doesn't have any test infrastructure in place. I've found a couple of times that a proposal will be implemented, a PR will be written, and then I go and actually test it based on whatever documentation has been provided in the ticket, and I'll find that it doesn't even work. It often also fails to consider other design criteria. Again, think back to that hard-coded output format flag, but expanding on that: it's not even considering the overall architecture. And like I said, in some cases it doesn't solve the issue at all. It's just a bunch of code: "I packed it up and threw it up on there, and I may or may not have tested it, but I just wanted to do that." So at the end of the day, when we as core maintainers are looking at all of these proposals, we really want to promote them to the next level. We want more RFCs. And the right way to do this, I think, is to guide people in the right direction so they start thinking about those problems up front when writing up a proposal or coming up with a feature. I want them to be able to answer those questions ahead of time, so we can have more higher-quality proposals. So this is where the Helm Improvement Proposal process began. It's also known as a HIP, as I've short-formed it and mentioned a couple of times. This process is heavily inspired by the Python Enhancement Proposal, or PEP. If you look at the Python Software Foundation's repository, I think it's github.com/python/peps, you can find every single PEP that's ever been written, archived there in text format.
I believe it might also be the living repository where all of that is maintained. They've already trodden this path before, and we wanted to take that same process with a couple of tweaks here and there, because they have things like a PEP review committee and a couple of other resources available to them that we don't necessarily have. They also have some other processes, like going through the python-dev mailing list and the user mailing list. So we make suggestions here and there on how to help your proposal along, but it's localized to Helm: the Helm Slack community channel, the Zoom dev calls (or, if it concerns another project, that project's community calls), and then opening a ticket or opening up a HIP itself. So, the different types of improvement proposals: there are three different types, and again, this is based on the Python Enhancement Proposal. There's the feature HIP, which I've talked about before. This describes a new feature implementation in any of the Helm projects out there. One example would be HIP 6, which covers the experimental OCI support. It talks about the feature enhancements needed for that project as well as any backwards compatibility concerns, such as how we plan to migrate users from the chart repository API to OCI, things like that. There's also the informational HIP, which describes a design issue or provides general guidelines on a particular issue. So for example, HIP 11 talks about custom resource definitions and how they're handled in Helm 3. It's always been a bit of a tricky subject when we talk about upgrading those resources, because deleting a CRD results in a cascading delete of all custom resources of that type.
So we need to keep those concerns in mind, and a HIP is a great way to document how CRDs are handled in a particular way in Helm. It gives the community a concrete reference for why something's implemented a certain way. We describe open design issues inside of Helm, like limitations that we might have, things like that. And then there's also the process HIP. These describe processes surrounding the actual project: things like how projects join the Helm organization. There's another proposal about how people become emeritus maintainers, I believe, or how to be onboarded as an actual Helm maintainer. Those are also Helm improvement proposals, and they're living documents that can be changed until they're finalized, which I'll talk about in a second. The very first HIP is HIP 1, which describes how to actually write a Helm improvement proposal. It's quite a detailed document. It also covers how a HIP is reviewed and how the process moves along, but the main parts of a HIP are described inside this document. There's the preamble, which is like YAML front matter at the top of the document: the proposal number (once it's eventually assigned), the title of the HIP, the author of the HIP, information like whether this HIP supersedes a previous HIP written by someone else, its current status, things of that nature. It's more metadata about the document itself than human-readable content. The abstract gives a very short description of the proposal: as if you're sitting in an elevator with someone, how would you describe what you're trying to achieve? The specification is the actual technical documentation: how it's going to be implemented.
So for example, if it's a process HIP, it's actually talking about the process of how a project will join the Helm organization, or it might be a very technical deep dive into how a particular feature is implemented inside of Helm or another project. The motivation section is actually quite important, because we want to be aware of what the motivation behind a particular feature was. Is this someone who is interested in their own ideas and wants to write this feature for themselves? Is it someone in the research field who's interested in testing out an idea or a theory? Or is this something that has come through a committee or a working group that wants to offer this proposal as a general feature for the entire Helm community? Then there's the rationale. The motivation is why we should consider it, but the rationale is the reasoning behind the design: things like why you chose certain design criteria over others. There's the backwards compatibility section as well, where you can describe any incompatibilities and their severity. For the OCI support one, we had to talk about backwards compatibility: how we migrate users from the chart repository API, if at all, and how that could be achieved. How do we address dependency resolution, things like that? Because there are some changes there that could affect other components inside of Helm. And then there's the actual reference implementation. This could just be a simple link to actual code, as in a pull request that's open on a project. It could also just be some small sample code. So if you are writing an improvement proposal for, say, a new library that you want to implement, a reference implementation could be a code sample showing how to use the library, along with a link to the actual code itself.
That could be quite useful if, for example, you're trying to donate a project to the CNCF, even though that doesn't require a HIP; that's just one example. So who can approve a Helm improvement proposal? The type of proposal determines who the right people to get involved are. For feature proposals, generally speaking, the best fit is a project maintainer on the particular project the proposal is trying to address. If it's a HIP that encompasses multiple projects, then maybe you need the maintainers of those projects, say the Helm project and ChartMuseum, to talk together about the proposal. But that's generally who would approve that HIP, because it's best for the people who write and maintain the code to review and accept the proposal. There are also the process proposals. If it's a process related to a particular project, for example like that CRD handling HIP, then it will be a project maintainer from the Helm project. If it's related to governance, that usually falls under the entire organization, because it reaches every single Helm project community, so the org maintainers will be reviewing and approving those. Then there are informational proposals; these are just kind of good-to-know proposals, more or less, so it's discretionary. It's usually project maintainers who will review those, though they could be org maintainers, depending on the kind of information being provided. So, say you actually wrote a HIP, it was assigned a number, and it's been approved. Now what do you actually do? Adam's going to dig into this a little bit further, but I want to talk about actually doing the thing. When a proposal has been assigned a number, it is marked as a draft.
What that means is it's now a working, living document from that point. The reference implementation still needs to be finalized, because it hasn't been reviewed and accepted, and at this point it might never even be implemented; it's just an idea. So the HIP can still be changed at any point during this time. Being marked as a draft does not mean it is a final document that everyone should subscribe to. It's a living document that people can change as the design is iterated on. There's also provisional acceptance. Provisional acceptance means the proposal has been accepted for inclusion as a HIP, but it needs additional user feedback before it's finally approved. What this means for project maintainers and people who are maintaining the HIPs is that the HIP can still be rejected, and marked as rejected, even after changes have been made to the project. If you have a proposal that only solves a problem for you and it turns out to be a really bad idea, or the project has decided to move in a different direction, or there's another way of implementing it that also handles other use cases, then that proposal can be rejected, and a new proposal can be introduced as a new draft to supersede it. Basically, this gives the project maintainers a tool for saying: yes, we'll accept this proposal, but there may be questions about the design, so this is probably a good time for you to go implement it and then ask for more feedback, to see if the design works for people or if you need to go back and make more changes. And then there's the final status. This is when the actual reference implementation has been merged and pushed into a release. If it's just an informational HIP, it's considered final after it's been written.
Because it's just more of a nice-to-know at this point, it's already written down; you can write a new HIP if new information comes up, things like that. For process HIPs, if it's a governance change, then you'll probably want a vote from the org maintainers, just to make sure everyone is in alignment. If it's a project process, then you'll probably want just a lazy consensus vote or a simple majority vote, and the governance documentation talks about that as well. There are a couple of other states your proposal can fall under. One of them is deferred. This can happen if a HIP has been accepted and moved into a draft state, but there has been no progress: the author has gone AFK, away from the keyboard, isn't planning on implementing it anytime soon, changed jobs, things of that nature. Again, it's all about humans. So if we find ourselves in a state where we're trying to move a HIP toward something but we don't have anyone working on it, then we'll move it into a deferred state, and someone can take it back on later to bring it back into draft. And then there's rejected. If a proposal turns out to be a bad idea for any reason, whatever it happens to be, it's still very good to keep it around for historical reference and future learning for other users. This is a good idea for any project: it gives you a reference point to say, hey, we've tried these ideas in the past, but these are the problems we saw in the first design, so let's see if the next proposal addresses those issues. So, the advantages of this process: it gives us very high-quality proposals. People who write these proposals have to address all the required sections of the HIP, which makes them think about the problem at hand as well as any questions that may come up throughout the proposal process.
If they just go and do the paper napkin proposal like I was talking about before, then it can be outright rejected, because not enough thought has gone into the proposal for us to reasonably spend our time getting it into the right shape. So it really helps us as core maintainers make sure that we are spending our time well. It also gives very clear, transparent guidelines for all core maintainers and all authors on how to get your proposal reviewed, approved, merged, and into a final state. It's a very clear process that's all in documentation. It also means fewer discussions happen behind closed doors. We've had this feedback a couple of times, where features inside the Helm project in particular were implemented without consultation with the community, or were implemented in a way that doesn't address a particular use case, and someone comes along a year or a couple of months later and says, hey, this doesn't address my use case, so I need to go and make some changes. And where was that discussion happening? This process opens up that open-door philosophy where everyone can contribute: it's all there, you can comment on the HIPs, you can propose changes, and so on. And again, like I said, Shakira wrote an actual song for this proposal process. Now, where HIPs don't make sense: this is just as important as knowing when to write a HIP, because we don't want to tell people they need a full proposal for every small thing. If it's a very small PR that requires very little design work, say you're implementing a one-liner fix that just changes the output of an error message or something like that, we can usually just review those without any other concern. Bug fixes, too: if something doesn't work the way it should, then it's a bug fix.
It doesn't really need a proposal. Then there are features that can also be written as a plugin. This goes back to "the solution," the special use case I was talking about: if it's best to solve one user's problem and you want to write it as a plugin, go right ahead, just write a plugin. Once there's enough community engagement and we see more interest in it, if you want to incorporate it into the project, then we can write a HIP on how it would be merged into core, how backwards compatibility could be addressed, how the architecture can be addressed, things like that. So, here are a couple of references. There are other processes that other projects use as well; the Kubernetes project uses the KEP process, which is also very similar to Helm's, but when writing out the HIP proposal process, it was based more on the Python PEP process. And then there's also, of course, the IETF RFC process, which is always fun to read as full-text documents. So there are some references there; I'll share the slides afterwards in the Slack channel. That's it. So with that, I will pass off to Adam. Just so you know, Adam, I believe I just made you a presenter. So I'm going to switch the presenter mode on. Okay, can you hear me? Yeah, I can hear you. Okay, there we go. Can you guys see my screen? We just see your pretty face, Adam. I think if you close out the tab and then reopen or refresh it; it could be related to permissions in the browser. And with that, he's gone. The alternative is, if you want to share your slides with me in a PM, you can also present from my computer and dictate. While that's going on, I did want to ask you a quick question; great presentation, by the way. I liked that you covered a few things at the end, like, hey, you don't need a HIP for every single one-line PR. But I think there's a vast gap between "I want to rearchitect everything about CRDs" and "I want to fix one line."
Can you give us a tiny bit more insight into what process people should use, or how they should approach figuring out which side of the HIP/no-HIP line they're on, before they write a big pull request? Yeah, I think in general, think in terms of backwards compatibility: are you writing a feature or a bug fix that might affect more than one person? If it's just an error message change or the fix of one typo, those are quick, easy things. But if it starts getting any bigger than that, that's when you have to start questioning whether an improvement proposal is needed. I think the easiest approach is to open up an issue; that's a good first place to start asking the question. Then we can say, oh, this might need some further discussion, so an improvement proposal is a great way to start. Or we can just say it's a small one-liner fix, or the work from another HIP covers this: for example, the output format flags. If it's just a YAML formatter that you want to implement, it's pretty painless to go and implement that. It's really the cases where you need to consider backwards compatibility that call for a proposal. All right, wonderful. Thank you. And it looks like we're about ready to hand over to Adam. All right, sorry about that. Yeah, so I'm going to be covering PR reviews. Matt Butcher covered who the players are, and then Matt Fisher covered the enhancement proposals, the ideas that are coming in. So what I'm going to focus on is looking at the actual code and what it takes to get code reviewed and actually merged. First I wanted to thank Karen, Bridget, and Matt for putting this on. Thank you for doing this; I know it's a challenge doing it virtually. So, first off, with pull request reviews we want to be helpful and constructive. Sorry, my screen share's going weird.
Let me just do this. Can you see it okay? Still? Yes. Okay, let me keep it on. We want to be helpful and constructive. Really, we want to encourage people to contribute, and we want it to be a good experience. So we're going to try to help and nurture people along the way, to make it as easy as possible and encourage people to come back. This was covered a little bit in the HIP presentation: the first check when reviewing a pull request is, does it fit with Helm's intent? And that's to make it easy to package, share, deploy, and manage Kubernetes applications. Our goal is to be the package manager for Kubernetes. So if it doesn't really fit along that line, then we're probably going to push back on it and evaluate whether it really fits into Helm. The other thing we look at is: does it solve real problems for real users? If somebody comes in looking to implement a feature that is only for their workflow, we want to try to avoid that. We want to look at how it's going to benefit the entire community. If it's just a single feature for a one-off, we don't necessarily want to take that on as tech debt and have to maintain it going forward. Probably the biggest thing we look for is backwards compatibility. That's been covered a little bit as well, but we really try not to break things for our users. Helm is used in production, and we want to make sure that when new features or bug fixes go in, stuff is still going to work as expected for users. The things we look at for that are, first, APIs. What I mean by APIs is anything public-facing from the code. That includes function signatures, anything exported in the Go code, and Go interfaces; we want to make sure those don't change. In Helm 2, we also have a gRPC API exposed, so it includes that as well. Then there's input and output.
This is any data going in or out of Helm. When people are scripting Helm, we want to make sure the output is what's expected, so that scripts don't break for users. That includes any sort of output: YAML output, JSON output, even error messages that are returned. Another thing we don't want to change is any default configuration. If configuration is not set by users, then Helm should behave exactly as it did before. Another thing is chart format. This includes anything in the chart: the Chart.yaml, how requirements are defined, how dependencies are defined. And then also anything on the CLI; that's part of the API as well, because it's the interface for using Helm. So we want to make sure that sub-command names are not changed or modified, as well as flags. Anything that is added should be a new flag or a new sub-command; we don't want to change anything that's existing within Helm. Another thing is backwards compatibility with Kubernetes. We want to support whatever the current version of Kubernetes is plus the previous two versions. We will definitely try not to break backwards compatibility further back than that, and we'll try to help out users as best we can, but our stated goal is the current version and the two prior to it. The other thing with backwards compatibility is behavior. That goes along with any flags or functions, really any of the things previously stated: we want to make sure that the behavior stays the same in new versions going out. Another thing we look at is coding conventions. We're not going to push back too hard on this. We try to follow standards and best practices, but we weigh a number of factors, because we really just want as streamlined a process as possible to get your code merged in.
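The API compatibility rule above, that exported function signatures must not change, usually means adding new functions rather than modifying old ones. Here is a minimal Go sketch of that pattern; the function names are hypothetical illustrations, not real Helm APIs.

```go
package main

import "fmt"

// Render is an existing exported function. Callers depend on this exact
// signature, so changing it would break backwards compatibility.
func Render(name string) string {
	return fmt.Sprintf("release: %s", name)
}

// RenderWithNamespace adds new behavior as a NEW function instead of
// adding a parameter to Render. Old callers keep compiling unchanged;
// new callers opt in to the extra behavior.
func RenderWithNamespace(name, namespace string) string {
	if namespace == "" {
		return Render(name) // default behavior is preserved
	}
	return fmt.Sprintf("release: %s (namespace: %s)", name, namespace)
}

func main() {
	fmt.Println(Render("wordpress"))
	fmt.Println(RenderWithNamespace("wordpress", "prod"))
}
```

The same additive principle applies to the CLI: new capability arrives as a new flag or sub-command, never as a change to an existing one.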
With other maintainers and contributors that we know and trust, we'll maybe push back harder on following these guidelines; with new contributors, we try to make it as easy as possible. A few things we use as references are Effective Go, from the official Go documentation, and the Go Code Review Comments page on the Go wiki. We also check spelling and formatting. Pretty much all the formatting and linting rules are built into the CI pipeline, so if it's green, you're good to go there. Formatting is basically just gofmt, and the linting rules check for things like dead code and spelling mistakes; that's all built in. You can run that locally after you write your code using make test-style. Another thing is testing. If there's substantial code that changes or adds significant behavior, we're going to require that unit tests be written with it. Our aim is production quality, so we want to make sure that stuff is well tested. When you're submitting a feature, unit tests are really the only way to ensure that the feature persists as we make changes to other parts of the code; that's really how we know whether any previous features are going to break. If we don't have that sort of feedback when merging other pull requests, then there's no real way to know whether the feature you submitted is going to be affected. Tests can be run with make test, which also runs the lint checks. Also, documentation. Documentation is required for all new features. If it's a flag or a new sub-command, that means adding documentation to the help text that's printed out on the command line. If it's new exported functions or APIs in the code, then API comment blocks, so it shows up in godoc.
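The unit-testing requirement above is often satisfied with table-driven tests, a common Go style. The sketch below shows the shape as a self-contained program rather than an actual `_test.go` file; parseFlag is a toy stand-in, not a Helm function.

```go
package main

import "fmt"

// parseFlag is a toy function standing in for new behavior under test:
// it accepts a known output format and rejects anything else.
func parseFlag(s string) (string, bool) {
	switch s {
	case "json", "yaml", "table":
		return s, true
	}
	return "", false
}

// A table-driven check in the style of Go unit tests: each case pairs
// an input with its expected output, so a future change that breaks
// existing behavior fails loudly instead of silently regressing.
func main() {
	cases := []struct {
		in     string
		want   string
		wantOK bool
	}{
		{"json", "json", true},
		{"yaml", "yaml", true},
		{"toml", "", false}, // unsupported format must be rejected
	}
	for _, c := range cases {
		got, ok := parseFlag(c.in)
		if got != c.want || ok != c.wantOK {
			panic(fmt.Sprintf("parseFlag(%q) = %q, %v; want %q, %v",
				c.in, got, ok, c.want, c.wantOK))
		}
	}
	fmt.Println("all cases pass")
}
```

In a real contribution this table would live in a `func TestXxx(t *testing.T)` using the standard testing package, and would run under make test along with the lint checks.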
Also, if there are internal functions that are more complicated, then we might ask for inline comments there as well. A lot of our documentation is on helm.sh; that's in a separate repo, helm-www, and it's higher-level documentation that covers quite a bit. So if there are any nuances, that's the place to record them; that's the documentation that helps people understand behavior. All of us have different ways of reviewing the actual code, so I'll just talk about my process a little bit. If a PR has made it through all those initial checks, I will actually check out the code locally, build it, and validate the behavior. Sometimes that might mean asking for a specific test scenario, so we can observe the expected behavior by running commands manually against a test server. Another thing I will do is run just the new unit tests against the old code and make sure a failure occurs; that way it's known that the tests are actually testing for the new behavior. And then before a pull request gets merged, we want to make sure that all checks are passing: the unit tests, the linter checks, as well as the check that the commits are signed. We have a Developer Certificate of Origin; what that is, is you're basically signing off that you are who you say you are and that you're allowed to make the changes and contribute them. You can add that to your commits with the sign-off flag on your commit command, which adds text at the bottom saying that you have signed off, and there's a CI check that will verify that's been done. Also, for PRs, at least one LGTM from a maintainer has to be on the pull request. If the PR is marked as large, then two LGTMs are required. We have a bot in GitHub that calculates the number of lines changed and automatically assigns a size label.
Sometimes, if the feature is more complicated, we'll also ask another maintainer to take a look and give another LGTM. For the LGTMs, we use the GitHub interface for approving. So, when will your change be in a release? Patch releases, which are bug fixes, we typically do monthly. There are some special cases where we'll get them out quicker; security releases are on their own cycle, and we'll get those out as soon as possible. Minor releases are for new features; those come out every three to four months. Major releases are any release with breaking changes, and those are a lot less frequent; that would be switching from Helm 2 to Helm 3, so our next major release will be Helm 4. As for how we do branches: the main branch should always be buildable and usable, and any bug fixes should be made on main, so that the main branch can be checked out and used at any point. For minor releases, we create a release branch, like release-3.4, and then the next one would be release-3.5, and tags are cut from that branch for patch releases. Fixes, like I said before, are made on main, and to cut a patch release we cherry-pick the fix onto the release branch and then cut a new tag from that branch. Most of the time we can get away with a cherry-pick, but sometimes there are conflicts and we might have to make changes; those changes kick off a whole other review cycle, and we'll open pull requests specific to that branch. The last step is writing release notes and generating a changelog. Changelogs can be auto-generated; there's a make target for that, which looks at the difference between two releases and spits out a formatted log of the commits that were added. The release notes we like to write out by hand.
The changelog is usually included in that, but we'll break out what the important changes are and rewrite some of the commit messages to be a bit more human-friendly. Okay, and I finished way early. Thanks for having me. Here's a bunch of links; a lot of these have been shared already, but everything I covered here is documented in much more detail. We have a contributing doc in the Helm repo, and on the website we have a developers page, which is really about getting your environment up and running and what you have to do to get ready to hack on some code. Then there's a release checklist, which goes into a lot of detail on the release process: how to generate the changelog, how to cut the tags, how to cut branches, and so on. Another good resource is the onboarding guide, which explains a lot of what we look at when reviewing PRs, how we triage, things like that. So if there are any questions, I'm happy to answer them, and I can put these links into chat as well. There's one question in the chat, Adam, and I'm going to set up the screen share for the next session while you're at it. The question is: how do people figure out how long their PR will wait to go out? As soon as it's merged, it will be assigned a release. If it's a feature add, it will go into a minor version, which is released every three to four months, so it depends on where in the cycle it gets merged. If it's a patch, like a bug fix, then it will go out sooner; those go out every month, I believe. So it really depends on where it gets merged in the cycle. Looks like there's another one: what can go into a patch release versus having to wait for a minor? A patch release is bug fixes, so any pull request that corrects behavior that was operating other than intended goes into a patch release. A minor release is any feature added. And yes, I'll post the slides in the Slack channel.
Okay, thanks. Thank you. Thanks, Adam. All right, we'll just slide on into our last presentation. Karen, do you want to take it away, or do you want me to? I'll just hand it back over to you, but I'll start it. All right, so we're just going to talk about where you go from here. You learned all this stuff today, but what does it actually mean, and what are the high-level takeaways, in case you don't remember everything? Yeah, so, next slide. All right, Karen put together a couple of slides that summarize some of this right here at the end: one for new contributors and one for people who are interested in being core maintainers. So what are the next steps for getting involved? As a new contributor, we do our best to label issues in the various repositories for Helm. We try to add "good first issue" as a label on anything we think doesn't require a whole lot of background to get started with. Documentation is also always a good place to start, because often that's the first thing you're looking at when you come into the project. And you'll notice, I think it might have been Martin who said it earlier today, that we spend so much time so close to the project that we no longer have fresh eyes to see the mistakes we've made. A lot of times the documentation makes perfect sense to us because we've been doing this for so long, and contributors come in and say, this doesn't make any sense. It's always great to have someone come in and say, we could clarify this. So docs are always a great place to start. Karen, do you want to talk about the developer call and chat stuff? Yeah, so we hold weekly community dev calls. They are on Zoom, Thursdays at 9:30 a.m. Pacific time, and anyone is welcome to join. We have an agenda, and again, we do these every week; they're about an hour or so.
And yeah, join us if you just wanna meet the people who are driving the project or if you have any questions, things like that. And then in the meantime, we are on the Kubernetes Slack org. If you wanna chat with us, there is a helm-dev channel and also a helm-users channel. Yeah, anything to add there? Nope, that sounds good to me. Similarly, if you're interested in becoming a maintainer of one of the projects, that community call is a great place to start getting connected with other developers. I know that doesn't work for all time zones, and we apologize about that. We kind of had to pick a time and stick with it. But if that time zone works for you, that community call is great. But Slack is always open. There are always people interacting there. So that's a great place to go have good discussions with other contributors and maintainers as well. But really, if you're interested in getting into the maintainership role, doing some work inside of Helm on whichever project is particularly interesting to you is always great. Issues, too: helping other people on issues. A lot of times the thing that jumps out to us is when somebody posts an issue and says, I've got this problem, it's broken, this is broken, and that alone isn't enough for us as maintainers to be able to jump in and follow up. And someone else will come along and say, yes, it's broken, here's a case to reproduce, here's how I figured out what was going on. And that kind of thing right there makes a huge difference to us. So that's definitely one of those things where, if you're looking for a way to get involved and potentially become a maintainer in the future, those are great kinds of things to do, along with PRs and filing issues and coming to the meetings and being in Slack. And of course, Bridget's name is on this slide because this is how we do collaborative slides. We just keep adding letters one at a time until it spells a name. And Bridget was the one that came up.
But you can reach out to any of the core maintainers. Bridget is one of the core maintainers on the docs repository and also active in pretty much all aspects of the community. Since Bridget turned on her video, I am going to let her unmute as well. I didn't realize that my name was on a slide. And apparently the steps to getting involved include: reach out to me. You can reach me on CNCF Slack or in the Helm channels or in the community call. And we are thrilled to chat with people at the community call. I see people in the attendee list here who have come to the community call, brought up issues, gotten their PRs adjusted and merged. So, actively engaging with people. Yeah, so that's how you can get started in either of these roles. Thank you very much for coming, for participating, for asking questions, and just hanging around with us for a day. I really appreciate it. I'm going to turn it back over to Karen, who gets to lead the really, really exciting part of the day. But before that, I'll remind you, and she'll probably remind you again, please go ahead and sign up for the free gift that's in both the Slack channel and also the chat here. It's really cool. It's a custom design just for this particular event. And we just wanted to give you something as appreciation for your coming and participating today. Karen, I'll hand it back over to you. Cool, next slide. Thanks, cool. All right, so we're going to go over and highlight some contributors that have been nominated by our maintainers for their hard work on the Helm project. And yeah, I mean, a huge thank you to these people. Like I said at the very beginning, we know it's people and communities that are at the heart of the project, and we want to make sure that people know that their work doesn't go unnoticed. So next slide. All right, so these nine people have been nominated by our maintainers for their work these past few years.
Matt and I are going to share some comments that people have dropped in about why these people deserve some recognition. So to start, we have Bridget. As you can see, Bridget has been very active on the call today, fielding questions and just being helpful overall. Someone wrote that there are too many things to thank Bridget for. And from my end, you know, I work a lot with Bridget and I do see the work that she does. I think she has also recently been carrying that "chop wood, carry water" spirit forward. So this is very in line with what she does, and thank you for your recent work on updating and translating the docs and also getting all the good first issues in order, as Matt just mentioned. Matt, do you want to do the next slide? Yep. Not all the people on this awards list are official core maintainers, and Karina is a good example of this. She's been coming regularly to many, many Helm events. She's here today. I have personally watched her encouraging community members to participate in everything from translation and documentation to PR reviews and feature additions. And she's been just a very, very profoundly positive personality in our community. Thank you very much. Well, next we have Leo. Leo has been working on OCI support for the last year. So, you know, we just want to thank him for his continued dedication to his work and to the Helm project. He is not necessarily a maintainer of Helm itself, but he is contributing to the project overall, so we felt it made sense for him to be included on this list. Yeah, it has been excellent to see these award nominations come in for both community members and core maintainers, for people new to the project and people who have been around since practically its beginning. Mark is another one who has contributed to Helm.
I liked how the first person to nominate him said that Mark has contributed in unique ways: he jumped in by building something that all of us had kind of wanted but nobody had had time to do, taking auto-completion and turning it into a far superior thing to what we had originally started. And then he too has become just an absolute powerhouse core maintainer who has done a lot of work in service of the community and really helped many, many people bring their ideas through the issue queue, through the PR review process and into production. Awesome. Next we have Martin. Martin's been a driving force behind the never-ending Helm 2 to Helm 3 migration. He worked on the Helm 2 to 3 migration tool, and maybe some of you know that he also helped with our recent Helm 2 to 3 migration workshop. With that work he's also collaborated with a lot of other contributors and helped bring them on and get involved with contributing to the code base and all that. So thank you, Martin, and thanks for always staying up late, I know how late it is for you, to help with our events. Matt Farina is the kind of person who would always be at an event like this. However, first thing this morning he sent me a picture from the beach where he is on vacation with his family. Matt has been a long time collaborator on the project. He's one of the org maintainers. He diligently answers questions on the issue queue, helps review PRs. He's a member of the security team. As one person said about him, and I totally agree with this, if Matt Farina was gone, I feel like the project might just grind to a halt. He's gone right now. So we'll see over the next couple of days if it does. But thank you, Matt, for everything you've done. Moving on to Matt Fisher, similar things were said: Matt is always there to help with complex questions and is just really nice and gracious overall.
And you know, Matt's been with us, been with the project since very early on. So thank you for your dedication all these years. And Reinhard: I believe the statistics show Reinhard is actually the most active member of the community as far as PR reviewing and PR contributions go. He really transformed the charts project from a rough collection of a few dozen charts into what it was when we transferred it all over to Artifact Hub. Since then he's been instrumental in many of the other projects around the Helm ecosystem, including the GitHub Actions project and the chart testing tool. He's always friendly, always helpful. And he is continually the one who does a lot of the "chop wood, carry water" kind of work that is absolutely critical for a community of this size to be able to continue to function. Cool. And last we have Scott. You know, Scott has also been involved for a while now, and he's been a notable advocate since the beginning. His latest thing has been his work on the charts project's deprecation, and he's been doing an awesome job helping people migrate their charts from the stable repo to other repos. So we just wanted to highlight that. And also, Scott has been super active in helping out at a lot of our events as well, which I wanna highlight. So yeah, you all will be receiving a little award in the mail. I'm still working on this, so sorry it's late, but it's coming, and I promise it'll be cool. And yeah, we hope that more of you join us in contributing to the project in the future. And we wanna make sure, again, to highlight the work of all of you who do important work, and know that it doesn't go unnoticed. Again, thank you all very much. Cool, last slide. Thanks, everyone, for joining us today. I have a short survey, it's like five questions. You know, this was our first time doing the contributor summit. If any of you have feedback, we would really appreciate that.
I know things were a little bit tough with the technical difficulties and all that. So aside from that, content-wise, and just making sure that you learned stuff, please let us know in the survey. Again, there's a link for the attendee gift. And then if you wanna stay in touch with us, you can catch us on Twitter. Our handle is @helmpack. And then if anyone wants to join our mailing list, you can do so at the link provided as well. If you join that list, I think you should also be able to see the Helm dev call calendar invites. All right. Any parting thoughts, Smaa or Bridget or anyone? Again, thank you to all of the presenters, who did a lot of work to prepare for everything today. And then we were somewhat thwarted by the flakiness of the platform. But I really appreciate the work you all did in putting this together. And thank you to the many attendees who came, asked questions and participated. We look forward to seeing many of you virtually at the next KubeCon, which is coming up in only a month. I can't even believe it. All right, thanks everyone.