Welcome to the session, Helm: Past, Present, and Future, where some of the Helm core maintainers are going to tell you about where we've been, where we are today, and hopefully where we're going in the future. I'm Matt Butcher. I'm Matt Farina. And I'm Bridget Kromhout. I think a lot of people are interested in what will Helm 4 look like? And there are many possible answers here. Butcher, do you wanna give us some of them? I think Helm 4 is going to present a very interesting problem that we will wanna solve. And that is, what exactly is Helm? Is Helm really and truly just a package manager? In which case, have we gone too far with many of the things we've done in Helm 3? And is it time to pare down and get back to basics? Or are there still major features out there that Helm needs and is still lacking? Or is it some mixture of the two? And I think that's gonna be the number one thing that the Helm maintainers are gonna have to solve as we kind of look at and move toward the roadmap for Helm 4. So what you're saying is, we have the floor wax and dessert topping problem that we need to figure out the answer to. What's our move here, Farina? I don't know. The one thing that I wanna see in Helm 4 is probably a little bit more consistency, because some of our APIs and some of our conventions differ, like the way we have a couple of different default time formats. So if you're using Helm as a building block for other things, which happens with package managers, you want a certain amount of consistency. And I think because we really put a focus on that in Helm 3, we can learn a lot and improve on that in Helm 4. And so that's kind of what's on my mind. But again, I don't know, because we've got time. Yeah, absolutely. And you both bring up some interesting points and I think we'll explore those a bit more. Just the idea of package management, deployment. It will help solve all of your CI/CD problems. 
There are so many possibilities here that I think we probably answer this by going into the wilds of the past. Farina, weren't you just telling us about this? You just looked at some dates. How far past are we talking? So if I remember right, the first commit of Helm was October 19th, 2015. So just over five years ago now, the first Helm commit landed. That started this whole thing off. And it all started because of a company that Matt Butcher worked at. Deis had a problem with their PaaS, called Workflow. Matt, can you talk a little bit about what that problem was? Yeah, so coming out of Engine Yard, we had already built a product that was a general PaaS that ran on a number of orchestrators. And we decided that we wanted to take that platform and rebase it on Kubernetes and really focus on taking advantage of all the things that Kubernetes, which was at 1.1, 1.2 at the time, had to offer. But one of the first things we discovered was that the process of installing our own PaaS on Kubernetes was tedious, sometimes taking hours and hours just to get all the YAML manifests uploaded and things like that. And we realized that there had to be a better way. And in reality, we were just dealing with a set of YAML manifests that we needed to upload and install in a particular order. So there had to be a better way to do that. And that was kind of the core intuition behind what became Helm. And this came out of a hackathon, which would lead you to believe that it perhaps didn't start with very specific and detailed planning. Farina, you joined the project slightly after that hackathon. Can you talk a little bit about the decisions being right versus right now, or how you even evolve a project that starts small and becomes giant? So by the time I had joined the project, Helm had already merged with a Kubernetes project called Deployment Manager. And that's where Helm v2 came about. 
And then after that, it was now under the Kubernetes umbrella, and Kubernetes after that joined the CNCF. And so we had Helm version two and we had charts. And that's kind of where I got involved, after Helm version two was out the door and we were looking at how do you grow Helm and what's useful? And if you're gonna have a package manager and you want it to be useful, one of the things you need is tons of charts, tons of content, tons of packages for people to install and use. When I came in, there was lots of manual curation happening, and quite frankly, if you're doing lots of manual reviews of the same thing over and over, it presents this great opportunity for automation. And that's where I jumped in, and I helped automate a lot of these reviews and get it going so you could install tons of things. And we ended up with hundreds and hundreds of packages in the stable repository that people could use, and then packages elsewhere through people's own repositories. Ironically, this was not the first package manager Matt Farina and I had worked on together. Matt Farina and I had written Glide, which was a package manager for Go, just a few years prior to this. I remember that. Yeah, yeah, and it was interesting to go from that set of design constraints, the way you're thinking about a dependency management system that's largely about managing source code, and then switching over to something like Helm, which was really more an exercise in something closer to an operating system package manager. And back then we used to talk about that all the time: how, in our minds, if we talked about Kubernetes like an operating system, or the next evolution of an operating system up the stack, then Helm was really closer to something like apt-get or Homebrew. A lot of our very early design inspirations came out of those, because of the metaphor that we kept saying to each other: what if we just treat Kubernetes like an operating system? 
What if we just treat packages the same way you treat installing something, a new binary package, onto your operating system? At the same time, even though we had already come from a background of working together on package management systems, it actually presented so many differences this time that Matt kind of dove in on the packaging side at the same time that I was really focusing more on the command line side and the Tiller side back then. So in spite of the fact that it was common ground in some ways, it was a totally different experience for us than building Glide had been. Yeah, now you've mentioned a number of pieces there that I think you've gone one direction and then another on. With the benefit of all of the hindsight, what now would you do differently than you did in the last five years? So this begins the four-hour portion of our show. There are some things that I have often wondered if they could have been done better, and I try to remind myself of what the constraints were at the time. The original Helm did not have templating at all, and an early prototype of it used Unix-style environment variables and had no real programming logic in there. But we ended up settling on Go templates, and largely we settled on Go templates because it required us to build very little. We could basically start with core Go and build from there. This is the one thing I go back and forth on the most when I look back on Helm: did we make the right choice there? And earlier you said, the contrast between doing it right and doing it right now, that is exactly where Go templates fit in, right? Because we said, okay, there are languages like Jinja that are fabulous template languages, but there was at the time no Go implementation of Jinja. And so we did talk about what would it take for us to write an entire template engine, and do we wanna maintain that? 
Do we want to be able to pipe this out to any external templating engine, something we tried that went, in my opinion, very, very poorly? Or do we go with a default template language and hope that it works well enough? And we really ended up choosing option C on that one. And in some ways it served us well, because we haven't had to maintain this huge body of code. But in some ways I feel like we had to cope with the template engine changing from under us when the Go developers decided that things were gonna work differently. The documentation honestly has been very poor, and we've had to really kind of, Matt could probably speak to this better than I. We've tried several different ways of documenting this, but it ends up becoming a matter of keeping the Go documentation in sync with the Helm documentation, and it's been an interesting chore. In the end, was it a mistake? I guess if I were in the same place again, I would probably make the same decision, but that is probably the number one thing that I still lose sleep over. And that's a really good insight, and I think actionable for people when they're trying to do technical decision making. You know what? There will never be perfect future knowledge, and so you have to do what is expedient and hopefully also offers you options. Yeah, and the thing that I look back on is a little different. I look back at the security angle. So you can sign charts and have provenance files, and it uses PGP to sign those, but it hasn't taken off publicly. I mean, there are some people who use it to a great degree for internal security and things like that. But when you go get lots of public charts, they're not signed. You don't have that provenance, that you know who you're getting it from, that you get that security angle. And I would love to have seen something that made it more accessible to people and made it easier for them to sign their charts, to verify them, and to make that security angle more widespread. 
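To make the template-engine discussion above concrete: a minimal, self-contained sketch of the path Helm took, starting from Go's standard text/template package and executing a manifest template against a values structure. Helm layers the Sprig function library and chart conventions on top of exactly this; the manifest, release name, and value names below are illustrative, not taken from any real chart.

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// render executes a manifest template against a values structure.
// This is the core of what Helm's engine does with "core Go":
// parse with text/template, then execute with the chart's data.
func render(manifest string, data map[string]any) string {
	tmpl := template.Must(template.New("manifest").Parse(manifest))
	var sb strings.Builder
	if err := tmpl.Execute(&sb, data); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	// Illustrative manifest; .Release and .Values mirror Helm's
	// conventions but this is not from a real chart.
	manifest := `apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  greeting: {{ .Values.greeting }}
`
	data := map[string]any{
		"Release": map[string]any{"Name": "my-release"},
		"Values":  map[string]any{"greeting": "hello"},
	}
	fmt.Print(render(manifest, data))
}
```

The appeal was exactly what's described here: almost nothing to build or maintain, at the cost of being coupled to whatever the Go team does with the package.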
I don't know how to make signing more accessible, but that is one of those things I would like to see. And I remember telling Matt, it's gonna be hard for new developers to come to it, but they're all going to see the benefit of doing this, and by default we're gonna see 90-plus percent of people signing their charts. I would guess that now, four and a half, five years later, we're still at what, maybe 2% or 3% of people signing their charts, and even that might be reading it high. Yeah, yeah. So if we can improve that security angle, maybe that's something to try to figure out for Helm 4. So what's going on with Helm right now? I know I've been giving talks and writing blog posts, and we have a workshop that we'll be doing about the Helm v2 to v3 migrations. That's very front of mind. But I would love to hear from you folks. What do you think are some of the right-now things present in the Helm project that you're focused on? So I guess the first thing I'm gonna talk about is charts. So Helm had this period of huge growth and popularity, and we had the stable and incubator charts repositories. And we had so many charts in them, and quite frankly, maintainers would come in and try to help maintain them. And we went through periods of burnout because there was so much activity. There was so much to do every day. And that's where we came in. There was lots of automation that was added, but it still wasn't enough. We had maintainers burning out, running out of time. And so instead of having this one repository, which was really example charts that morphed into, hey, here's the common repository, we moved to a distributed model where you have charts everywhere. And then we had the Helm Hub, because once you move everything distributed, you still need to be able to find it. And the Helm Hub, it just had so many users, and it helped them do that. But the software that powered it was originally designed for on-premise, smaller installations. 
And there are more things than just charts out there. And so the Artifact Hub project, which is a CNCF project, was started. And so now we point people to there, because now, with these things distributed from companies and organizations all over, we still have a central way of finding them even though we broke this up. But it was done so that people could maintain their charts themselves and not have to wait on a handful of maintainers on the Helm project. And it kind of allowed us to scale, because we, the maintainers, don't scale. And then those people who are interested in charts can work on automation tools to help all of those different people scale out their chart repositories, chart tests, chart linting, and things like that. Yeah, I would add to that two things. Artifact Hub has been, I think, one of the most exciting developments on the CNCF landscape in the last year, in part because the danger, as any community gets to be the size that the CNCF community is, is that we will begin to fracture by default. Everybody will naturally sort of drift apart, because we've all got our little fiefdoms that are all getting their own development. Artifact Hub was a great example of representatives from different communities inside of CNCF, different sub-communities, coming together and saying, you know, at the end of the day, we all have to figure out how to get our particular software packages in the hands of our users. That's a common problem. Can we do a common solution? And so I think the work that Matt Farina and many others have done in unifying different projects around the idea of the Artifact Hub, and then getting Helm migrated over right away as one of the first four or five projects, has been a sort of very low-key victory for CNCF as a whole. And I'm still very enthusiastic about that. 
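For listeners who haven't used the distributed model yet, a quick sketch of what finding and installing a chart looks like day-to-day in Helm 3. The `examplecorp` repository name and URL below are placeholders, not a real endpoint:

```shell
# Search across the whole distributed ecosystem via the hub:
helm search hub wordpress

# Or add a publisher's own repository directly and search just that:
helm repo add examplecorp https://charts.example.com
helm repo update
helm search repo examplecorp

# Install straight from the repository you added:
helm install my-release examplecorp/some-chart
```

The point of the model is visible in the commands: discovery is central, but hosting and maintenance belong to each publisher.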
And I would like to reinforce another thing that Matt Farina said, which is how important this idea of switching to a distributed chart development model is gonna end up being. One of the big insights that the chart maintainers had was that many of the more vibrant packaging communities don't require central, authoritarian editorial control over things, right? If you look at NPM or any of the major programming language package management systems now, they're largely developers self-motivating and self-curating their own stuff, and the core maintainer roles are really more about keeping the infrastructure running. And I think that moving from a centralized chart repository into those kinds of areas will not just protect Helm developers from burnout, which has been a very big problem, to be honest, for every single one of us. But it will also really empower the community at large to be able to take on a role of curating, but also rapidly building their own charts and their own packages for the things that they're actively developing on. As the Helm project has grown and you've realized that it doesn't make sense to try to do everything centrally or on a small scale, just kind of making the one-off call here and there, you also have, the project has introduced the Helm Improvement Proposal, or HIP. And is it fair to say that these HIPs don't lie? Like, what are we doing with the improvement proposals? So that was started by another Matt on the project, Matt Fisher. And they're kind of like Kubernetes KEPs, Kubernetes Enhancement Proposals. When you're a fast-moving project that's just figuring things out, you can just put up pull requests or you can file an issue or just throw a proposal out there. 
But now, whenever you have a big change on a mature project, just like Kubernetes has, you want to take the time to think it through, to look at the implications, to understand who the end user is and their impact, because there are so many users of Helm. We don't want to hurt the 80% to please somebody in the 1%. And so we need to think through features and how they're implemented. And so the HIP process lets us look at something, propose something, talk it through, analyze it, add details to it before we ever go create it. Because we've had many cases where somebody comes with a pull request for a neat idea, and it's a lot of code, and we want them to rewrite it because it doesn't fit the mold. And I feel bad for them when they have to go rewrite it, or when they've put a lot of code into something that may fit better as a plugin, or as something that should live as a different layer on top of Helm and use Helm as a dependency. And this gives us an opportunity to think it through beforehand and give good feedback and curate it for any big ideas. Yeah, I very much like the way you frame it as something that is intended to be more respectful of our many contributors, because, reiterating, it's really hard to say to someone, I know you did a lot of work, but there are a number of things here that we need to consider before doing that. Do you mind rewriting several thousand lines of code? Also, I've liked the way that the HIPs have become a form of documentation, so that we can point people toward a very definitive explanation of how it is supposed to function and what it's intended to do. I believe that one of the things we're gonna notice in the future is that HIPs prove invaluable when we start dealing with bug reports and we're trying to figure out if the software should have been doing X according to the HIP and wasn't, or if it was really more of a, like, sotto voce feature request or something like that. 
So I think this is gonna be a good thing for the community going forward, because it's gonna give us not just assuredness that we're doing the right things in the PRs, but also the benefit of being able to refer back to something in writing that's definitive. All right. I was gonna say it also helps future maintainers. One of the things when you come in to be a new maintainer on a project is you've gotta figure out what's going on. Why are we doing things this way? What's the history behind it, right? Because you may not know, and I know that because I've stepped into more than one project after it's already been started and had to understand: how did we get here? Why did we make those decisions? And I'm reminded of a HIP you're working on, Butcher, around CRDs and the pain points, because we've had more than one go at how do we figure out and try to solve this, and where are those hard spots? So we can try to solve those hard problems while thinking through cases for people who aren't like me but are really relevant to the project. All right. And another area where we're really seeing the HIPs work well: I think, Bridget, your HIP number three, governance, is about to get merged or is already merged. Turns out you gotta write it down, and it's probably good to think through your governance for your projects out there before you have a problem. Just a pro tip. Farina, because you already jumped on that third rail of CRDs, do you wanna give us the short version, the CliffsNotes? What is going on with CRDs and Helm, and where do you think it's going? So CRDs are hard, right? Because if you delete a CRD, it deletes all the custom resources in your cluster, and that could cause production outages, right? 
Imagine doing some kind of Helm uninstall because your app has an issue, and next thing you know, you've deleted your CRDs, and then all your custom resources are gone, and all this stuff got deleted from your cluster, and not even just for you, but maybe for other tenants and things like that. There are real possible cascading problems. And so we're very cautious around CRD handling, because the gotchas can really hurt you. And so we're looking for ways to try to make it easier, while also trying to shield people from the gotchas. And so if somebody wants to sit down and talk about that next feature, we're happy to walk through it and see: will it help? Will it not? Does it require Helm 4? Can we fit it into Helm 3? But realize we're taking the conservative route, because breaking production, as Matt said earlier, is something we don't wanna do. And so we're being very cautious about it, but we do have a couple of different ways to handle CRDs today that do take that cautious route; anything else is gonna be very slowly worked through and experimented on. I think one thing that we should explain when we talk about the present and the future is how seriously we try to take backward compatibility in Helm, and following semantic versioning rules, which say that you do not ever introduce a breaking change without incrementing the major version number of something. So in many ways, when we start talking about Helm 4, we have these dueling instincts, which in fact were illustrated earlier in the talk, right? Where we have these dueling instincts between looking at Helm 4 and saying, finally I can fix that one function call that's had a vestigial argument for the last eight months. And those kinds of minor things that in the grand scheme of things are minor, but because of our compatibility guarantees we will not change until Helm 4. 
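For the curious, one of those cautious routes that exists today: Helm 3 gives CRDs deliberately conservative treatment via a chart's `crds/` directory. A sketch, with chart and file names that are illustrative only:

```shell
# CRDs placed in a chart's crds/ directory are created on first install
# if they don't already exist, but Helm never templates, upgrades, or
# deletes them afterwards.
mkdir -p mychart/crds
cp widgets-crd.yaml mychart/crds/

helm install my-release ./mychart    # creates the CRD if it's missing
helm upgrade my-release ./mychart    # leaves the CRD untouched
helm uninstall my-release            # removes release resources, but keeps
                                     # the CRD and all its custom resources
```

The design choice is exactly the trade-off being described: Helm trades the convenience of managed CRD upgrades for the guarantee that it will never cascade-delete a cluster's custom resources.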
So we've got those on one hand, and then on the other hand are the things like the CRD problem, where we're saying, okay, this is a very big issue that is impacting many, many users of the Helm ecosystem. We committed to something for Helm 3, and we're gonna keep plugging away at that for Helm 3, but does Helm 4 offer us some new chances to try something out there that we can't do today because it would break? And this is where a lot of the HIP that I had worked on was saying, look, we're stuck for Helm 3 inside of these fairly narrow guardrails, but in Helm 4, if somebody can come up with a really intriguing way to work around many of these limitations in the system, we can kind of go big at that point and work on thousand-line PRs that break interfaces and change the chart format and things like that. I'm curious if either of you know of other ones like that, big issues coming in Helm 4 where we could potentially do something big with charts, or with the indexing formats, or how repositories work, or things like that? Some of the things I think are coming in Helm 3 anyway, like OCI artifacts. If the OCI distribution spec goes 1.0 with artifact support, then we can push and pull Helm charts from OCI registries. But that can be a Helm 3 thing, and it's a future thing, but it's not something that requires Helm 4. It's not a breaking change. That's good, that's good. Butcher, I know you were talking a little bit before about how we have these theoretical HIPs and pull requests coming in from some amorphous person. What does that look like? Like, how does somebody get involved? Yeah, and this is one of the things that I think has really changed over time too. Taking that look all the way back to the beginning, every pull request was immediately considered, and within days we usually had things merged. But we've had to take a more metered approach now, and HIPs are really, really helpful. I'm really happy with the way they're going, but it does change things. 
We've had to kind of redefine the way that we're asking for people to join the community and participate in the ongoing development of Helm because of that. On one hand, it's fairly simple. We've added more templates to fill out when you file an issue to help us rapidly figure out what it is, because the volume of issues that comes in is high, and many of them are answerable by reading the documentation. But when people don't find it in the documentation, they'll file an issue, which of course is fine, but it's meant that we've had to ask for lots and lots of different things to be filled in in those issue templates. It's always very helpful to us when people jump in on the issue queue and say, oh, I can also reproduce this; here's some more detailed information about how I reproduced it. Whereas it's very unhelpful for people to say, me too, with no additional context. And it's been interesting how reliant we are getting on the detailed analysis that our users are able to provide. That is fabulous, and I'm thrilled whenever I see people following on and saying, here are more details about how to reproduce this, or even better, I traced it down to this area of the code. You might not be able to do the PR yourself, but giving us detailed debugging information often will help us get from the point where we're saying, oh, this is frustrating, I don't have enough, I'll come back and look at it two weeks later when I go through my next sweep, to, oh, okay, I've got enough that I can reproduce this now and get going. Yeah, some of the bugs we get now are such edge-case bugs that it's something in a dependency of Helm that we pick up from Kubernetes that only shows up on a certain operating system. And so those of us primarily on a different operating system may not even experience it, and then trying to troubleshoot that and get an issue filed or a pull request fixed somewhere in the chain can be difficult with some of those now. 
And a lot of Helm maintainers currently work at cloud providers or vendors. But if we don't have a maintainer who we can tap to say, take a look at the thing for your cloud, sometimes we just reach out to other members of the community, in the Kubernetes community and beyond, and say, hey, it looks like this is a problem that is specifically affecting the users of your product or your service. Can you take a look at it? And the community has been really responsive with that. So it's always nice when you can get the warm handoff instead of the "just go start over in some repo over there, we don't know." I have seen members of other communities come over into our issue and start talking to the user there, and it's like, oh, that's fantastic. Most people think of the Helm client, the Helm CLI, but it also has the library that other things build on. And we also have the website and the charts maintainers. We have all kinds of tools for using GitHub Actions to deploy your repository to GitHub Pages, or for testing charts that goes beyond what helm test and helm lint do. And we've got a number of different projects; there's ChartMuseum, and anybody can become a maintainer on each one of these projects. Once you're a maintainer, you don't just write code and push it up; you actually help other people be successful in contributing. And so we're looking for people across all of the code bases we have who are interested in doing that and willing to, and there can be things that aren't scary like the Helm client and aren't just code. They can be docs and other things as well. Yeah, I really appreciate that perspective, because I think that tech in general can often prioritize and elevate the code contributions, which are obviously important but, as you're alluding to, are also not all of the everythings. So, Butcher, I feel like you mentioned a little bit before, so I'm gonna bring it back to the beginning. 
You mentioned templating as something that maybe could change in the future. And I thought, spoiler alert, okay, what is your thinking about where Helm could go? Be futurist. Strictly in terms of templating, I mean, we have this chance again to revisit some of the crazier things we've talked about in the past. We have talked about doing embedded runtimes, Lua, JavaScript, WebAssembly, something like that, that can help build a chart dynamically. Not off the table yet. We definitely haven't signed up for it yet, but there are some things like that that are really exciting, where we might be able to really push some boundaries in some interesting ways and really open it up for people to do some deeper and more robust things with their charts as they're either rendering them or installing them. Fabulous. And Farina, I'll ask you the same question, because you mentioned security as a thing that we could do a lot with in that space. What's your vision for that? Well, I'd like the experience; the technology I haven't figured out, which is a hard thing to figure out, especially if you break away from PGP, which is what's most widely known. But a really easy way to sign and to verify and to share keys would be just a fantastic thing. That may end up being Notary version two, which is trying to figure out some of the problems we've got with Notary version one, and our OCI registries and our ability to use those registries. But it's about being able to figure out how we can make it easy, where somebody can sign without having to learn how to use GnuPG in order to sign, and try to remember what those command line flags are. I do it all the time and I still gotta look them up. How can we make it simple and yet work with hardware security and things like that? Because security is important. That would be a great thing to dig into. I love it. 
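For reference, the provenance workflow that exists today, the one whose experience we keep wishing were easier, looks roughly like this. The key name, keyring path, chart name, and repository name below are all illustrative:

```shell
# Signing a chart today means a PGP keypair managed with GnuPG.
# Packaging with --sign produces mychart-0.1.0.tgz plus a detached
# provenance file, mychart-0.1.0.tgz.prov:
helm package --sign --key 'Jane Maintainer' \
  --keyring ~/.gnupg/secring.gpg ./mychart

# Consumers check the package against its provenance file:
helm verify mychart-0.1.0.tgz

# Or verify as part of installing from a repository:
helm install --verify my-release examplerepo/mychart
```

The friction is visible even in this short sketch: the author needs a GnuPG keypair and the right keyring flags, and the consumer needs the matching public key, which is a big part of why adoption has stayed low.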
So let's wrap this with what we've learned, which lets us give dispatches from the future to people who are on this journey with their cloud native projects. And I'll start: it's a lot harder for us to deal with translations in our command line output, which we also generate some docs from, because that English is hard-coded into the code and output by the helm binary itself. And so while we do have people translating the website docs into Japanese and Korean and Chinese currently, and hopefully more languages (if you're into that, it'd be awesome; just translate a few pages, you don't have to translate the entire website), there's the problem of, oh gosh, people are starting to roll up with translations that we can't currently wire into the command line to output in the language of choice. I don't have a solution to that, but that is a thorny problem that I would love to see us work on in the future. So my advice to people who are on this path is: be very careful about what is possible and not possible to localize, because people are gonna wanna localize all sorts of things, including places where they can't. Just think hard about where you can send them instead, to output that can be put in their language of choice, instead of hard-coding English or whatever. Yeah, so Farina, what are you thinking along these lines? The thing that I keep coming back to is listening to our users, right? I think that's one of the biggest things that a project can do: listen to who your end users are and try to understand them, right? I remember a KubeCon, CloudNativeCon several years ago where we literally had appointments with different end users, and we sat down and we just interviewed them and we listened to them, and we took in what they had to say. What are they being successful with, and where are their issues? Because you wanna learn: what would they like to do that they can't do now? And so I think other projects should do the same thing. 
I think for every project that wants to be successful, listening to your users and understanding them is so important. And I think even as we roll into Helm version four, spending a good amount of time sitting down and listening to those users who rely on Helm will help us have more insight into where we should go and what we should do next. Well, for a talk about the past, present, and future, I suppose it fits in well to say I feel like I've come full circle on something here. Early on, we were obsessed with the user story of getting somebody started in Kubernetes. Kubernetes was young, moving fast, very conceptually difficult, and we wanted Helm to be the way you got from zero to your first Kubernetes installation in a couple of seconds, right? The Helm 2 period, and up into the mid-Helm 3 period now, has largely been focused on solving a whole bunch of other problems around that. But now what we're experiencing is a huge influx of new users to Kubernetes. And Kubernetes is not any simpler. It's actually far more complicated than it was several years ago. So once again, the thing that I find myself feeling most passionate about is: can we keep making, or even improve, this story for Helm, for that first experience of coming to Kubernetes and getting that first installation done in a few seconds, and then using that installation as a springboard to understanding what's actually going on in Kubernetes, how pieces fit together, and how they work? And I feel like we've got to leave it at that and say: this is a community that is interested in you. Come talk to us. We have all sorts of community engagement opportunities, and we'll see you on GitHub.