Let's go ahead and get started. I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, a conversation about Helm 3. I'm Karen Chu, Community Program Manager at Microsoft and Cloud Native Ambassador. I'll be moderating today's webinar. I'd like to welcome our presenters. We have Bridget Kromhout, Principal Program Manager at Microsoft. We have Matt Butcher, Principal Software Engineer at Microsoft. We have Matt Fisher, Software Engineer at Microsoft, and then Taylor Thomas, Senior Software Engineer at Microsoft as well. And before we get started, just a few housekeeping items. During the webinar, you will not be able to talk as an attendee. There is a Q&A box at the bottom of your Zoom screen. Please feel free to drop in your questions there and we will get through as many as we can throughout the webinar session. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful of all your fellow participants and presenters. And with that, I will hand it over to the Helm team to kick off today's presentation.

Awesome. Thank you so much, Karen. All right. This is very exciting. I feel like if you're joining a webinar about Helm, maybe you think, I want to know all the latest and greatest, and I already know a lot about Helm. But maybe those of us who are joining or new to Helm would like the elevator pitch of, like, what even is Helm? So let's get started with Matt Fisher. Kick us off. Tell us something about Helm.

Hi, everyone. I'm Matt Fisher. As Karen mentioned, I'm a software engineer over at Microsoft. So Helm itself, for those who are unaware and who are coming to it for the first time and have never been introduced to Helm: Helm is basically a package manager for Kubernetes. It was first released back in 2015.
And I have been both a user of Helm for a very long time, for Helm 1 and parts of Helm 2, and then I became a core maintainer somewhere around Helm version 2.2, 2.3. So that's the introduction that I have.

Yeah. So in terms of this package management for Kubernetes, I mean, that sounds exciting. And then we look at some of the numbers and we think, all right, it's the third most popular CNCF project right now. There's over a million downloads a month. But what does this mean in terms of what people are using this for? I think that probably, I would say, Taylor, Butcher, one of you folks can jump in and tell us: what exactly does Helm solve for people?

So what Helm exactly kind of solves for people is that, when we were first talking about it way back in 2015, we were building this platform, and it was basically a product that would be built on top of Kubernetes. And what we were running into was that we were trying to ship this application, or ship this product, to our customers and to our clients so that they could actually deploy this in their own Kubernetes clusters. And when we were first doing it, we were kind of doing some consultation with people and getting some feedback from them. And one of the things that we were finding out during the first consultations is that they were basically having to maintain a whole bunch of different files and state in order to upgrade to the next version or upgrade to the next release. So for software engineers, or even for operations people who just wanted a way to install the package and just wanted the latest version of that software ready to go for them, that's where Helm really came in, and that became that solution.
So what it helped us do was basically allow us to package up our application and ship it to customers so that they could install it, manage its state, and then do upgrades from there in a simple, package-manager way, which is doing an upgrade and then doing rollbacks or things like that. So that's really where the user story originally came from. And then going forward from that, it's been about continuing with Kubernetes' user stories and continuing with changes and advancements that have happened in the Kubernetes ecosystem. So the core story about Helm has really been about being able to manage that state, upgrading, installing, and making it easier for people to package and ship their applications to other people.

Yeah, that first version of Helm, which is sometimes referred to as Helm Classic because it never technically made it to 1.0, didn't even have a template engine built in. We were just completely focused on the idea that what we wanted was to be able to ship a bundle of Kubernetes manifests so that we could repeatedly install it in different places. And kind of midway through that development cycle, we discovered that templates, or some way of dynamically reacting to individual clusters, were really necessary if we were going to make that work long term. So we added a template engine, and we partnered with Google and then Bitnami and several other companies, because we were part of a small company called Deis at the time. We partnered with these other companies in order to start building something that would be more robust and really scalable, something that would last for years, we thought at that point. And so that became Helm 2.
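As an illustration of what that template engine enables, here is a minimal sketch of a chart template; the layout follows the standard chart conventions, but the value names are hypothetical:

```yaml
# templates/deployment.yaml -- a minimal Helm template sketch.
# The {{ ... }} placeholders are filled in from values.yaml (or
# --set overrides) at install time, so the same chart can react
# to each cluster it is installed into.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```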
The big change in Helm 2 architecturally was that we added a server-side component called Tiller, because at the time, when we looked at the way people were doing Kubernetes clusters, it looked like they were going to be centrally managed, and we thought that the in-cluster component kind of followed the Unix paradigm of having a root user that can install all your packages, and only the root user installs packages. And so the difference from Helm 1, which was really, as Matt Fisher kind of intimated right at the very beginning, was that we were just thinking, how do we repeatedly install the same manifests? And then as we got going, we discovered that there was a whole management layer on top of that, and that's kind of where we went with Helm 2.

Yeah, absolutely. That is really fascinating, because this is kind of the evolution over time. And we have a question, too, that I think is really germane to this, which is: wait a minute, what's the difference between something like Helm and, say, CloudFormation or ARM templates? What is templating your infrastructure versus your applications? I'm not sure if maybe Taylor wants to weigh in on that, kind of say what is the difference between when you would use Helm versus anything else?

So for using Helm, people have actually used it in some really creative ways. I think initially it was intended to be for applications. So many people, if you have used Helm, have done something like install the NGINX ingress chart, or our favorite one that we use for testing and a lot of people use in production, WordPress. You install these user-level applications. But also, I've been both a user and a maintainer of Helm for quite a while, and with past projects, we've used Helm to, like, configure the cluster and trigger things like automatic upgrades.
And so really, it's very flexible, and how people have used it kind of shows that. The evolutions that we've had coming into Helm 3, the latest version that we released a few months ago, came from just how different people are using Helm and how they found it an incredibly helpful tool in their different projects and businesses, with whatever they're doing. So really, it's kind of used for a lot of different things. The majority of people package it up for applications, but there are a lot of infrastructure uses that I have seen as well from various members of the community.

Yeah. And I think one way we've kind of differentiated between the layers where CloudFormation and ARM work and the layer where Helm works: initially it was fairly easy, right? Helm really does only do the Kubernetes part. But sort of the mental model we've used is that there's a layer of infrastructure that you set down, and Kubernetes is like the top layer of that infrastructure in our sort of mental model, right? And then the Kubernetes API becomes the interface between the infrastructure that the operations team has chosen to lay down and the development team's applications. Which is basically exactly the kind of thing Taylor was talking about. So we had envisioned applications as being the prime thing. As Kubernetes has matured, a lot of infrastructure concepts have bubbled their way up. But again, you still have that same division. Consider a horizontal pod autoscaler, or even the volume system in Kubernetes, right? The infrastructure team will still select what kinds of volumes can be mounted in a Kubernetes cluster, what kinds of autoscalers fulfill the horizontal pod autoscaler contract. And then there's another DevOps and a developer team who choose from a preexisting menu which ones they're going to use. So we've used that mental model for Helm.
It's why we have never tried to take Helm from Kubernetes into the infrastructure operations team concepts. It's also why, when you look at other technologies that this team here has built, you'll see things like CNAB and you'll see things like OAM, the Open Application Model. They both make those same distinctions at the same level, so that we get kind of a conceptual purity, but they are targeted toward handling different things. So CNAB, for example, another open source project, is able to install things at multiple levels: you know, lay down one layer with ARM and then lay down another layer with Helm, or something like that, respecting the division of labor, or the division of layers, I should say.

Nice. So it seems like the answer to just about every question out there is, well, it depends. I mean, well, it's configurable. And I think that's true here, but I think Taylor alluded to something I want to dive more into, which is Helm 3. I've been out there as a PM myself in this space. I've been giving a lot of talks, talking to a lot of customers about Helm 3. I think there's a lot of excitement in this space, but I think Helm 3 is also, and I don't just think, I know, that it's a pretty big departure from Helm 2 architecturally. So let's talk a little bit about the decision making. Why did you decide to change all of the everythings?

I can kick that off. So sometimes when you look at it, it does seem like we changed everything. If you have paid attention at all in the code, if you've been looking around the code at all, there's actually a lot that's still the same. But what we really tried to do as we came into Helm 3 was address all the different changes that had happened in Kubernetes. One of the main things that people ask about, and we even had the question in the Q&A, was Helm 3 and Tiller. We removed it, and it was amazing how many times we got people to cheer for that at various events and things. And we get it.
We understand why, because of how the security model evolved with Kubernetes, with RBAC and CRDs and all these other things that are out there. It just kind of became out of date. It was an assumption that we made, as Butcher explained, but then as time went on and Kubernetes evolved, we realized that it had to go. And so Helm no longer has that client-server model, which serves two important purposes. Number one, it simplifies the security model. Also, it simplifies a user's interactions with Helm. So instead of debugging whether something happened on the Helm side or the Tiller side, now it's just one single application. And it also helped when we refactored the SDK that can be used in Go; it made for a simpler interface for everybody to use. So we didn't necessarily change everything, but we really streamlined, based on basically a couple years now of people using Helm in production to do some pretty crazy things.

Yeah. And I would definitely say that it's a sign of maturity in any project, open source or otherwise, if you can remove a major component, because we've all been there with the user stories that say more, more, more and add, add, add. And then we end up with, you know, a Frankenstein that we get to add more to. And I'm really excited that the Helm community decided that removing a major component was something that they wanted. Butcher, do you want to address a little bit of how we decided that that was the right direction?

Yeah, I mean, I think one of the things I'm most proud about in Helm 3 is that kind of thinking. We did have some big features in mind for Helm that would have changed the way Helm worked and added new layers of sophistication.
Most of those we ended up removing from the roadmap, primarily because what we heard people asking for in the community, over and over again, was practical solutions to problems that occurred over and over again in their clusters and in the systems that they worked with, developed on, and managed. And so what I am proud about is that most of the features that really are the billboard features for Helm 3 are practical problem-solving tools that don't change a lot of the philosophy behind Helm, but change some of the ways you can do things and give you a little more flexibility. One example of that is library charts. So one thing we noticed fairly early on is that Helm worked well for small and discrete charts. But I think OpenStack was really the first one that started to push us in directions we had never anticipated before. We needed highly reusable chunks of templates that could be used to stand up very, very sophisticated topologies. And we didn't, in Helm 2, have a way of saying: here's a piece of shared template code that you can move around from one chart to another and anticipate that it will always be the same. We tried to address this problem in Helm 3 by adding a concept called library charts, where the chart itself can't be installed, there's nothing there to install, but it provides a number of features and templates that can be reused by other things, so that you can get kind of common-layer scaffolding. So that was one of my favorite features that we added in. Fisher, what's your favorite thing that we added in?

Yeah, I think actually the removal of Tiller was one of the largest things. It was a big undertaking, and it was a huge collaborative effort, I think, understanding those use cases.
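To make the library charts mentioned above concrete, here is a rough sketch; the chart name and template name are made up, but the `type: library` field is what tells Helm 3 the chart cannot be installed on its own:

```yaml
# Chart.yaml of a hypothetical library chart
apiVersion: v2
name: common-helpers
version: 0.1.0
type: library

# templates/_labels.tpl -- shared template code that consuming
# charts can pull in with {{ include "common-helpers.labels" . }}
{{- define "common-helpers.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
```

An application chart would then list `common-helpers` among its dependencies and reuse the named template wherever it needs that shared scaffolding.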
I recall the first Helm Summit, actually, the one hosted in North America in Portland, was a fantastic kickoff for a lot of those design discussions. So I really enjoyed those conversations, and having those design discussions with the community on what they're doing with Helm 2, what their use cases are, what some of the issues are that they're running into, and how we can help them address those. That was one of the fantastic things. I think another part of the story that I'm really proud of, that we took some effort on this time around, was refactoring the actual Go API that's inside of Helm itself to be able to be consumed and used. We were able to basically refactor the internals of Helm in such a way that you could use it and consume it as a Go client. The way the communication worked in Helm 2 was that you had a Helm client that communicated over gRPC to Tiller, so you had to basically set up a Kubernetes connection and then set up a gRPC connection and do all the communication over there. Which is a bit of a barrier to entry for many people trying to build Python clients or Go clients or Rust clients, whatever happened to be your language of choice. In Helm 3 that has been vastly simplified, such that you can actually build a Go client quite simply with a few packages, and you can even pull out some parts: you can unpackage the chart and inspect it, or you can do different things with that. Certain use cases with the Go SDK have been improved there. I'm really excited about that feature, and I've been continuing to monitor how other people are using it today.

Yeah, there's so many exciting features. One of the things that you alluded to was library charts. Can we talk a little bit about what's happening with chart repository stuff? Because that is also a change.

Yeah, I can talk about that one for a second here.
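As a rough sketch of what that simplification looks like in practice, the Helm 3 Go SDK can be driven with just a few packages from `helm.sh/helm/v3`; the chart path, namespace, and release name below are placeholders, and this assumes a reachable cluster via the usual kubeconfig:

```go
package main

import (
	"log"
	"os"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart/loader"
	"helm.sh/helm/v3/pkg/cli"
)

func main() {
	settings := cli.New()

	// Point the action configuration at the target namespace.
	// This replaces the gRPC connection to Tiller from Helm 2.
	cfg := new(action.Configuration)
	if err := cfg.Init(settings.RESTClientGetter(), "default",
		os.Getenv("HELM_DRIVER"), log.Printf); err != nil {
		log.Fatal(err)
	}

	// Load and inspect a chart without installing anything.
	chart, err := loader.Load("./mychart")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("loaded chart %s version %s",
		chart.Metadata.Name, chart.Metadata.Version)

	// Install it using the same code path the CLI uses.
	install := action.NewInstall(cfg)
	install.ReleaseName = "my-release"
	install.Namespace = "default"
	if _, err := install.Run(chart, map[string]interface{}{}); err != nil {
		log.Fatal(err)
	}
}
```

Note how loading, inspecting, and installing are separate steps, which is what makes it possible to pull out just the parts you need, as described above.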
I really wish Martin Hickey was here on the call with us as well; he was someone who was working on some of the library chart stuff. And then also Josh Dolitsky, who was really pioneering a lot of the chart repository API stuff. So originally, when we were going into Helm 3, one of the things that we were really interested in was a better security story for everything. That was a better security story in regards to Tiller, but it also covers the entire deployment pipeline, from the time that you have the artifact, fetching it down, rendering it, and sending it off to the Kubernetes cluster. It doesn't stop at Tiller. There's the whole pipeline before that, and all that preamble that you have to kind of figure out. So one of the things that people were really interested in early on was more advanced authentication strategies, more advanced user stories for the chart repository API, that we weren't able to handle at the time. Basically, we had two decisions. One, we could invest a lot of time into the chart repository API and build something that could handle those use cases. Or we could take something off the shelf: we could see what other projects inside of the open source community had built, and see if we could not only build on top of them but also collaborate on the project and see where we could move forward together. And one of the things that we landed on was the Docker registry. It's one of those projects where they have an open specification for how you're supposed to store and handle blobs and manifests, not just things like Docker images; they have a whole API around how to handle arbitrary artifacts. And if you're familiar with the Docker ecosystem, you know that they have a ton of authentication strategies. They've been working at this for many, many, many years. So it seemed like a really good solution for us to start investigating.
So we were looking at collaborating with distribution; the Open Container Initiative is actually the group that owns the distribution specification and all that. So we were taking a look at how to expand upon that project and how we could actually experiment with that. What we have right now inside of Helm 3 is that we continue to support the existing chart repository API as it stands for Helm 2, and that continues to work in Helm 3. But one of the things that we're experimenting with is how we could use the distribution API, the Open Container Initiative's Docker registry essentially, as a new chart repository. And then we can enable things like JSON web token authentication. We can talk about OIDC. We can talk about better signing strategies for charts and all those different things. Not that these things already exist in there, but because there's already an API and a standard that is established, we can just build upon that and create that value without having to work out the nitty-gritty ourselves. We can just build upon what's already out there.

I love that, because one of the things that's most valuable in open source is being able to use these abstractions that have already been laid down for us and make things more interoperable, so that people don't have to reinvent the chart repository, or wheel, again and again. We have some good questions, and I know Taylor was coming up with some interesting answers to a couple of them. You want to take one of those, Taylor?

Yeah, so there have been a couple questions related to migrations and differences between Helm 2 and 3. For one of the questions about the differences, I actually sent a link to a blog post that goes into a full list of all the major changes, if you're curious about every single one. But coming into this, I'll just answer the question of converting from Helm 2 to Helm 3 and what that looks like.
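To ground the registry experiment described above: in the Helm 3.0-era builds this support is opt-in behind an environment variable, and the workflow looks roughly like the following sketch (the registry host and chart reference are made up, and the experimental command names may change):

```shell
# Opt in to the experimental OCI registry support
export HELM_EXPERIMENTAL_OCI=1

# Authenticate against an OCI/Docker registry
helm registry login my-registry.example.com

# Save a chart into the local registry cache, push it, and pull it back
helm chart save ./mychart my-registry.example.com/charts/mychart:0.1.0
helm chart push my-registry.example.com/charts/mychart:0.1.0
helm chart pull my-registry.example.com/charts/mychart:0.1.0
```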
So as we approached Helm 3, one of our biggest goals was to make it as painless as possible. People put a lot of work into authoring their charts, and people have built up huge infrastructures around having these charts and deploying their applications. So we wanted to make it as easy as possible. And so, for the most part, charts are backwards compatible, with a few exceptions. I guess the biggest one of all is the CRD changes. So CRD installs are no longer done in a hook; they're done as a pre-install step that is performed before any other templating or anything else is done. There's a whole bunch of technical reasons for that, and we can go more in-depth if people are interested, but basically that happens beforehand, and CRDs are no longer templated. And so what happens is you can convert over a chart to be a completely Helm 3 compatible chart, but still have it backwards compatible with Helm 2, by leaving your CRD install hooks in place and then adding those CRDs to the crds/ directory where they get installed from. The other thing is we have a Helm 2to3 plugin that was written by a couple members of the community, kind of spearheaded by Martin Hickey, who we mentioned earlier. And this 2to3 plugin is actually what allows you to convert your current releases in place. There's a dry run mode there. It comes with full documentation; the help text is fairly well detailed. And so you basically run the convert command with the name of the release, and it will move that release to be a Helm 3 compatible release, because on the back end the releases are no longer stored in one single namespace and their structure has changed. And so this plugin will go read all that data; it won't mess with your Kubernetes resources at all. It's just going to modify the Helm release record and then move it over for you.
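As a sketch of that workflow, the plugin-driven migration looks roughly like this; the release name is a placeholder for one of your own Helm 2 releases:

```shell
# Install the community-maintained 2to3 plugin
helm plugin install https://github.com/helm/helm-2to3

# Migrate Helm 2 configuration (repositories, plugins) first
helm 2to3 move config

# Preview the conversion of a single release, then do it for real
helm 2to3 convert my-release --dry-run
helm 2to3 convert my-release

# Optionally clean up Helm 2 data once everything is migrated
helm 2to3 cleanup
```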
You can also have it delete things for you if you want, or you can go back and manually delete the Helm 2 release on your own. And so converting is actually a fairly streamlined process at this point, which was a goal from the beginning.

Awesome. So we've had some questions also from a lot of people thinking about their migration from 2 to 3, and they want to avoid, you know, operational surprises. We talked about the backwards compatibility, and then a number of the CLI options have changed to make them more similar to what Kubernetes uses, like helm delete becomes helm uninstall, but helm delete still works as an alias. There are a few other things people can run into, though. Do we want to talk about chart dependencies?

Yeah. So some of the things that we have talked about have been largely about backwards compatibility. One of the big targets, as I think Taylor had mentioned before, was to not basically break the entire world and make it such that when people were migrating over from Helm 2 to Helm 3 they would have to rewrite their infrastructure from scratch. If we had taken the entire stable charts repository and said, I'm sorry, when you upgrade to Helm 3 you have to rewrite every single chart in existence that you've ever written, that would make it a really big barrier for people to migrate over. So one of the biggest goals was that backwards compatibility. Now, at the same time that we were trying to maintain that backwards compatibility, we were trying to move things forward in a way that made sense, simplifying some of the packaging APIs that have changed. So one of the things that we made a change for is what's called an API version bump.
So inside of your Chart.yaml, when you declare a chart with helm create or something like that, there's a little field in there called apiVersion. If you use apiVersion v1, that signifies that it was created with Helm 2; essentially, that was the old packaging format that was used before. In Helm 3 we were able to introduce a new API version, apiVersion v2. And that allows us to introduce some new features that are additional to the existing feature set that was available in Helm 2, for example, library charts and things like that. One of the things as well that you were mentioning, Bridget, was the dependency solution from Helm 2. When we first introduced chart dependencies, you could declare dependencies, install them, and then you would have the packages of a chart installed in what's called an umbrella chart, or even just a dependency being declared. Those would be declared inside of a requirements.yaml file, and so you would do a helm repository update, and you would eventually do a helm dependency update, and that would update the dependencies in your chart so you could install it. We had to introduce that as a separate file because of some backwards compatibility concerns in Helm 2. But one of the things that we were looking to do was to consolidate that and make the packaging format a little bit simpler. So in Helm 3, both types of formats are accepted. There's the requirements.yaml file that can be used, but inside of the Chart.yaml there is now a new dependencies field, and if it's found there it'll be read. That basically allows us to consolidate some of the features and cruft that we had introduced in Helm 2 into one solid user story moving forward for Helm 3.

Nice, awesome.
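Putting that together, a hypothetical Helm 3 Chart.yaml using the new packaging format with the inlined dependencies field might look like this (names, versions, and the repository URL are illustrative):

```yaml
# Chart.yaml -- the Helm 3 (apiVersion v2) packaging format.
# In Helm 2 (apiVersion v1) the dependencies below would have
# lived in a separate requirements.yaml file.
apiVersion: v2
name: my-umbrella-chart
version: 1.0.0
dependencies:
  - name: postgresql
    version: "8.x.x"
    repository: "https://charts.example.com/stable"
    condition: postgresql.enabled
```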
So we have further questions, of course, as people are figuring out what this move to Helm 3 is going to mean for them, and we have a question about template overriding, and I know Taylor has some insights into that.

Yes, I do have some stuff here. So in case people can't see what others are asking in the Q&A: there are several questions about how Kustomize and Helm work together and whether they can be used together, and then others have asked about overriding public chart templates, and just generally what our thoughts are on customizing. So I'm going to answer all those right now. We have been thinking a lot about this. At the latest Helm Summit in the EU, we had it in Amsterdam, we had a great talk from some of the folks over at Replicated, a startup in the Kubernetes space, and they had talked about this kind of difficulty, which I think we've all run into, where it's like: I need to tweak that one little thing, but it's not exposed as a value, and it shouldn't be exposed as a value, so what do I do? So we've thought a lot about this, and we kind of settled on this idea of something called a post render. I'm going to pull up that link and I'll drop it in the chat so everyone can see it. But basically, for these post render hooks we already have working code, with a Kustomize example, in a PR, and the proposal for it is what I'm going to link to in the chat. The idea behind this is that people want to make those last mile configurations, or they want to do some sort of a security audit, or whatever it might be, after Helm has done its job of rendering the chart. And so we are planning on adding (we haven't added it in yet) this post render support, which allows you to specify any arbitrary binary you would like, one that can accept all the manifests on standard in, and Helm will expect valid Kubernetes manifests back on standard out.
And so it will allow you to do things like Kustomize, because some people have always asked, are Kustomize and Helm competing? Kustomize and Helm are not meant to compete. And in fact, because so many people have found a lot of value in Kustomize, that was one of the driving factors around doing this post render, because so many people use it for those little tweaks and last mile configurations. And so if you look in that proposal and in the PR, you can see the code and a simple example, which you could adapt to something more complex, of how to use it with Kustomize. And this is specifically to address some of these things that people have asked about, where it's really frustrating to have to copy the chart, or, how do we use Kustomize with these charts but still get all the benefits of using them inside of Helm? So with this, your release will still be managed by Helm, but you'll be able to essentially shell out to another process. And if you're doing it in the SDK it's even more flexible, because you just have to implement an interface, and you can make anything be a post renderer as long as it implements that. So that's where we've gone, and we're hoping to get that merged; it should hopefully be out for 3.1, provided there are no big hiccups as we go through the PR review of the proposal.

Yeah, and I think one of the important things to point out there is the way we view the templating system and all of that in Helm: we're trying to give the people who maintain the charts the tools they need to expose what they think should be configurable to the people who use the charts. And largely that has been a fairly good assumption for us to make. However, this tool that Taylor is now introducing is designed to help experts cope with the fact that sometimes they would like to use off-the-shelf charts that don't necessarily expose all the things they need inside of their cluster.
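Based on the proposal as described, the contract is that Helm pipes the fully rendered manifests to the post renderer's standard in and takes whatever comes back on standard out as the final manifests. A hypothetical Kustomize wrapper might look like this sketch (the file names, and the eventual CLI flag, come from the proposal stage and may differ in the released version):

```shell
#!/bin/sh
# kustomize-wrapper.sh -- hypothetical post renderer script.
# Helm writes the rendered manifests to stdin; whatever this
# script prints to stdout is what Helm actually applies.
cat > base/helm-output.yaml   # assumes base/kustomization.yaml lists helm-output.yaml
exec kustomize build base
```

With the proposed flag, it would be invoked as something like `helm install my-release ./mychart --post-renderer ./kustomize-wrapper.sh`.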
And so this gives a way to do last mile customization, using tools like Kustomize or whatever other tools you use, while still kind of respecting the different personas we have: the chart builder persona, the typical chart user persona, and then the advanced chart user persona. And so that's one of the reasons why we're really excited. We've tried several different ways of solving this problem in the past, and most of them have not worked out well. We had stuff that was in the code for years that never got used, that was removed in Helm 2, and nobody has even noticed, because it was just too complex and too convoluted to be usable. The solution that Taylor's new PR will introduce I'm really excited about, because I feel like this is one that keeps it simple for all the simple use cases but introduces just the right layer of flexibility for someone who wants to take their chart usage to the next level.

I think it's amusing that there's a question mentioning service mesh, just because before preparing for this I was spending a bunch of time on some service mesh related stuff. It turns out it is the same people working on this stuff, and one of the things that this team has worked on is called SMI, the Service Mesh Interface, and the idea is just to make all the service meshes that you know and love interoperable, to get everyone to play well together. And so, yes, that is something that we're not covering in this webinar, but it definitely is relevant to this space, and I know, Butcher, you have some insights on that too.

Yeah, I was just going to point out that from our perspective, and this actually may answer some other questions that came up as well, our goal with Helm was that anything that gets exposed over the Kubernetes API should be capturable and usable inside of a chart.
So SMI, the Service Mesh Interface, defines a particular Kubernetes YAML format that you can use to declare how you want to use service meshes; all of those work inside of Helm. The Open Application Model (OAM) stuff that was released around KubeCon time, all of that works in Helm. Istio you can configure in Helm. And that's been kind of our philosophy: as long as the Kubernetes API will expose it, and we can use that same YAML or JSON format to load things into the Kubernetes API, it should be capturable and configurable inside of charts.

Yeah, so we've had a lot of discussion about migration. What if someone's interested in Helm 3 but they're, you know, still using Helm 2 and working on their migration to Helm 3? How long do they have? What does the support picture look like? Fisher, do you want to tell us the good news?

Absolutely, I'm happy to answer that question. So, when we were initially discussing this, and this was about, I would say, four months before Helm 3's official announcement, we started discussing how long we want to keep supporting Helm 2 and how we want to support it. So we were discussing with people from the community how they were using it and what their comfort level of migrating over was. A large number of people were saying that they were just happy to get rid of Tiller, move forward with that, and just migrate en masse to Helm 3 as soon as possible. We even had people who were migrating onto the alphas, which was very interesting to see, because we were not guaranteeing backwards compatibility. That being said, a lot of people were really excited about that. However, there is a large subset of unspoken users, who either work in enterprise or are currently busy with other work, that we had to also discuss and talk to as well. And as many people know, they're not necessarily prone to making massive breaking changes or moving on to the next major release anytime soon. They want
They want to make sure the product is tested, that it passes their security compliance checks, and that they can migrate forward without breaking their entire workflow or their product. So initially we discussed a six-month window, but then we had a good discussion, and actually, Bridget, you brought up the use case really well on a dev call, that we should expand it to one year. Given the timeline of when Helm 3 was released, which as I remember was in the fall, six months from that point would have run through the following spring, and about three months of that window fell across the Christmas holiday season. A lot of enterprises and companies shut down their offices over the holidays; basically everyone goes on vacation, and then they have the huge Christmas rush for sales and things like that, which realistically would have given them only until mid-January of that window to migrate. That's a really short timeframe. So what we decided to do was give a 12-month support window for Helm 2.

What that entails, exactly, is a couple of different stages in the support timeframe. Moving forward from Helm 2.15, and actually it turned into Helm 2.16 (mostly for Kubernetes backwards-compatibility reasons that I won't get into), Helm 2.16 was essentially the final feature release for Helm 2. What that means is that as core maintainers, when we look at new features and new pull requests, we're not looking to introduce any new feature additions to Helm 2. If you have new features or new things you want to introduce, Helm 3 is a fantastic place to go forward with that. For
Helm 2, our support story has been this: for the first six months after Helm 3's release, we take a look at general bug fixes, merge those, and continue doing patch releases. For example, right now we're at 2.16.1, I believe, so the next release would be 2.16.2, or is it 2.16.3? I can't recall exactly which version we're on, there are so many numbers in my head, but we're maintaining that and going for the next patch release, and that will continue until the end of those six months. After six months, that's the end of the bug-fix releases. Twelve months after the initial release of Helm 3 is how long our window runs for big security issues, so anything that comes in on the security mailing list, like a security exploit, we can take a look at, address, and backport fixes for to both Helm 2 and Helm 3 depending on relevance. But after the 12-month window, Helm 2 is basically end-of-life: we're no longer accepting security fixes or bug fixes, and all our development moves forward to Helm 3. And I think that clock started on November 13th, 2019. Okay, yes, yes.

So yeah, I like that, because it acknowledges the reality that most large enterprises don't have the flexibility to deploy massive changes instantly. Who knew? Okay, so I think we had a question about CRDs that Matt Butcher was going to address. Yeah, sure, and I'll talk about that in a larger context. The main question was: in the current CRD solution, how come we don't allow templating of the CRD? So I'm going to back up a little bit and give you a historical explanation of what we've been trying to figure out how to do, and do well, and then get to that particular question. When Helm Classic came out, the problematic type for us was namespaces, and the reason why is that all the other resources
belong to a namespace, but a namespace was kind of an apex object, a top-level object, and if you did something like delete a namespace, you deleted everything inside of it. Sometimes that was the desired effect, but often we were worried that users would accidentally delete a namespace and lose production services. So we were very careful initially with how we dealt with namespaces, and we thought we had done a pretty good job of handling those kinds of top-level objects, until third-party resources came, and then went, and then CRDs came. For those of you who don't have a long Kubernetes history: originally there was a thing called TPRs, third-party resources. That spec wasn't quite flexible enough, so a new version of the spec was introduced and they were called custom resource definitions, CRDs, but both of them did essentially the same thing. If you've got a background in, say, relational databases, you're familiar with the idea that you lay down a schema and then use that schema to create instances of things, right? I create a table, and then I can insert and delete rows in that table. CRDs are essentially the schema language for the Kubernetes API: you can say, I want a new Kubernetes type that's going to be available over the Kubernetes API, and I want it to have these properties. Schema languages are difficult, because the way you manage a schema has deep, deep implications for everything else on the cluster. If I create a new schema and call it, say, SSLCertificate, then suddenly I've made it possible for all the different cluster users to begin creating things called SSL certificates and storing them in the cluster. When I alter that schema, what does that mean for all those existing instances? When I delete the schema, what does that mean for them? And you get into this very dangerous situation where you can make changes to this one object
in Kubernetes that have deep, unintended consequences for everything else. CRDs ended up being a very difficult case for us, because we wanted to protect people from scenarios like this: team A installs the chart that defines the SSLCertificate schema; team B comes along, sees that the CRD is already there, and starts creating their SSL objects; team A decides to delete that chart, which deletes the CRD, and team B loses all their SSL certificates. That's kind of our nightmare, horrifying case. So we have been very, very careful with how we support CRDs, and, well, I know people would like more active management of them, but we've tended to go with the most conservative possible way of dealing with it. When we did Helm 2, we used the hook system, because our backwards-compatibility guidelines said we couldn't add the kind of features we wanted without breaking the chart format. So we waited and waited, and when Helm 3 came along we introduced the crds/ directory. In these new Helm 3-style charts you can add a CRD declaration there, and Helm will load it into your cluster, but it won't do things like automatically delete that CRD; you have to delete them manually. All of this is so that we can prevent that SSL-style problem, where somebody installs the chart, then deletes it, and another team somewhere else gets hit by the ramifications of that action. Templating ended up being the most difficult case here, because it turns out that templating is relatively stateful in the context of chart rendering: when you render a chart, for the duration of that render you have a particular state that's shared across all of the different resources. The CRD is special, because the CRD has to be available before the rest of the templates can be rendered, which meant we had to make a very tough call.
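To make the crds/ behavior Butcher describes concrete, here is a sketch of a hypothetical Helm 3 chart; the chart name and the example.com group are invented. The key points from his answer: the file lives under crds/, it is installed before the templates render, it is not run through the template engine, and uninstalling the release leaves the CRD in place.

```yaml
# Hypothetical Helm 3 chart layout:
#
#   mychart/
#     Chart.yaml
#     crds/
#       sslcertificates.yaml   <- installed first, never templated,
#                                 not deleted on `helm uninstall`
#     templates/
#       deployment.yaml        <- rendered by the template engine
#
# crds/sslcertificates.yaml -- plain YAML only; a {{ .Values }}
# directive here would NOT be rendered.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: sslcertificates.example.com     # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: sslcertificates
    singular: sslcertificate
    kind: SSLCertificate
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```

For operators who prefer to manage CRDs entirely out of band, Helm 3's install command also accepts a `--skip-crds` flag that ignores the crds/ directory altogether.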
The call was whether we would allow the CRD to participate in the same segment of shared template-rendering state as the rest of the chart. We tried several different ways of doing it and found none that made us happy, and so, going with our mantra, we went with the most conservative solution we knew would work: do not expose CRDs to the template engine. Consequently, you can't template a CRD. If we, collectively, including anybody who's listening and wants to open some issues and PRs, can figure out a good way to do that without introducing this odd case where CRDs have access to only a subset of the features, then I think we would all be happy to entertain that and push it forward. We figured that if we started without templating, we left open the possibility of introducing templating as a non-breaking feature addition later in the Helm 3 life cycle; but if we had started with templating and done it wrong, we would have been stuck until Helm 4 came out. So that's how we came to it. Great question; sorry, that was a long-winded answer, but I think it's a great illustration of what it's like to work in a complicated and constantly evolving ecosystem like Kubernetes and still try to be, as has again been our mantra, safe and low-risk about things, because we don't want to break existing installations and we don't want to cause our users undue frustration. So if you do have a solution in mind, please head over to the issue queue and file something there, and we can get another round of discussion going.

Well, and something important... oh, go ahead, Taylor. Okay, sorry. Something to add on to what Butcher said: it's always easier to add later than to remove, because we're very, very serious about our backwards-compatibility guarantees. I think we've only had to make a breaking Go API
change once in the 2 series, and it wasn't nice, but it was one change and we documented it really well. We try really hard not to make breaking changes. So when we approach problems like this, we don't want to end up with another Tiller situation, where it was a good idea at the time, and then time advances and it's, oh no, that was really bad. We're trying to avoid doing that again. So just keep that in mind as we're doing this: when we take away something like that, it's not "woohoo, we're going to make everybody suffer," it's more that we want to make sure we're not assuming something that hasn't even been decided on by the community yet. Yeah, and what I was going to say goes right along with what Taylor is saying: it's easy for us to say "a Helm installation," and it sounds like something you could freely make changes to or decisions about, but the reality is, if a bank or a government or an airline is using this, we all care a lot about it working. We want to make sure that we, the open-source software maintainers, aren't making decisions that have knock-on effects that are completely unanticipated and really negative.

So this stuff is simple, right? Not complex at all. What does this journey look like? Fisher, I know you have some thoughts. Yeah, for sure. The best thing about being originally a Helm user and then becoming a Helm core maintainer is that I went through the whole ramping-up story myself, so I can give my personal perspective on how we ramped up. Way, way back, even before Helm Classic, when we were first working on this product that would eventually be deployed to clients, we were originally using a container scheduler called Fleet. For those of you who don't know, Fleet was a container scheduler based on systemd, written by the fine people over at CoreOS, and essentially
what it did was let you schedule a Docker container on one of many different machines. It was, I wouldn't say a rudimentary version of Kubernetes, but basically a distributed version of systemd that you could use across multiple machines. It had its flaws, but it worked, and it was fantastic for what we needed at the time. Now, when we were evaluating different container schedulers, the things we were looking at, oh goodness, I can't even remember the names right now, but there was Kubernetes, there was Docker Swarm, and then there was Mesos's Marathon; Marathon was the third. Nomad, maybe? No, Nomad came into the story late, after we were doing the evaluation; I think HashiCorp announced Nomad about six months later. So this was way, way back. When we were first doing the ramp-up there was no concept of a service mesh, none of that, so the story now involves a much bigger ramp-up. But for us it was really about aligning our existing development story with what we were going to do. We were deploying a product, a thing that needed to run across multiple different machines, and we were looking at different solutions at the time; Fleet was the first one, but then Kubernetes actually became that solution. So it wasn't that we looked at Kubernetes as something we were going to adopt regardless of our current architecture; our product's architecture was looking for a solution like Kubernetes, and it just fit the bill perfectly for what we were trying to do. So that's where our on-ramp story came from: we understood the concepts, we knew the problems we were trying to solve, and when we looked at Kubernetes, we actually went to a whole bunch of training sessions. A long time ago, when Kelsey Hightower was working for
CoreOS, he would do these "Kubernetes the Hard Way" sessions. We would go to the CoreOS office over in California and see those sessions live as he was doing them and demonstrating, and that was one of the most helpful resources we had at the time. Beyond that, basically throwing ourselves at the wall and figuring out how Kubernetes worked was the big thing. And that's really how Helm came to be: from my perspective as a user, we were able to bring really in-depth knowledge of how certain things work, like how you had to create a Secret before you could create a Deployment, because the Secret's values populated the Deployment's environment variables. Certain nuances and little use cases like that became things we understood over time as we used the tool day to day. So that was our on-ramp story. To make it less scary, I would highly suggest going to KubeCon, taking a look at some of the sessions, looking at other people's case studies and how they ramped up, and seeing whether that story aligns with what your organization is looking at and whether Kubernetes really is the solution to your problem. I think that's the right way to go about it. There are probably millions of ways to approach the problem, but that was our use case and that was our story.

Yeah, one other quick thing I'd like to add to that story: one of our core use cases, when we drew Helm on the whiteboard, was that we wanted to make it easy for people to get started with Kubernetes as fast as possible. You've probably heard Michelle Noorali talk about this: we had this Rails model in mind, and the cool thing about Ruby on Rails was that you could build a blog in
five minutes, but then gradually learn how Rails actually worked, what Active Record was, how MVC worked, and things like that. We wanted Helm to be very similar to that. One of the ways we had always hoped people would use Helm was to go out and find a chart that did something they felt comfortable with, like WordPress, install it, and then, like your high-school biology class, go dissect the Helm chart and see: oh, so this Service thing here is where I got a static IP address, and oh, so this Deployment thing here is what decides which images to run. So we do hope that's a way people find Kubernetes slightly more approachable: having that Rails experience of getting WordPress up and running in five minutes, and then going back and figuring out how everything worked.

Love it. All right, we're almost at the top of our time, so I'm just going to ask everyone to tell me: what has surprised you the most in this Helm 3 major-version release journey? Ooh, exciting. Or what are you most looking forward to now, or what are your zany plans and plots? Just something to take us out, starting with Taylor. Tell us something, Taylor. I've been really excited to see some renewed contributions from the community around long-standing features. We have all the ideas that came out of the post-render work; we have the lookup function that's coming, where you can look up Kubernetes resources and inject them into a template, which should be landing soon. So it's really cool to see all these coming up again, and it's been really fun to see how having a better-factored SDK and CLI has helped people with more advanced use cases. Awesome. All right, how about you, Fisher? From my perspective, I'm really excited to see what the community comes up with. I really like seeing the use cases people come through with.
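Going back to the chart-dissection idea Butcher described a moment ago, the rendered output of a typical chart looks roughly like the sketch below. This is a generic illustration, not an excerpt from the actual WordPress chart, and the names are invented; the comments mark the two "dissection" points from his answer.

```yaml
# Generic sketch of what dissecting a chart's rendered output reveals.
apiVersion: v1
kind: Service                 # "this Service thing here is where
metadata:                     #  I got a static IP address"
  name: blog
spec:
  type: LoadBalancer          # asks the cloud provider for an external IP
  selector:
    app: blog
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment              # "this Deployment thing here is what
metadata:                     #  decides which images to run"
  name: blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: wordpress:5.3   # the image choice lives here
          ports:
            - containerPort: 80
```

Reading the rendered manifests this way is exactly the "biology class" exercise: install something that works, then trace each behavior back to the resource that produces it.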
In a lot of the hallway tracks I go to at conferences, or at the Helm Summits we've been hosting, it's been really fantastic to hear from community members, core maintainers, advanced users, and even people who are just getting started with Kubernetes and Helm. It's always awesome to hear their use-case stories, what they're ramping up to, what problems they're hitting. So I'm really excited to see, in the coming months, how people are using Helm, Helm 2 and Helm 3, what they're using and what they're not. That's what I'm most excited for. All right, Butcher, what do you have for us? The thing I'm most looking forward to right now is wrapping up the graduation process for Helm, to make its way to being a fully graduated project in the CNCF. We probably could have done that a while ago, but we were far more interested in getting Helm 3 out the door, and in the process of doing that we made sure we checked all the boxes. The security review for Helm 3 was excellent; it was so much fun to work on, and that was really our last big milestone. So I'm looking forward to dotting the i's, crossing the t's, and getting that graduation process done.

All right, fantastic. We're just about out of time. If you head over to helm.sh, you can check out the weekly community dev calls; you can check out GitHub; pull requests are very much of interest to us. And, yeah, Karen, you want to take us home? Sorry, I was muted. Thank you all, and thank you to the Helm team for a great presentation. That is all the time we have for today; thanks for joining us. The webinar recording and slides, well, there are no slides, it's a conversation, but the webinar recording will be online later today, and we're looking forward to seeing you at a future CNCF webinar. Have a great day, everyone. Thank you. Bye.