Hello, hey Chris, are we in either app? Hello, it's Alexis here, can anyone hear me? I can hear you. Great, thanks Erin. You're welcome. Congratulations on being, I think, the first female technologist to do a keynote at the Red Hat Summit. That is correct, thank you. And luckily it worked. You don't want to be the first and have your demo bomb. Right, okay, so I don't see Chris, who normally kicks us off, so I'm just going to start in about one or two minutes. Chris said he had to reboot Chrome. Himself or his computer? I'm only telling you what he Slacked me. Okay, aren't they becoming more one and the same these days anyway? Personally, I think that's true. Hello, hello. Hello. Hey Alexis, good to hear you. Hey man, you ready? Hey Alexis. Hey there, I wouldn't say we've been waiting for you, Chris, we haven't quite been waiting for you, but if you- Five minute warning, what's going on? It's good, it's good, it's good, it's fine. Take your time. Make sure those peripherals are connected. So Bryan Cantrill, Brian Grant? Doesn't appear to be on yet. He's commented on a Google Doc, so it looks like he should be online, but we'll give him a couple more minutes. Quinton can't make it because he's in China. Is Ken, Brian or Sam on the line? Hey Chris, Ken here. Okay, good to hear from you, you're on the phone. Cool, thanks. And I just see Brian Grant came on, so beautiful. All right, we've got quorum, it's five minutes past. Alexis, do you want to kick things off and get started? We've got a packed schedule, so. Thanks Chris, I'll try and be quick. I just want to say a few things. First of all, I think KubeCon was really marvelous, well done, Dan and Chris, and the whole team. Thank you very much. It was an amazing venue, a really quite pleasant place to be with nearly 5,000 other people, much less stressful than Austin. Secondly, I know it's not on the agenda, but just to remind people that we're pretty close to finishing a cloud native definition. If you want to be part of that process, we're nearly over the line with a document. If you can't find the link, just email me. And one more thing before we go into the main slides, an appeal to any project owners. We're still looking for opportunities to hear from project owners, open source projects, or close observers about areas where you think the CNCF can help you, the TOC especially, because this is a TOC call. If you have specific asks, send them to me, Dan, and Chris. We really, really want to help you as much as we can. So now we're going to go straight into the slides, number six. Chris, can you just run us quickly through your highlights of KubeCon, please? Yeah, I mean, as you're aware, this was our largest KubeCon yet. There was a stupid amount of announcements at the conference. From a TOC perspective, Quinton has kind of rebooted the storage working group and is now the lead and sponsor of that. We'll hear from him on the next TOC call about how that's going. They've started the work on a white paper, which is great news. And there were a variety of product-related launches. The CloudEvents team shipped 0.1. There were a variety of operator launches from all over. So it was too much to list. I just basically tried to prune what was best from the mail thread on the TOC list that I linked off there. More importantly, on slide seven, we're just looking for feedback from the community. We have two big events coming up in the year. We've got Shanghai and then Seattle. So we want to make these events better for you and the community.
Please let us know. And I'd love to kind of take some time, maybe a few minutes on this call, to hear from folks if they have ideas on how to improve things. We've been writing stuff down over the last week from people who have reached out to me. So consider this time to make some suggestions for our next conference. Yeah. Send them by email. Send them in the chat window if you prefer, or just shout out now if you've got anything urgent to say. I also want to just pick out from the highlights on slide six the launch of several CloudEvents implementations by, I believe, Azure and IBM, which is an unexpected dividend from the serverless working group's efforts to do some work there earlier in the process. So well done, guys. Thank you very much. Anyone want to shout out any major comments on CloudNativeCon and KubeCon, other than it's a very stupidly long name for a conference? Yeah. I mean, I've heard the intro and deep dive sessions were super appreciated by our projects, but give us ideas. We have time now, so please send them our way. One thing I heard a lot of that might be worth people thinking about is that with the growth of Kubernetes and the other projects in KubeCon, we're starting to speak more to end users, and it's starting to feel a lot more like a normal commercial software conference, which in some ways is amazing, but on the other hand can lead to sort of the loss of the original developer community vibe. It would be great for people to suggest ways that the conference or other conferences could continue to sustain developer-to-developer interaction as opposed to user-to-user or developer-to-user interaction. I think that this is important. Otherwise, it will turn into a different kind of conference altogether, which would be a shame. All right. Let's move on to slide number 8, the working group process. Several people have come in. Hey, Alexis. I was muted there. Yeah. Could I just jump in and say that I agree that the developer focus is a key strength, and that we really do want to avoid it just being vendors talking to each other, as a repetition of some other conferences, or just business pitches. I think the biggest strength that we have going for us is the co-chairs, that they come from the community and that they pick a program committee from the community. That's an independent rating of these talks. And so it's not Chris or myself or the board or anyone else that's actually selecting the talks. It's a relatively neutral process, as much as these things can be. And so I do just want to mention that the CFP for both Shanghai and Seattle, and they'll be integrated together so that you can pick one or the other or both, is going to open on May 21st. So we would love to have people begin thinking about the topics they want to be talking about this winter. Okay. That's all. Thank you. So slide 8. This is about working groups in general, which covers a number of different issues. Just to be clear, a lot of people have said, and I myself feel strongly, that the working groups were created to fill a need, but now that we've got a couple of them underway, we're starting to have requests for greater clarity around their shape. And I think it's very important that each working group has a clear mission and exit criteria. The working group process that Chris is chairing here is supposed to formalize that.
So the output of this will be a written document with guidelines for creating a working group, shutting one down, and making sure that people work on it and that it has a clear mission. And we've already had people contribute. Chris, is there anything else we need to say about this? Sure. Just a final call for any feedback on this. Matt Farina and other community members have given feedback. So I'm going to crystallize that into an updated draft soon. So check the pull request, but please get the feedback in by the end of this week. I'd like to finalize this soonish, because we have a couple of working groups that want to be proposed and it would be great to kind of have them go through this new process. Thank you. So I raised the issue of the birds of a feather groups, and the suggestion that came back was to use the working group model. But the difference I see is that the birds of a feather groups would not have specific exit criteria. So does the establishment of things like exit criteria change anyone's mind on whether we need a different kind of group? I think the most important thing is that expectations are always clear. I personally don't have a strong preference either way for different types of groups, so long as people know what they're doing, otherwise they drift. Yeah, I think exit criteria could be refined over time too. So I think we see the serverless working group doing this, and I think they're going to talk about this later today. Okay. Yep, that's it from that. Thank you. So slide nine, project proposals. We are building up quite the logjam here now around new projects coming up. I'm happy to see that a lot of these are sandbox projects. That's good. We'll probably spend less time on those and really drill into the incubation ones and try to have a high standard for incubation. Remember that if you get into the sandbox, you don't get all the press and marketing. It's basically just experimental. Alexis, the final thing for Telepresence, since we discussed it last week: I believe you and Camille have volunteered to be the TOC sponsors. Was there anyone else, before we kind of finalize this? Because it's an action item from the last TOC meeting. They've officially submitted a proposal too, which I linked. I do recall Camille offering to sponsor, and I'm happy to sponsor as well, having spent quite a bit of time with the team. Cool. That's enough for me to move forward then. Okay. All right. So CloudEvents and serverless. That's the next section. Is this a presentation? Yeah, this is Doug. This is a status update as well as a sandbox proposal for CloudEvents, if that's okay. Go for it. All right. Thank you. So jumping to slide 11, just to refresh everybody's memory on how we got to where we are. Back in March of last year, Ryan Scott Brown kicked off an email discussion about what the CNCF should do relative to serverless, if anything at all. As a result of that discussion, the TOC agreed to form a serverless working group to do some investigation and figure out what, if anything, the CNCF should do with serverless, what exactly serverless is, what its relationship is to the other as-a-service offerings out there, stuff like that. In November last year, we presented our outputs, which was a white paper defining everything I mentioned there, you know, what serverless is, what the use cases are, common architecture, stuff like that. It also included the landscape of the community out there.
So projects, tools, and services, and we keep a spreadsheet up to date as best we can about what's out there for people to use, without saying which are good or bad. It's just a statement of fact about what's out there. And then most importantly, we had a set of recommendations. What should the CNCF do going forward? Things like education, and looking for other serverless projects to pull under the CNCF umbrella. And in particular, what areas of, quote, harmonization should we focus on? With the first one being our recommendation to look at the eventing space. And so that's what the TOC agreed for us to look at back in November. So that started off this whole cloud events working group, or a subgroup under the serverless working group. And so we started that work back in December of last year. And then moving on to slide 12, where we are right now. We have weekly phone calls where we get around 30 people joining each call per week. I think that's fairly high. That's a good indication of the popularity. We have 37 different companies who have been involved at one point or another, of which I would consider 15 regulars. And by regulars, what I mean is, if you attended three of the last four phone calls, then you have voting rights. And that's what I consider a regular person or regular company on the call. And so I think 15 regular companies attending a phone call every week is pretty good. As Chris mentioned, we released 0.1 of the spec back in April, so about a month ago. And just for clarity, unlike other, quote, eventing specs that may have been produced in the past, we aren't necessarily trying to define what all events look like. That's been done before; we're not trying to do that. What we're trying to do is just define the common metadata, or envelope, that goes around your data. So if you actually look at that little snippet of JSON in there, you can see what we did: in the same way HTTP just defined some common headers, or pieces of metadata, that people can count on being there without having to understand the payload itself, that's what we did here, right? We defined some common attributes to help the infrastructure route the message more than anything else. But then the data itself, the last property in there, is where the event data itself goes, and we don't even touch that. It's just a blob, basically. And so we're staying out of that business. We're here just to help the infrastructure get its job done, for the most part. So along with the envelope, we've defined an HTTP mapping as well as a JSON mapping. And we do have others that are in the works. And the next thing we're looking at in the future is, okay, now that we have this, not necessarily behind us, but far enough along the pipeline, we can actually start thinking about what comes next. And so we are looking at what next project we want to work on in this, quote, harmonization area. And so we're having some discussions this week about different things we're considering, things like function signatures, and whether we can get some harmonization around orchestration of functions and stuff like that. We haven't decided yet, it's still up in the air, but that's what we're going to be looking at next. Slide 13, some of the highlights in terms of what we've been doing.
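As a concrete illustration of the envelope just described on slide 12, here is a minimal sketch in Go. The attribute names approximate the 0.1 spec; the event type, source, and payload values are invented for illustration, and the data field is left opaque, just as the spec leaves it.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CloudEvent is a rough sketch of a 0.1-style envelope: common metadata the
// infrastructure can rely on for routing, with the application payload left
// opaque under Data. Field names approximate the 0.1 attribute names.
type CloudEvent struct {
	CloudEventsVersion string          `json:"cloudEventsVersion"`
	EventType          string          `json:"eventType"`
	Source             string          `json:"source"`
	EventID            string          `json:"eventID"`
	EventTime          string          `json:"eventTime,omitempty"`
	ContentType        string          `json:"contentType,omitempty"`
	Data               json.RawMessage `json:"data,omitempty"` // opaque to the envelope
}

func main() {
	evt := CloudEvent{
		CloudEventsVersion: "0.1",
		EventType:          "com.example.object.created", // hypothetical type
		Source:             "/example/source",
		EventID:            "A234-1234-1234",
		EventTime:          "2018-05-01T17:31:00Z",
		ContentType:        "application/json",
		Data:               json.RawMessage(`{"key":"value"}`), // application payload, untouched
	}
	out, _ := json.MarshalIndent(evt, "", "  ")
	fmt.Println(string(out))
}
```

Roughly speaking, the HTTP mapping mentioned above carries these same attributes as headers, which is the sense in which middleware can route an event without ever parsing the payload.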
As I mentioned, we have a large cross-vendor team working on this stuff very, very quickly. I think we made a lot of good progress really, really fast. Lots of credit goes to the members of the working group, because we've had lots of ideas tossed around and everybody seems to really be anxious to come to a consensus as quickly as possible. And so they work very, very fast, and I congratulate them on that. As was mentioned, we got lots of good coverage at KubeCon EU. We had Kelsey mentioning us on the main stage, which is great. We had a serverless session by Austin, where he actually showed a demo of, I believe, 11 different companies involved in producing or consuming cloud events to show interoperability and that this thing is real, which I thought was really great. And it went over very, very well, I think. We had some interviews going on about CloudEvents at KubeCon as well. And in terms of milestones, we actually exceeded our expectations. I think we were hoping to just meet our 0.1 milestones by KubeCon. We actually achieved all the way up through most of the 0.4 milestones. So we made a lot of good progress really, really quickly. In terms of challenges, there are sort of two different categories here. One challenge for us is just agreeing on that basic set of metadata. Obviously, we have a lot of people in the group who are very opinionated, and they have lots of things that they want to get included. And so it's been a challenge for some of us to pull back and say, okay, really, what's the minimal set that we need just to get our job done without trying to boil the ocean? It's been a challenge for some of us, but I think we've come through for the most part. But we're not done yet, though. The other challenge is more from our users' perspective, in that they need to understand that we're not doing what people have done in the past, which is defining a common event format for all events. It's just the metadata to help the infrastructure get its job done. So people are still going to be expected to write the code to process the event itself, in whatever data format it comes in. So people need to understand what our goal is here, and that we're not trying to boil the ocean. Slide 14 is the next steps. Obviously, we're trying to get to 1.0 as quickly as possible. And we want to increase the number of vendors who are using it in production. I think right now we only have one who has officially announced support; I think that might be Azure. But as you can see from the list here, other people have it in their open source projects or will soon. And of course, we also want to propose CloudEvents to be a CNCF sandbox project, which I'll get into next. So we do actually have a proposal out there; the link is on the previous slide, slide 14. A lot of the stuff I mentioned in terms of status is already in that pull request, but there are a couple of things I wanted to mention that I didn't talk about already. We are working under the Apache Version 2 license. The governance model we're following is very much consensus driven, where we just try to get to consensus as best we can, come to agreement, get everybody on board. But if something happens and we can't actually come to agreement, we do have a process in place for how to resolve that, and it basically comes down to a vote.
And that goes back to those voting rights that I talked about earlier, where if you are from a company and you've been on three of the last four phone calls, then you get a vote. Luckily, I think we've only had one vote the entire time, and it was something kind of silly. It was: is it going to be cloudevents.io, cloudevents.org, or .com? That was it. On all technical decisions, we've been able to come to a consensus. So that's been really, really great. We haven't had anything else that needed to invoke the voting rules. We have your standard things: mailing lists, Slack channels, as I mentioned, weekly Zoom calls; issues and PRs are all tracked in the GitHub repo. Why should we, you know, join the CNCF or be a formal project under there? Obviously, because we were founded out of the CNCF, all of our members already share the goals of the CNCF, meaning cloud-native promotion, providing choice for consumers, interoperability, and portability of your infrastructure or your code. From our point of view, or at least from my point of view, I think we need to be under the CNCF mainly for what we get out of the CNCF, meaning from the members of the CNCF. We want this thing to be as successful as possible. And so we need a broad base of people to look at our work, evaluate our work, and use our work. And the CNCF, in my opinion, provides the best opportunity for that, because of the wide range of people, not just the vendors, but the user base, the consumers, all the various people we actually have access to, to help get their feedback on the spec, because we want this thing to be as successful as possible and as widely used as possible. And most importantly, to make sure it doesn't become vendor-specific in any way; even though we have lots of participation from other companies, we need the verification that people can actually use this in as many spots as possible. So with that, I know I went through everything kind of quickly, but I didn't want to eat up the entire time, because I know we have other projects we want to talk about. Are there any questions or comments? I have a question. So am I right in thinking that CloudEvents is an implementation, which is something that could therefore be supported by, for example, OpenWhisk, correct? So CloudEvents is just a specification, not an implementation. It just defines the metadata; if you go back to slide 12, it just shows the metadata and defines how that metadata looks on the wire. We do have many implementations out there. So for example, as you mentioned, OpenWhisk does support this, as well as all the other projects that are mentioned on slide 13. If you look at the speakers section, you can see all the various companies that do support it. So what's the situation with Amazon Lambda and OpenFaaS and the Serverless open source project, which are all very popular and have different purposes and roles and implementations? I believe they were involved in some of the conversation around the serverless working group. Do you anticipate that they would be interested in this, or is this something that's separate from them? Also, Pivotal riff is another example. Yeah. So all the companies that you mentioned there, with the exception of Amazon, regularly join the phone call. So that's an indication that they do see value and interest in going forward with this. Amazon in the past has joined our phone calls. They've dropped off since then, and we do have some people with action items to go off and figure out why. Is it because they don't see value in it?
Is it because they're busy? We just don't know. And so we do have some people putting out some feelers to get that information. But everybody else that you mentioned is definitely part of the working group on a regular basis. And I believe almost all those people — actually, I'm sorry, except for OpenFaaS — I believe all those other people were involved in the demo. So they're putting their coding effort behind this as well. Actually, OpenFaaS was running under the hood of Dispatch. Oh, cool. I didn't realize that. Okay, there you go. Maybe another point, Doug, it's Yaron. You can actually take AWS Lambda and create a very thin shim that will translate its events into CloudEvents. So even if the cloud provider doesn't provide a native implementation, there's still room for providing it. Yes, thank you. But to answer the question that Alexis is posing, with respect to a lot of the FaaSes, the CloudEvent can be passed straight on through as event data, and then it's up to the function to be able to deal with it. There are other implementations, such as the Event Gateway or Dispatch, where I believe both are using CloudEvents as a common substrate, as a middleware routing function to transmit the event to the actual consumer. And so there are different producers, middleware, and consumers that will perform different operations on the event data. Did that answer your question, Alexis? I mean, yeah, I just think it is very important that we keep that in sight, that we don't end up with the kind of island of open source projects that people don't use, relative to a much more popular kind of mainstream serverless implementation. That's the basis of my concern. And we saw something like this happen with the API discussion on OpenStack a few years ago, as you may remember. So I'm just sort of keen to remind people of that, but it sounds like you're aware of the issue. Yep, definitely. And we're doing our best to try to address the concern. Yes. Doug, this is Arun. So let's talk offline, and I'll be happy to come back with positioning from the Amazon perspective on CloudEvents. That'd be great. Thank you very much. All right. Any other questions, comments? Yeah, just in terms of process, you know, they will be looking for TOC sponsors. So if there are TOC members on the call that are interested in sponsoring this, please speak up now. Yeah, Chris, I definitely want to. This is Ken. Yeah, I'm happy to co-sponsor that. Okay. Ken and Brian. Well, thank you. Definitely. Very nice. So is it Matt for Helm? Yes. Hello. Good morning. Welcome. Go for it. So slide 17 is where we'll start off. Helm is a project seeking incubation to become a part of the CNCF. It provides package management for Kubernetes. Like most package managers — think of apt for Debian — you don't download MariaDB and then go get your configuration files and then tell it how to start up and go through all of those things. Instead, you just install a package. And then other applications can depend on that. So you have your dependencies, and it can be well maintained and versioned, and other tools can be built on top of it, like Ansible and Chef and things like that, that can leverage this. And that's what Helm brings to Kubernetes. It brings package management for our applications. And charts can be used as dependencies and so forth. If you go to slide 18, you'll see that in the project we have Helm, and we have several of what we call sub-projects here. This term is borrowed from Kubernetes.
There's the Helm package manager, but we have a little bit more than that that we're looking to bring in. We have what we call the community charts, which are packages by the community for people to install. Then we have Monocular, which is a user interface; it has two parts. It can be run in-cluster, and it can be run as a directory to search for things, either privately, or — it's most known for being the original UI for kubeapps.com — to let you search through the community charts. And then more recently we have Chart Museum, which provides a repository service, something that you can push charts to and pull them from and interact with in the same way you can with something like NPM or Docker Hub, although it's meant to be run by many different people, not as a central repository. And so when we talk about Helm, we're talking about all of this, so it's the package manager plus some of the supporting software that we're using today. If you go to slide 19, we can look at a little bit of the history here. So Helm was started over at Deis back in 2015. Over six months it saw lots of rapid development, and then it was invited to merge with Deployment Manager, which was part of Kubernetes at the time. And it merged with Deployment Manager to become Kubernetes Helm, and that was very early on, in January of 2016. And then shortly thereafter, that March, is when Kubernetes became a CNCF project. Over time, Helm 2 was finally released, which was the merger of Deployment Manager and Kubernetes Helm, plus a whole lot of things that were learned over time. And that was released in November 2016. Then over 2017, we saw lots of growth, with minor releases and patch releases and lots of growth in usage and things like that that I'll talk about in a minute, and in people contributing. Near the end of 2017, we started to talk about Helm 3. A lot had changed in Kubernetes. There were things such as custom resources that weren't there when Helm 2 was being developed. There were changes in usage patterns and what people wanted. We learned a lot, and Helm uses semantic versioning, so we couldn't change APIs and break backward compatibility. Which means, to add a bunch of the features and make the changes we wanted, it would be time for Helm 3. And so at the end of 2017 is when we started talking about what we're going to do there. And then in early 2018, in February, just a few months ago, we had our first Helm Summit, which was a two-day conference focused solely on Helm. And it was the official kickoff to Helm 3. If you look at slide 20, we'll look at the community charts a little bit, because I think this gives a little bit of insight into usage. Back in December of 2015, there were just seven charts. But by the time we launched Helm 2, it had grown to 27 charts. Once we launched, there was a massive uptake in charts. And at KubeCon EU in 2017, which was just four months later, we had 2.5 times growth in charts. And then that growth tapered off a little bit by KubeCon US in 2017, where we had 143 community charts. And just the other day at KubeCon EU — I took the count the day before — we had 203 community charts. And it's continuing to grow. And the tapering-off effect actually isn't about interest. It's scalability. And one of the problems we're working on is how do we scale sharing charts? Because right now we have the community repo with lots of people wanting to contribute updates and changes and fixes along with new charts.
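Since Chart Museum and the static repositories built into Helm were just described, here is a minimal sketch of what a client consuming a chart repository index might look like, assuming the repository serves an index.yaml at a well-known URL. The repository URL is hypothetical, and the index fields shown are only a fragment of what a real index carries.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"

	"gopkg.in/yaml.v2"
)

// index models just enough of a chart repository's index.yaml to list chart
// names and versions; a real index carries many more fields per entry.
type index struct {
	Entries map[string][]struct {
		Version string `yaml:"version"`
	} `yaml:"entries"`
}

func main() {
	// Hypothetical repository URL; any server exposing the chart repository
	// layout (static files, Chart Museum, Artifactory, ...) could sit behind it.
	resp, err := http.Get("https://charts.example.com/index.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	var idx index
	if err := yaml.Unmarshal(body, &idx); err != nil {
		log.Fatal(err)
	}
	// Print each chart with the first version listed for it.
	for name, versions := range idx.Entries {
		if len(versions) > 0 {
			fmt.Printf("%s %s\n", name, versions[0].Version)
		}
	}
}
```

Because the repository is just a documented layout served over HTTP, multiple implementations can interoperate, which is the scaling point Matt raises next.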
And we have to change up how we do things in order to scale that growth. That's one of the things we're looking for help on, and we're already getting into it right now. If you go to slide 21, we can see a little bit of the community contributions. Right now there are 11 Helm maintainers; that's for the Helm package manager code base. And they represent five companies. And we've had 328 contributors. For the community charts, there are 11 maintainers from eight companies, with over 800 contributors to the charts. In addition to that, Chart Museum and Monocular, some of the other stuff we've touched on, have additional maintainers who maintain and manage those code bases. As far as organizations go, I went in and grabbed, from DevStats and some other places, who has contributed to these projects. And we have a lot of folks here. In fact, the first 12 on here are Platinum members of the CNCF. And many of the others are also CNCF members. There's just a lot of people contributing from a lot of organizations right now. On slide 22, we'll get into some of the community statistics. We've been running some statistics over time. And Helm is downloaded by over 59,000 unique IPs per month. We've been able to track that over the last couple of months. Last month it was 59,050. And that doesn't count things like Homebrew or other places folks might install it from, such as Ubuntu snaps or something like that. Those are additional. In April, we saw over 11,000 installations using Homebrew. In Slack, we have over 4,000 people in the Helm users channel and close to a thousand people in Helm dev. The users channel is for people who use Helm; Helm dev is where we're actually discussing how we're going to develop Helm 2 and Helm 3, and features and things like that. On the community chart side, we're able to track some of the details. And we have over 12 million GET requests for individual charts per month. And that's from multiple tools. So folks who use Helm will download them in order to install them. But people like JFrog, with Artifactory, will download and use them as well. In fact, when we talked about Chart Museum being a server-side thing a moment ago, Chart Museum isn't the only one. The way it works is there's a static implementation built into Helm itself to create a static repository, and Chart Museum provides a dynamic one. It's all based on a specification. So folks like Artifactory from JFrog are able to do it as well. So Artifactory pulling in lots of these community charts and having ways to use them is a way it actually supports them as well. And in the charts channel, where we discuss developing charts, we have over a thousand people in the Slack group. If you go to slide 23, we have Helm and the Kubernetes app survey. So we did a survey on how people use applications with Kubernetes, and we've been pulling data from that. And from there I grabbed two things that I thought would be useful. One is that 64% of people who shared the tools they were using were using Helm. And that was higher than any other tool out there that was not part of core Kubernetes. And then more than 78% of the people are using third-party software, which is something that can be distributed via Helm and charts. This could be proprietary; the majority of it, 77%, is folks using open-source software that can be packaged and shipped and depended on using charts in their applications. But there is a lot more detail in the survey if anybody wants to dig in. So why the CNCF? Helm is part of Kubernetes today.
And so why the CNCF, if it's already part of Kubernetes, which is under the CNCF? One of those things is that it has grown, and it's grown up. It's become a rather large community as a sub-community of Kubernetes. And as Kubernetes focuses on core Kubernetes, Helm is now at a place where it could use more support to help it flourish. And so Brian has offered to be our TOC sponsor. It's Apache 2. It's already been under Kubernetes for quite some time. For governance, right now each of the projects has its own set of maintainers, whether it's Helm or the community charts or Monocular. And when something needs to go up a level, it's been going up to Kubernetes SIG Apps. In the absence of that, we are looking for some guidance on maybe doing a steering committee and figuring out how to do that, to govern all of it as a whole. But Helm has now grown to the point where we can have our own conferences. We successfully had one just a few months ago, and we're looking for help in organizing those things, because some of the core maintainers did that and it was a tax; we're kind of looking for help from people who know how to do this far better than we do. Help with governance as we grow the community. We need something like the CNCF to ensure vendor neutrality, because we've got so many people from so many competing companies and collaborating companies that having a neutral, safe place for them to come collaborate together is one of those things we just know we need. And then there is working with the chart infrastructure and figuring out how to scale that in a way that benefits the whole community: those who consume charts, those who want to depend on charts, and those who want to share their applications via charts and make them discoverable. And so these are some of the reasons we're looking to the CNCF. And with that, are there any questions? Hey, one thing I've always wondered about with Helm, and it came up recently in the Helm 3 conversations, is what's the scope of the project? What's the roadmap? I think that's pretty important, because one of the concerns that many people have voiced to me in personal conversations is that Helm seems to sort of grow and grow and try to do a lot of different things at once, ranging from templating to dependency and package management. I worry that if it tries to do all of those things, it will fail and collapse under its own complexity. So what's the thinking around that? So there are two things to that. One, Michelle just dropped a link into the chat here that talks about where we're going with Helm 3. That will give you some detail on what we're planning, technically speaking. Helm, though — and I'll see if I can share it — Helm is focused on the package management piece. Think of it as akin to Homebrew or apt for Kubernetes. As far as trying to compete with Ansible and Chef and going up to those higher levels, that's not its purpose. There's a recent blog post that talks about where it sits compared to other projects. Just a moment, I'll share that. Because the goal is not to try to take over the world but to try to solve the package management problem and build it in a way that other tools can interact with it. In fact, that's one of the things we hope to do more of with Helm 3: make it easier for other tools to interact with Helm and be part of the process. And so this post that I just shared will talk about where it fits compared to configuration management and other things, including things like operators.
But it's really attempting to be something akin to apt or Homebrew for Kubernetes and not more than that. And we're encouraging others to build projects that use it, that want to be more, and to use it as a component of their service. Does that include templates or not? Yes, templates are part of package management. If you look at how apt and others do things, there is templating as part of that. We know some folks want to do more with templating and go beyond Go templates. We've been exploring other ways of approaching that, and right now it's mostly about collecting data. Are people happy with it? If not, what are the issues? What do they want to do? One of the things we ran into in the past: Helm has the ability to use other engines today that go beyond Go templates, but we've had trouble getting people to take up that part of it. And so we're attempting to bend over backwards to make that easier to use, but we're having a hard time finding people who want to go and do more than that. And that's one of the things that we're struggling with. So, for example, with ksonnet, there was a proof of concept created a while ago by somebody who's not a ksonnet user, just to show that it could happen. And we had trouble getting anybody who was a user to take that and turn it into more than a proof of concept. And so while I know people talk about wanting other template engines, and there are things you can do today, we're having trouble getting uptake on using other templating options. And yet in order to make a package manager that'll work with Kubernetes, we need a certain level of templating capability — to do things like: if you install MariaDB on Debian and you use apt, it's going to ask you for the root password. We need the ability to pass that in when you spin it up. There are certain things like that we have to do, and templates provide us with some of these capabilities. We're continuing to try to make it easier for people to use their own templating engines, but at the same time, we're having a hard time getting everybody to take us up on it. Yeah, Alexis, this is Brian. I think, you know, Helm, depending on how you slice it, provides five to ten different pieces of functionality, but I don't think it's growing in terms of scope. It's always stayed really close to that original vision of being an operating-system-style package manager, which distinguishes it from other types of package managers, like language package managers. But, you know, having that basic functionality of searching and browsing packages, managing dependencies, installing a package, lifecycle hooks, things like that, is very closely modeled on the OS package managers, and I don't really see it growing beyond that. I do suggest that, you know, now is actually a really good time — with the Helm 3 proposal there that was posted in the chat and development starting — for people to get involved to help shape that. Helm has grown, as Matt said, quite substantially as it's been kind of incubated within Kubernetes, and effectively it has its own distinct community and governance and technical conventions and things like that. It's kind of outgrown its home within the Kubernetes project, and that's why we're looking to kind of break it out to become a top-level project. I mean, that all makes sense. I'd really love to see really precise statements, if possible, on the intended direction of development, because it just helps other people who are trying to integrate with Helm.
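To ground the point above about why a package manager for Kubernetes needs some templating — the MariaDB root-password example — here is a standalone sketch of Go-template-style value substitution. The manifest and the value names are made up for illustration, and this is not Helm's actual rendering code, just the general idea of injecting install-time values into a chart template.

```go
package main

import (
	"log"
	"os"
	"text/template"
)

func main() {
	// A made-up manifest in the style of a chart template; the fields
	// under .Values and .Release are hypothetical.
	manifest := `apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-mariadb
stringData:
  mariadb-root-password: {{ .Values.rootPassword }}
`
	tmpl, err := template.New("secret").Parse(manifest)
	if err != nil {
		log.Fatal(err)
	}

	// Values a user would supply at install time (for example via a values
	// file or --set in Helm); hard-coded here purely for illustration.
	data := map[string]interface{}{
		"Release": map[string]string{"Name": "demo"},
		"Values":  map[string]string{"rootPassword": "changeme"},
	}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}
```

The design point is that the values are separated from the manifest, so the same package can be installed with different settings, in the same way apt prompts for configuration at install time.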
It would be nice to understand what are seen as the kind of major technical gotchas in the project, and which are things that the community's happy to live with or would like to improve, around atomicity, determinism, and compositionality. I mean, we are very happy using Helm at Weaveworks, but we'd love to know more about what the direction is. So that's all really... It's really a request. And yeah, I love the idea of moving it out of Kubernetes. I think it's definitely grown. Would that also mean that SIG Apps would not be so much the Helm thing, but would be more of a general group around Kubernetes apps? Yeah, we've shifted in that direction for some time. Helm is a sub-project, but Helm and charts have had their own meetings and their own space. For the most part, the SIG Apps meetings have not been Helm-focused. It is brought up in conversation, like many of the other projects are. Could you elaborate a little bit on why Helm should be separated from Kubernetes? Because, I mean, everything about Helm is Kubernetes-specific, correct? From a technical perspective, there is nothing that is neutral with respect to Helm and Kubernetes. Is that correct? Yes, and I think Brian touched on one of the big points here — well, there are maybe two points here. Brian touched on one of them, and that is that Helm is growing to be culturally different. Helm has kind of grown up, and like a child, its personality is a little bit different from the parent's. And so Helm has its own personality that's just separate. And it's also- Could you be a little more specific about that? Maybe give some examples of how — I guess I wonder, if Helm is entirely specific to Kubernetes, does that represent a divergence that potentially is going to operate at cross purposes with respect to what Helm wants to achieve? Yeah, so Kubernetes is mostly focused on the core Kubernetes project, which is about building container management. And Helm kind of sits above that, in operating applications and starting them up inside of Kubernetes. And so the people who build the container management system are different from the people who write and manage apps in that system. And we just see cultural differences between them. In the way we want to use tooling to merge patches, the way we want to use tags, the way meetings and governance work, we see a difference. And where maybe Kubernetes is more ops and DevOps, we see kind of the dev and app side of that being more in the Helm space, from a culture standpoint. So Brian touched on a little bit of that. There are things in how we want to use tooling, how we want to do process, where people on different sides of that divide would like to do things a little bit differently, and we want to give them that freedom. And then the other thing, which I think is more important, is that Helm is growing up and has become its own thing in many ways. We had a successful summit of our own. We've got lots of contributors who do nothing with Kubernetes itself, but they're all around Helm. And we'd like to optimize process and things to help grow that community of app developers and app operators who are using Helm, to enable them. And that's a little bit different. And so it's more about giving Helm the opportunity to continue to grow, rather than being part of Kubernetes and fitting within its structure, so it can grow naturally the way it wants to. But to be clear, I mean, Helm's growth is limited by the growth of Kubernetes.
I mean, you don't have any aspirations to run on anything other than Kubernetes. Helm, in some respects — it started out with a fairly small percentage of Kubernetes users using it. Now, as Matt showed, a fairly large percentage of Kubernetes users are using it. So its growth rate is maybe limited by Kubernetes, but it's to some degree independent. Just to use an example, Kubeless is a functions project. It only runs on Kubernetes, but it's not part of the Kubernetes project. So our basic stance is that not every project that is Kubernetes-specific belongs under the umbrella of Kubernetes itself. We have to draw boundaries for what's in the core project and what's outside the core project, and Helm actually falls outside of that boundary. Maybe you can elaborate a little bit on that, Brian, because, I mean, there are a couple of reasons why I'm asking. One, I just want to get clarity that Helm has got no aspirations beyond Kubernetes, right? Is that correct? I would say at this time that is correct. Okay. So it is effectively an aspect or a feature of Kubernetes, with the caveat that there are other ways of doing it. As somewhat of an outsider to Kubernetes itself, looking in, one of the greatest dangers that I see for Kubernetes is the growth of complexity. And one of the concerns that I have — and Brian, obviously I want to hear your thoughts on this — is that, clearly, there's value in allowing a separate community to express different values and so on, but I get concerned when there's too much divergence, so that to those coming up to Kubernetes, it becomes overwhelming with complexity, because there seem to be so many different aspects and there's almost too much choice. And I guess one question is, is there going to be an alternative to Helm for Kubernetes or not? Because if there isn't, and if Helm is going to be the way that we run apps on Kubernetes, I think I might argue that it should be part of the Kubernetes project. And if there are going to be other ways of doing it, if Helm is not going to be the only way of doing it, I definitely wonder about the consequences for complexity out there. Is this a valid concern, first of all? Well, I think, you know, the complexity has been mentioned by a number of people, and there have been a number of recent threads about it. I think it's a legitimate concern. Kubernetes effectively is a portable cloud platform, right? So it has about 60 APIs that cover things ranging from running containers to load balancing to authorization policy. And we're growing more and more kinds of policy, in fact, as enterprises want more controls over what things run. So we are trying to bound the scope of the project and take a hard look at what needs to be in the core. Effectively though, you know, we have to draw the boundary somewhere, and wherever we draw the boundary, that's going to create some amount of diversity outside of that. With respect to configuration and deployment, I think that that's an area that's always been fairly fragmented, and there are a lot of different opinions about how to do things. Some people want to use a PaaS, some people want to use functions, some people just want CI/CD, some people want to use something more like a configuration management tool. Like, I talk to some customers and they just like Ruby and they don't like Python, or vice versa. So I don't think there's going to be a one-size-fits-all about this.
I have a 25-page document about that; I could post a link to it. There are alternatives to Helm today, and I think there always will be. You know, Heptio has developed a tool called ksonnet, which is inspired by some tools developed inside of Google. So I think this is an area that's still developing and people are still experimenting. Hubpack is an example of a project that's just focused on package management, and it's inspired by Go's Git-based style of dependency management tooling. I think that's a fairly promising approach, and it's completely orthogonal to templating or lifecycle management, for example, where there are other interesting projects, like operators, that are trying to address the application lifecycle management space. So I think this is an area that's still under very active development, and I see that as a good thing as opposed to a bad thing. Sure, and that helps answer the question: you don't view Helm, in the limit, as going to be the only alternative. You think that there are going to be alternatives to Helm, and from your perspective, that's a healthy thing. Yes. Okay. Yeah, that answers the question. It does leave me with the complexity concern, because I think that there's a very real risk of complexity death. There's just so much, and there are so many different dimensions of freedom, but ultimately it's always going to be a balance. But that answers the question. Thanks. Yep. Thanks, Brian, for that. And thank you very much, Matt, for taking us through the proposal. I think we're done on the bulk of today's meeting. Is there a quick networking update from Ken, I believe, to come before we're done? Great. Hey, Alexis, thanks. So the networking update will be pretty quick. We've had kind of a slow start on the networking workgroup this year. We had a couple of good meetings and started to make some progress, and then I think people just got busy and we weren't able to make some additional follow-up meetings. And so I definitely need support. If there is anyone on the call today that is part of the networking workgroup, I could definitely use your support in getting some attendance and driving some discussions in the networking workgroup. There are some decisions that we do want to make in the coming meeting if possible, one of which is going to be which of the networking services that we've already identified we want to try to tackle, and what that would look like with the new workgroup model that Chris and Dan are putting in place. And then there's a desire to start looking at projects, as you mentioned earlier, Alexis — to find some example networking projects in this area of the services we're looking at that are needed for cloud-native application deployments. We want to try to reach out to those projects and offer to talk to them about what they can do as part of the CNCF. Any questions on that? Anyone? Okay, our next meeting is next week, Tuesday, at the same time as this, which I think is eight o'clock Pacific time, ten o'clock Central. Great, thank you. Thanks, guys. I think that brings us to an end for today. Chris, anything else that we missed? No, we're all good. So I'll do the vote for Helm next week. Just leave it open for people. I have a quick question. Sorry, who is talking? Sorry, it's Erin. I had a quick question about Helm. I thought when we kind of switched to the new model, we needed two TOC representatives — is that just for...?
It's for sandbox. Incubation-level projects, which Helm is, require one TOC sponsor and a formal vote, where you need a two-thirds supermajority plus one from the TOC to get accepted. Okay, and so for sandbox, we wanted two for the CloudEvents proposal, and I thought only Ken was on that. I didn't see a second sponsor. I believe Brian Grant is the other one. Okay, I didn't see it on the proposal. Yep, I'll take a look at it, so don't worry. Okay, thanks, yep. Thank you, Chris. Thank you, Erin. Thank you, everybody. I think that's good going. Cool, take care, everyone. Thanks, you too. Take care. Bye. Bye. Thanks, Mike.