Oh, right. I see slides. I see a very quiet group this morning, more or less, so we'll just go ahead and get started. We've got quite a few things to wander through today. We've got our standard agenda, we've got some notes coming in from KubeCon, we've got our SIG updates, and we'll be having presentations from ChubaoFS and OperatorHub. I am playing the role today of Liz Rice, who is currently out. Yeah, some notes from KubeCon. I wanted to highlight that we had 11,000 attendees there. And a big, important thing, as Chris has noted over in chat: December 4th is the deadline for our Amsterdam CFP. Chris, anything on that? No, just that the CFP closes tomorrow. And we're opening it up for people to make any comments, thoughts, or suggestions on how to improve KubeCon next year, so we're open to suggestions here. There's also a survey that we asked people to fill out, but we could use a few minutes on this call for some feedback. Yeah, I'll also be watching chat to see if there's anything in there. That survey is also linked in these slides, so happy to have people look at it from there. We can move on. Coming up, we've got KubeCon, which you've seen. And for those that are at re:Invent, we want to let you know that we have networking details over on the Twitter account as well as in the slides here. Please RSVP so we know you're coming. The last pieces in here: the Kubernetes Forums are coming very, very soon. And one note, we do still have India sponsorships available, as well as the opportunity to do co-located days for the India events. So please reach out if that's something you're interested in. All right. Other housekeeping: we have current votes on the table. We've got a Falco vote, we've got a SIG Network vote, and the TUF graduation vote. These are all currently open.
If you want to add either binding votes from the TOC or non-binding show-of-support votes, please go ahead and drop those in the mailing list. All right. So, an update from SIG App Delivery here. First, we had our first two sessions at KubeCon: an introductory session and a detailed session on the app delivery model we have been working on. This was very well received, based on the feedback from the community. It was very good to have that structured approach, and we got some additional questions around it. Very notably, we also asked the audience what they would be most interested in learning more about. One key aspect here was Kubernetes for air-gapped environments, as we got some feedback out of the telecommunications industry. The main question is basically: how can I run my Kubernetes environment if it has no access to the internet, which obviously also ties into the whole application packaging topic. Tomorrow we will have a presentation from Microsoft and from the CNAB folks on best practices there. But this was something that came up several times during KubeCon: how these situations can be handled, and what the problems are. That is, I think, something we should follow up on further and develop as a practice with some work there. Operator definition: this is still work in progress. We were asked by the TOC to come up with a definition around operators and what we want to do there. This has been started already, but we will have a first discussion tomorrow and then get back to the TOC with the work in progress. On project evaluation: we have a couple of evaluations pending from SIG App Delivery right now. This is just a friendly reminder that we would like some input on evaluation guidance and due-diligence documents which we could use as templates here. I think where we're kind of stuck is on what the ideal format is to present this back to the TOC.
This is where SIG App Delivery is still looking for guidance on how we should best handle this, so everything the TOC can share with us is highly appreciated here, whether it's past evaluations or whatever else is available. During our KubeCon meeting, it was mentioned that there is now a template that has been, or is in the process of being, created. So please share this with us so we can use it here as well. I got your message. Sorry, yes, I've been out of office for a while, but I'll send you all of the material you requested. Yeah, thank you. I was assuming that given the Thanksgiving holidays you might be out. And we can provide feedback almost at once, once we have these documents, on some of the projects that have presented here. We would also like a more detailed discussion with the TOC about the Argo project, which is currently under review. The question, or the challenge, here is that looking at Argo, you're looking at Argo CD, Argo Events, Argo Workflows, and those projects. It's actually not a single project submission; it's a submission of multiple projects, and these different projects also have different maturity and community adoption. So this was more my intent: I was interested in an open discussion of how to handle situations like this, where somebody proposes not just a single project but a bunch of them all together. The Operator Framework is actually a nice coincidence, given that we have the discussion about OperatorHub later on. There was a question whether we should split the Operator Framework as a project from the OperatorHub discussion, simply because the Operator Framework is obviously a project to build operators, while OperatorHub comes with its own infrastructure requirements. There's more around the hub itself: whether it's accepting other operators, how to handle these things, and also, obviously, the cost of running it.
But as we have a presentation on OperatorHub today, I would defer to the presentation later on. And regarding the term "operator": in this case it's the Kubernetes operator, not the person who's operating. On the question about KUDO: I think the TOC is already engaging with the KUDO folks, so there's no need to write down the feedback from SIG App Delivery. But that is one of the projects that needs to go through SIG App Delivery as part of its proposal to the TOC and the CNCF, correct? That's an app delivery project. Yes, it is. But we also talked to the TOC, and the TOC already provided feedback, so the TOC should already have everything that's needed. I think they've already re-engaged with the KUDO project. I'm also going to talk a little bit about that later on when we get to the Operator Framework. Okay, that's it from SIG App Delivery for now. Thank you. All right, I think we can move on to Alex. So, SIG Storage was quite busy at KubeCon. We had a well-attended session where a lot of the leads and co-chairs were present. And it was really good to have a number of people come up after the session to show interest in working with us and contributing, both to use cases and to some of the white papers that we're building, which I'll talk about in a second. We also had a co-located event, Cloud Native Storage Day, which wasn't technically a SIG Storage event but had a number of SIG Storage members contributing and working on it. It had sponsorship from a number of different organizations, but also a lot of input from end users of storage projects and products. So we're now going to discuss whether this should potentially come under the auspices of SIG Storage and maybe help turn it into more of an official CNCF SIG Storage event. But these are only the earliest of those discussions, so we need to discuss this on our next call.
And finally, thanks to Amy and some of the marketing folks at the CNCF, we now have a logo, which we're probably more excited about than we should be, but it is kind of cool. Can we move on to the next slide? I've put up a list of work in progress that we've been working on over the last couple of months and things that we need to close off over the next few months. There is a database addition to the landscape white paper: the database section of the original landscape white paper was descoped due to time, so we now have a few people working on that. We're also working on a performance and benchmarking white paper, which covers some of the gotchas but also defines terminology and some of the tools you can use to benchmark and test performance for things like volumes and databases. And we've also started putting together a storage use-case library, which basically pulls on the information we put together in the original landscape white paper; the idea is that we're going to build a library of examples of best practices for different use cases. Also during KubeCon, we had a good discussion between some members of the TOC and the SIG members about some of the processes and workflows. I've put in a link to some of the discussions we've been having in the storage SIG, and we'll be looking to integrate that into a document that Liz is putting together. Next slide. I noticed you're also having the ChubaoFS presentation later on, so I put this in just for informational purposes. It's the details of the SIG review of the ChubaoFS project, which we did in the summer, and some of the information that was gathered. As I said, we're happy to move forward with that, and hopefully that information is useful for the TOC in their evaluation. And that's it for SIG Storage. Hello, this is Sarah from SIG Security.
The main activity we've been doing over the last few months is organizing ourselves so that the growth in our membership doesn't take all of our attention, and I want to highlight the roles that members have been taking on. We have meeting facilitators, project leads, and new security reviewers. You can check out that link if you want to see how we're helping the group self-organize. One of the outcomes of the in-toto security assessment, which was our first assessment, is the supply chain compromise catalog. There's a PR where Santiago Torres, the leader of the in-toto project, has contributed their collection of supply chain compromises. And as the next step, we already have a PR out for categorizing them. We want to learn from these and help educate ourselves and the community about the different classes of supply chain compromise threats. We also had swag. Our group is also very excited about our logo: we have a raccoon secret-agent spirit animal for SIG Security. So Amy, thank you for putting together stickers, which were available to the members and people interested in SIG Security. And then we decided to bestow hoodies on the members who had taken on roles; that's Brandon Lum and myself before the intro session, sporting our hoodies. Next slide. We had Cloud Native Security Day, which was sold out with 175 people. We had lots of contributed talks, and there was an open-spaces session, which allowed for a lot of discussion; we got a lot of positive feedback about the interaction between the people attending. We had an intro session and a deep dive. And you can see in the bottom right that a lot of the active members and new members got together for a dinner social. Next slide. I wanted to highlight a recent thing we've done in preparation for KubeCon, where we added templates for issues.
We had some templates already, but in particular we added a presentation template. This allows anyone to contribute an issue that automatically gets tagged, and it really helps streamline setup: Cloud Custodian is going to come present to us on December 11th. I also wanted to announce that tomorrow, December 4th, Melanie Rieback (I hope I'm pronouncing that right) from Radically Open Security will be giving a talk to the group. Radically Open Security is a not-for-profit computer security consultancy that recently did open source audits for Mozilla through their MOSS program. They're going to talk to us about how they do open security. So we're really excited to have Melanie Rieback there, and anybody is welcome to attend: 10 a.m. Pacific tomorrow. All right. For SIG Network, we had an intro and a deep dive at the same time at KubeCon. As part of that, Ken, Matt, and myself were present to help introduce the mission and goals of the SIG and provide clarification around them. For my part, it both invoked some anxiety and was encouraging that there was a full room of people interested in the set of topics stated in the charter. It was particularly heartwarming to hear some network engineers, some non-developers, raising their hands. They were actively interested in the topics at hand and wanted to ensure that they should be participating, that the set of discussions was going to be inclusive of non-developers, if you will. Network engineers are developers now, Lee. Come on. Yeah, that's right. Be nice to us, man. That's right. It just takes some Python, that's all. So, on that, there was an article written up by Sean Michael Kerner, which was appreciated, that describes some of the discussions there.
I think, both given re:Invent this week and the pending final binding vote for the SIG, our upcoming first-Thursday-of-the-month meeting will probably be postponed this week, pending those two items. Ken and Matt, anything that I missed from our discussions at KubeCon? No, sounds good to me. I was just going to add that there's definitely a lot of interest in the SIG, and I was really encouraged to see that as well, Lee. So I think it's a good sign that the CNCF is moving down the right path by making this a SIG. And there are a lot of not-network-only questions being asked, like: how do we integrate this with the cloud native experience? So I think it's definitely the right timing, and there's a lot of interest, so it's good to see that. Thanks for putting it on, Matt and Lee. On to presentations, with ChubaoFS. Go on in. Hello everyone. Hey. Yeah, today I want to present ChubaoFS. ChubaoFS is a distributed file system designed for cloud native applications. It targets containerized and stateful services that need persistent and reliable storage that can be accessed like a local file system. And it's production-ready, which means it's already being used to support more than 160 application services running on JD's container platform. ChubaoFS has several key features that we think are important to support cloud native applications. The first one is high performance: we try to optimize file operations whenever possible to provide a user experience like operating on a local file system. It supports multi-tenancy, which means that different application services can share the same underlying storage infrastructure. It has a general-purpose storage engine, which can be used to store both large and small files. It also supports different file access patterns, such as sequential and random access.
ChubaoFS is highly scalable because we employ a separate metadata cluster to store the file metadata. It provides POSIX-compliant APIs, which comply with POSIX semantics. By the end of December, the next feature will be an S3-compatible API as well, which is another big feature for us. Next. Yeah, so here is the general architecture of the file system. The upper portion is the container platform; the different colors represent different application services running on it. The lower portion is the shared underlying storage infrastructure, which has three components: the data subsystem, the metadata subsystem, and the resource manager. The data subsystem is where the file contents are stored, the metadata subsystem is where the file metadata is stored, and the resource manager manages resources and performs the orchestration. We also provide a docker-compose file to create and start the subsystems and the resource manager with a single command; that is the easy way to try ChubaoFS on a laptop. We also recently released an integration with Helm, in a separate repo. So here is a brief history of the project. We launched the project in January 2017 and open-sourced it in March 2019. Three months later, we did a presentation to SIG Storage, on June 12 of this year, and shortly after that we got our first external user, on July 4. We presented our industry paper, and right after that we got our first maintainer outside JD. In the middle of August we released our CSI driver support, and two weeks later we submitted our proposal to the CNCF Sandbox. Just recently, we released the Helm support. The project itself is on GitHub under the Apache 2.0 license. It has received a little more than 400 GitHub stars.
It has 77 GitHub forks, and we currently have 14 maintainers from three different companies. So next I want to talk about the production adopters, about how ChubaoFS has been used at JD. The first use case I want to talk about is machine learning. At JD we have our in-house machine learning platform. On this platform, data used to be stored just on local disk. But as our business grows, we get a lot of training data; the size is mostly at the TB level, and the content of the data keeps changing. This is a use case where ChubaoFS can be a good fit, because it provides those POSIX APIs, so the migration from the local-disk solution to ChubaoFS requires only minimal engineering effort. And on the other hand, because of this migration, we eliminate the limitations of local disk space. The second example is MySQL database backup. Per the official MySQL documentation, doing a backup usually requires an object storage SDK or REST APIs, but that increases the operational cost for us, and the backup files are processed by multiple layers of services, which hurts performance and makes troubleshooting and debugging somewhat painful. So this is another use case where we switched to ChubaoFS: by doing so, we write the backup files just like we do on local disk, and we have the page cache and write cache to greatly improve performance. And by checking the log files provided by ChubaoFS, we can easily check if anything failed during troubleshooting. There are more use cases than just the two I mentioned; you can find them online in our documentation. Next. ChubaoFS has two external users. The first one is Reconova.
It's a company in China that provides visual perception solutions; their use case is mostly using ChubaoFS to store a large number of small image files. The second one is an e-commerce vendor of industrial supplies. They have two use cases: one is Nginx log storage for sales settlement, and the other is product image storage, similar to Reconova. Next, please. Then I want to talk about the scope of ChubaoFS, the alternatives, and how it compares to existing CNCF projects. From the distributed file system area, one of the famous ones is GFS and its successor Colossus, which is Google's in-house solution. From the public cloud side, we have AWS EFS and many others. We also have CephFS and GlusterFS, those kinds of open source storage solutions. We have a very comprehensive comparison of performance and stability against those open source solutions in our paper. Next. How does ChubaoFS relate to other CNCF projects? The first and most important is Kubernetes, because we use ChubaoFS to support our container platform internally at JD, at really large scale. We also have Helm support, as the package manager. We use Prometheus as the default monitoring system in the file system. And we plan to integrate with Rook as our storage orchestrator for Kubernetes. Next. So here is a list of the current CNCF storage projects: OpenEBS, Rook, TiKV, Vitess, and Longhorn, the new member of the Sandbox. We see there's still a missing piece there, a distributed file system, and we think that's why ChubaoFS can be a good candidate. We actually did a more technical presentation to SIG Storage on June 12 of this year.
There were lots of discussions and feedback during the presentation, but at this point there are no outstanding questions related to the project. But besides the technical questions and feedback, there is some valuable feedback we want to bring to the table. The first item is the scope of ChubaoFS. ChubaoFS is designed for services and applications where most of the writes are sequential. Although we support random writes, in our use cases internally at JD, most of the writes are still sequential. And the file system itself is not designed for cases where strict metadata consistency and atomicity are required, such as direct I/O, which means the user bypasses the OS-level caches and talks directly to the file system itself. That kind of hurts performance and is not recommended. The second thing I want to bring up is the CSI integration, which we were asked about during the presentation: the CSI integration needed to be brought up to date. Following those suggestions and feedback, we removed the CSI components and created a separate repo for the updates to the CSI driver. We also submitted our PR to the community for review. Next. So, the last slide: I want to talk about why the CNCF Sandbox. There are three reasons. The first one is a neutral home: by joining the Sandbox we get clear governance, and it's a safe place to explore collaborations with others. The second reason is alignment with the CNCF mission; based on our company's cloud strategy, we want better public visibility, and we want to help make cloud native computing ubiquitous. And the third reason is the project itself.
The ChubaoFS project has a strong relationship with other CNCF projects, because JD runs the complete ecosystem in production. As an end user, we have experience with the file system itself, with how to put it into production, and with what the requirements need to look like in order to support cloud native applications. So that's another strong point for us joining as a Sandbox project. And yeah, that's it. Next. Yeah, that's it. Thank you. If there are no questions, we can move on. Hi everyone. We're going to talk about the Operator Framework today. I know that we specifically mentioned OperatorHub earlier; it's kind of bundled in its current format, so I'm going to talk about all of it. I'm Rob Szumski. I come from CoreOS and was one of the very early employees there. So I'm presenting on behalf of Red Hat, but really we've been building this community for several years now, starting at CoreOS. The Operator Framework is really about a gap that we see in the Kube ecosystem around the next wave of running applications: these really advanced distributed systems that need active care and lifecycle management. This is why we bootstrapped the operator concept, and it's been proliferating all over, which is great. Next. So we really see a gap in building these things, running them, as well as discovering capabilities about them. And the framework has a number of projects to address these needs; you can find them all on GitHub. You've probably heard of most of these. I'll mention that I'm going to go through these slides really quickly; I mostly want to have a discussion today, but they're included so you can reference them later on. Next. A quick history of the concept: we invented this pretty early in the Kube ecosystem, in 2016, with a few bootstrapped operators from CoreOS.
Next. And then we rapidly progressed into a ton of operator community building, along with Red Hat and others, unlocking stateful workloads on Kubernetes. We have an Operator SIG that started under the OpenShift Commons banner, and we would love to bring that into the CNCF. We're also involved in a lot of the add-on discussions for Kubernetes in various SIGs, like Cluster Lifecycle, and I think that's since migrated to a different SIG. So we're discussing things all over the place. Next. And then most recently, we made a big launch with OperatorHub.io. This is a place to discover these operators, most importantly addressing the gap where it's not just popularity but what the capabilities of these operators are, how mature they are, and what exactly they bring to the table. Some of them are going to vary in maturity, and that's really important for production workloads. And where we are today is we've got a ton of mindshare and adoption. I've got a stats slide, but we've got 1,000-plus forks of our SDK, so folks are using our tools to build these operators. We have a number of commercial products delivered via operator, and a number of CNCF open source projects delivered via operator. And this is kind of seen as the way to bring this next generation of workloads onto Kubernetes. Next. I mention this just because I know there was some talk in chat about the definition of an operator. This is our definition, and I would like to continue to see it evolve over time, and we'll work with the SIG on that, but it's really embracing Kubernetes extensions in a domain-specific way. That domain-specific way is really the gap that we see: every database is different, every application is different, and the experts can bake that knowledge into an operator. Next. So why use the Operator Framework? We've broken this down into a number of different personas that we think are all equally important.
We've got developers that want to build an operator and don't want to do all this repetitive scaffolding; we can do all that for you and bake in some best practices. We've got facilities for cluster admins to control which operators can be installed on clusters; operators have fairly high permissions depending on what they're doing. That's why they're so powerful, but there need to be some guardrails in place, or else your cluster can run amok. And we think production use cases really need those guardrails. The users of your clusters are why we're all here: we're providing Kubernetes to our end users, and they need a cohesive set of tools to discover what services are on those clusters, if they want to wire a front end to a caching layer to a database, all powered by operators. Knowing how to do that is really important. As for the main tools that we have for building these operators: we think it's really important to address the entire spectrum of skills that folks have, and we want to meet them in their model of the world. To do that, we have kind of a no-code operator, if you will, which is building a Helm chart into an operator. This is a way to get the human out of the loop and have a programmatic, Kube-native interface to Helm without using a CLI. We have Ansible tools, so if you're more of an operations-focused team and you know how to write Ansible, or have an existing investment in Ansible playbooks, you can build those into an operator and do a whole ton of really cool things. Then we've got our Go SDK, which builds on Kubebuilder and some other CNCF projects, and that's really the power you get from all of client-go and the associated tooling. And we offer all three of these in the framework, which we think is really important.
This is another gap that we have, where we need to up-level these operators so that they're trusted by the community, and we think testing and validation are really key to that, so we've got that built into our SDK as well. Next. Another part of the framework is the Lifecycle Manager. This addresses a huge gap we see in the ecosystem so far, which is not just installing an operator but managing it over time. CRDs are currently cluster-wide, so you need to avoid conflicts with those, and if you have operators that depend on other CRDs being installed on the cluster, the Lifecycle Manager has a ton of dependency tooling to get that done. So you can install what we call a top-level operator that might install, for example, the front end, the caching layer, and the database all in one go, and map those dependencies down. We think this is really key to having a really wide ecosystem of operators that work together and talk together. Next. And lastly, I mentioned this earlier: we've got OperatorHub.io. This goes past just popularity into actually looking at those dependencies we were talking about, and at the maturity and the capability model that I shared in chat earlier. It's a recognition that we want these operators to be trusted: when they're running your storage, your production databases, and your important e-commerce applications, they need to be really bulletproof. So OperatorHub has some automated testing as well as a PR-based review process to ensure quality. We think that's really, really important, and that's a gap we don't see addressed anywhere in the ecosystem today. Next. Here are some quick stats: the 10,000 SDK clones, I think, show that a ton of people are using this. We've got 600 folks on our SIG mailing list.
And so that shows you that there's a gap in knowledge here, and we've got a community built up sharing best practices and talking about the next operator frameworks that we want, like a Java SDK for example, so we've got a ton of engagement. We've got 207 combined contributors across all of our sub-projects, and a ton of different unique organizations contributing as well. So we think this is a really vibrant, healthy ecosystem. Next. Here's just a quick sampling of tweets. These come in all the time, but folks are seeing that the SDK is a really easy way to jump into this. The other projects that come afterwards, like the Lifecycle Manager and OperatorHub, speak to the different steps that you'll take after you start building an operator. And so we've got folks, like I said, building commercial products, we've got really healthy open source ecosystems using these, as well as regular end users building prototypes and things like that. Next. I think the most powerful endorsement of operators we've seen is in quotes like these; I pulled one of them from KubeCon EU earlier this year. Basically, if you're going to run a complex distributed system, in this case stateful storage, the advice is really that you have to build an operator to do it. You need something actively looking after it: rebalancing data, reacting to monitoring alerts, doing anomaly detection. These things live one level above where our Kube resources are today; a StatefulSet isn't going to get you all the way there. And so we think that's the huge gap that operators fill in our ecosystem. Next. So here's our NASCAR logo slide. A bunch of these are CNCF projects, as well as open source communities and commercial entities. The logos on the bottom of this slide are companies that are building internal operators.
These are the ones that at least publicly talk about it; we know there are a lot more doing it. And I think that's the power here: they're building internal applications that talk to these open-source operators, really fostering that ecosystem and all the interconnections there. As we build up more and more of this, I think we're going to see it start to explode even more than it has today. Next. So this is the meat of what I really want to talk about. We've got a lot of overlap and engagement with SIG App Delivery. I'm not going to walk through all of these, but we have a lot of things we can bring to the table to address some of the gaps we see in this ecosystem, and I think we're very well aligned in that regard, around a lot of the different things in the Lifecycle Manager and the SDK. Next. I also wanted to address some of the feedback from when we presented to SIG App Delivery, I forget exactly when, about a month or two ago. Here's a quick highlight of some of the feedback we got via email and the chat during that discussion. Unfortunately, we ran out of time in that meeting, so there wasn't a lot of live voice-to-voice discussion, but this is a summary of some of our PR conversations and of the feedback around community governance, which we got a ton of. So we have opened some PRs and some docs about our community involvement and those regularly scheduled meetings; it looks very much like any other CNCF project. We also got a lot of questions around packaging formats, like whether we need a new packaging format. The short of it is we're not super opinionated on this; there was a question of whether Helm is the packaging format.
We think the Lifecycle Manager and some of what's included in that specification go beyond just packaging. It's about reacting in dynamic environments, registering the CRDs, the dependency model, and how you validate an upgrade path between operators; those are way more powerful than just a packaging format. There's still some discussion to go there, but I think we bring a lot to the table, like late binding of operators as they're installed. Managing CRDs was another topic: the question was, can Helm do this? The important part is not just "here's the CRD YAML, go throw it into the cluster"; it's the dependency management, and it's the recognition that CRDs are cluster-wide and that installing one is a privileged operation. That's a really key thing we think the framework and the Lifecycle Manager do well that isn't addressed today. And as of now, I think the Helm 3 community is saying that CRD management is currently out of scope, which doesn't mean it won't be in scope later on. So I just wanted to summarize that discussion; you can follow those links for more. And then last, I know there was some mention of KUDO and other SDKs. We had a great KubeCon discussion with the KUDO team, and I think there's a path forward: it seems like it makes a lot of sense for KUDO to come in under the Operator Framework as a fourth type of operator. I think we've got alignment on both sides for that, so we're really excited about it. And like I said, we've been discussing this all over the place in different SIGs and hope to continue to do that as well. I think that's all I had on this; if you can go to the next slide, I think it just says questions. Oh, never mind, one more slide. So here's a big list of some of the other CNCF projects that we've got operators for.
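Because CRDs are cluster-scoped, the conflict check a lifecycle manager has to make before applying anything can be sketched in plain Python. This is a hedged illustration only; the function and data shapes are invented for the example and are not real OLM code:

```python
def check_crd_conflicts(installed_crds, incoming_operator, incoming_crds):
    """installed_crds maps CRD name -> operator that owns it;
    incoming_crds is the set of CRD names the new operator wants to register.
    Because CRDs are cluster-wide, a second owner is a conflict, not a no-op."""
    return [
        f"{crd} is already owned by {installed_crds[crd]}"
        for crd in incoming_crds
        if crd in installed_crds and installed_crds[crd] != incoming_operator
    ]

installed = {"etcdclusters.etcd.database.coreos.com": "etcd-operator"}
print(check_crd_conflicts(installed, "my-operator",
                          {"etcdclusters.etcd.database.coreos.com"}))
# reports one conflict: the CRD already has a different owner
```

The point of the sketch is the ownership check itself: applying the CRD YAML is the easy part, deciding whether it is safe to apply is what needs cluster-wide knowledge.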
And things like that. Like I said, we're driving work with KUDO and the Helm folks. I saw Matt has some comments in chat; if you want to chime in really quick, that would be great. So I think we can start the discussion; that's all the slides I wanted to get through. Matt, go ahead. Okay, so there are a few things here. One is CRD management, and this actually gets complicated. As I poke around the Kubernetes community, there is no one way to do CRDs, and so Helm wants to provide a bunch of features, but the problem is coming up with the right ways and the right patterns. I know the Operator Lifecycle Manager makes a number of assumptions, and that shuts out other cases, so it becomes a problem of aligning around CRDs. That's really something the Kubernetes community needs to do, and Helm does want to, and has certain features around, CRD management. In fact, many people do install operators, CRDs and all, with Helm today. So I think this is something the Kubernetes community probably needs to come together around. It's not the case that Helm doesn't want to do this; it's that Helm is trying not to shut out certain assumptions and cases, so we're conservative in what we do with CRDs until we're able to solve for many of the cases out there. I understand the Operator Lifecycle Manager makes more assumptions, even beyond what the Kubernetes community is doing right now, and I think that's kind of the difference. But during the Helm 3 lifecycle we will be adding more CRD features, and there are CRD features in Helm to manage this; people do deploy operators with Helm. And that even gets into some of the Operator Hub stuff. I know there's the testing framework; I think that requires the Operator Lifecycle Manager, is that right? Operator Hub? Yes.
So it does require the Lifecycle Manager to, like I said, go beyond installation: it's upgrading CRDs if they need to be upgraded, providing that path, providing the dependency resolution. But doesn't that make the Operator Lifecycle Manager a hard dependency for an operator being listed in Operator Hub? Somebody could go write one in Rust, maybe, and deliver it with a Helm chart, but it couldn't be listed in Operator Hub in the current flow because it wouldn't fit with the current testing framework and dependencies. Is that right? Yeah, today that is correct. Is there any roadmap around opening it up to operators written, deployed, and managed in other methodologies? The roadmap has kind of been contingent on some of these discussions. I think we're very much open to that, and we've got some changes underneath the hood, what we call our bundle format, which is probably a little too low-level for this discussion, but that would open up being able to package things as Helm charts and get rid of some of those dependencies. We kind of want to know what the future is before we commit to that, but we're very much open to it. The other thing I noticed in here, and since I've got you I want to ask about it, is on your slide about the Operator Lifecycle Manager. I was trying to find it myself here. There was talk about subscriptions, if I remember right. One slide back from this. Yes, a subscription for your operator. This kind of puts the operators using the Operator Lifecycle Manager into a SaaS model or service-catalog-type model, which works great for things like MySQL-as-a-service or Postgres-as-a-service in a cluster, as a kind of add-on cluster extension.
But there are many cases where people are using operators in an entirely different way: it's maybe my one bespoke application, it's not a SaaS within a cluster, and the subscription model and those ideas don't fit. How does that fit within the Operator Lifecycle Manager and Operator Hub? Yes, you can think of the subscription as a kind of combination of some lower-level OLM features. What you could do is basically what I'll call a one-off install: do a manual install of one of these versions and have it be installed and health-checked by OLM, but then not actually upgraded; you don't have that subscription idea. So we can address both of those. It's: do you want the high-level, SaaS-like experience you mentioned, or do you want to be lower-level and just kind of install this and... Okay, but most of this is centered on the use of the Operator Lifecycle Manager, right? It's kind of the bridge piece between all the things. I mean, I think there's a huge piece before that, of building these operators and building them correctly and all the best practices and knowledge there, but OLM is a really big piece of this, yes. Okay, thank you. Appreciate the questions, Matt. I think it's important that we're completely transparent; that's what we want out of this process, and we definitely want to support as many formats as we can, but in a way that we can properly test as well. Hence the reason for the framework; I think you can appreciate that. So feedback is definitely welcomed. Yes, I do. And that's one of the things we do with Helm charts: we do lots of testing around those things as well. I appreciate it because we do that ourselves over on the Helm project.
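The distinction discussed above, a subscription that follows a channel versus a one-off pinned install, can be reduced to a very small sketch. These are illustrative names only, not the real OLM Subscription API:

```python
def desired_version(mode, pinned_version, channel_latest):
    """A subscription follows its channel and upgrades automatically;
    a one-off install stays on the manually chosen version."""
    if mode == "subscription":
        return channel_latest
    return pinned_version

print(desired_version("subscription", "1.2.0", "1.4.1"))  # -> 1.4.1
print(desired_version("one-off", "1.2.0", "1.4.1"))       # -> 1.2.0
```

In other words, the SaaS-like experience and the bespoke single-application case differ only in which version the reconciler treats as desired; the install and health-check machinery underneath can be shared.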
One question we also had during this discussion, since you eventually want the CNCF to host Operator Hub, is what the actual cost would be for hosting something like it. Yeah, honestly, it runs on an OpenShift cluster today, and it's actually a fairly small one; there's not a ton of resource requirements there. I've got some Jenkins jobs that are kicked off. I don't know exactly what the cost of that is today, but it's not astronomical by any means. Yeah, it would be great to get this, because obviously if it's eventually adopted by the CNCF, the CNCF should then be able to run the hub as well. And you mentioned it's running on OpenShift; does it have a hard dependency on OpenShift, or would it work on any Kubernetes cluster? There's no dependency on OpenShift. I want to say it's about 20 pods, and I think the cluster has three or five nodes on it; nothing crazy. And so, these are all great questions. Where should we capture those? Given the differentiation of the SIG versus the TOC, can we just add these to the PR and address them accordingly, as either things we need to reconcile or future items? I'm not sure how we want to... the project proposal GitHub PR would actually be best. Perfect. So any follow-up discussion should happen in that PR? Thank you. Okay, perfect. Thank you. Other questions? Good. Since there are so many, would it be worthwhile to have a follow-up live discussion in the SIG rather than just trying to barf it all up into this PR? I mean, I think we're pretty open to whatever seems like the best avenue for discussion. The PR seems easy. All right, we'll follow up there. Thank you. Appreciate it. Thank you. All right, that wraps up our agenda, and we have four minutes to go today. Any more questions, anything else that people want to surface in this meeting? One item: SIG Runtime.
I think we decided two-odd weeks ago that we were going to put that up for a vote to the TOC; the charter is done, and we have chairs and SIG leads. I think the PR probably needs to be slightly updated, but should we do that in the next two weeks? Yes, if you can do that, that would be super. Thank you. I understood you were going to call the vote; maybe there was a miscommunication. If you can update the PR, I will work with the TOC to call the vote. Okay, cool. Thank you. Thank you. I had one little announcement. I just sent a note out to the TOC list. Brendan did a great flowchart earlier this summer about the project process. There have been a lot of questions and confusion about that, so I made a marked-up version of it, which is PR 321. As I said on the mailing list, I'd love some feedback. We're just trying to describe what the intended process is, so I have no attachment to the boxes and arrows there; I just want to help drive it into documentation. Anything else? Okay, good to see everyone. I will be posting this up on YouTube as soon as I have the recording, and we'll see you all next time. Thank you. Thanks. Thank you.