Hello. Hello. Hey, Chris. Good morning. Hey, good morning, Ken. Good to hear from you. Yeah, happy new year. We haven't talked since the new year. I know, I know. Seems like it's been forever. We'll get started about five past. I shared the TOC deck on Slack and in the chat, for folks who haven't seen it yet. Brian Grant is on. Jonathan Boulle, Sam Lambert. Hey, Alexis, how's it going? Hey. Good. We'll get started in a few minutes. Right now, are Ben, Brian, Camille, or Solomon on? I haven't seen them yet. Ben mentioned that he might be traveling and not able to attend. Okay, no worries. Thanks for the heads up. Just trying to see if Cantrill is one of the 415 numbers; he usually dials in in the morning. Got it. I hear you, Brian. A little choppy, but we hear you. We'll get started in two minutes. All right. Looks like Ben Hindman just got here, so that's good. Definitely have quorum. And it looks like Camille just joined us. It's Alexis. You want to give it a go? We've got 8 out of 9 TOC members here. We're just missing Solomon, so I think we have enough. Hey, everybody. Hey, Alexis. I'll send Solomon a message in a sec. Yeah, no worries. So, let me get that slide deck up. There we go. All right, so thank you, everybody, for turning up today. I'm hoping to be fairly efficient because we've got a few project reviews and updates to go through. Skipping straight ahead to slide number 3, which is the, sorry, slide number 6, which is the TOC elections. Is it Dan or Chris who's going to walk us through what's happening there? Why don't I do it just briefly. So folks will recall that the charter put together two years ago lays out how the TOC is elected. And somewhat oddly, all of the original members were given three-year terms, but then they are supposed to have alternating, staggered one- and two-year terms going forward. And so we put together a schedule that makes that happen starting next year. 
But the key piece on slide 6 here is that Sam Lambert was elected by the end-user community back in October. And this year, the two positions that are up and need to be re-elected or replaced are the TOC-appointed TOC members. Just as a reminder: six are appointed by the governing board, one is appointed by the end-user community, and those seven elect the last two. Those two are Brian Grant and Solomon Hykes. And after they're selected, those two will draw straws, and one will serve for one year and one for two years. And then once those nine people are together, they will select the chairperson. So they could either re-elect Alexis or choose someone else. I believe right now that both Brian Grant and Solomon are interested in running again, or being re-nominated, but it's really just up to the other seven TOC members how they want to do it. And I'll stop there. Alexis, any questions or concerns about that? Okay. Only that it'll be good to keep people reminded of it as we get closer to the time, because I think it's important that everybody is aware and participates and so on. Yeah. I mean, the other very big date that's coming up is a year from now, January 29, 2019, when all six of the GB-selected, that is, governing-board-selected, TOC members are going to be either re-elected or replaced. And this will be, I guess, the second time but the last time in CNCF history that a majority will be replaced at once. They'll be selected, and then they'll draw straws, and half will have one-year terms and half will have two-year terms. And one of the ideas for the TOC contributor status, and for people contributing toward it, is to allow people to throw their hat in the ring and say that they'd like to be considered for this position. Okay. Anyone else have a question? Okay. All right, so I'm going to move on to slide number seven. 
So I wrote this today because I wanted to share one of the key topics of discussion that took place in Austin, between not only the members of the CNCF TOC who were present, but also quite a few other people. And I think this is just important to communicate to everybody. So I've laid it out in sequence, but forgive me; whoever's typing, please stop typing. So there's a lot of confusion around what it means to be a CNCF project at the moment, because we've got three tiers: inception, incubation, and graduation. And graduation is something that projects are going to do soon. But in particular, there's been confusion around inception, because a lot of very new projects have come in. And I believe that not every single person in the community necessarily feels that those projects are of equal merit. So this has caused a lot of complaints and concern. We originally created the inception level because there were quite a few areas of smaller projects where there was real demand for early-stage collaboration. And if the CNCF did not provide a venue for that and bring its capabilities to bear, then those projects would miss out, and we would also miss out on opportunities. But unfortunately, as a side effect, a perception has been created that inception projects are of equal status to incubation projects. I had many conversations with people, some of whom were, quite frankly, quite angry about the behavior of certain people and projects, often resting on incomplete understandings of the status of these inception projects. So that is an anti-pattern. It's causing discomfort within the community, and also broader confusion with non-CNCF projects, which is for me a big problem that we need to solve. 
I also don't want people to think that once you're an inception project, it's job done and everybody can congratulate you and give you $30 million of VC money or whatever it is you're looking for. I'm afraid that using inception to get VC money is also a bit of an anti-pattern. So there are some actions that we need to take about this. We really need to fix the perception. Dan, would you mind just giving us an update on your plans for the website, in relation to Dee, the new marketing person? Oh, just that we're going to separate out the 14 projects so that you can see which are graduated, incubated, and inception, and probably do, you know, sizes of logos kind of thing, graduated the biggest. I mean, there are much more ambitious efforts for the website underway. There's a whole redo of it. There's a bunch of marketing material that's going to explain how the 14 projects fit together, hopefully more soon, and the order you'd want to present them in. But the most immediate response to your concerns is, I think, just marking much more clearly what level each project is at. And what it means? Sure. I mean, it needs to say pretty clearly that inception projects may get pruned, because we expect some of them to fail. I mean, I think that is built into the system. Okay. And there is an annual review. I don't know how many people are aware of this; again, based on my conversations in Austin, many people are not. It's an annual review which can lead to an inception project essentially getting booted. Which Linkerd is doing today, by the way. Yep. So is there a different level of due diligence that needs to be done between those levels of projects? Should we better define the thoroughness by which a project is evaluated, and have criteria from one level to the next, so that eliminates the confusion? We do have that in the due diligence guidelines. 
There's a very clear delineation between the different levels of project and what the requirements are. Is that Quinton? I don't remember seeing that. I'll pull that back up. Yeah. Hello. It's Quinton. Hi. Hi. It's Erin. Okay. I'll pull that back up and look at it. I'll just have to refresh my memory. Thanks. Yeah. I mean, I think just in general, do have a look at the operating principles, the DD guidelines, and the graduation criteria. And by the way, if you can't find those things, that's also a problem, and you need to tell Dan and Chris that you're struggling to locate them on the website. That's been a problem with some of our materials, which have been created more or less on the fly. But we can sort it out now. We have a reasonably substantial backlog of tasks there. But yeah, do go there first. And if you feel that the documents on our processes are not clear enough, or just have the wrong criteria, it's quite okay, and a good thing, to bring that up in this context. That makes sense. I don't know how this correlates with your discussions, Alexis, but my perception is that the criteria are actually fairly clear. It's just that the marketing around being a CNCF project or not is not sufficiently clear on the fact that there are these different levels and what exactly they mean. So that sounds to me like where the problem lies. I hope it's just that. And I believe the intent is that with Dee, the new VP of marketing, or senior director of marketing, many of these issues will now be addressed. But, you know, at the risk of deluging these people with requests, do please send thoughts on this to Chris, Dan, and Dee, even at the risk of duplication. And can we also get a handle on some of the bad behavior that's happening? Because that to me is actually the fundamental problem. We should be allowed to have a big tent that has nuance to it. I think there's a lot of societal value in that, in terms of having more CNCF projects rather than fewer. 
But that does rely on a lack of bad behavior around the CNCF, people using it as a tool to bludgeon others. So could we understand where that bad behavior is coming from and rectify it? Well, I've got one, item number nine on the slide, which you may not be able to see, Brian, as you're probably dialed in from the streets. But, you know, there were some complaints about people kind of accidentally-on-purpose falling over into the world of marketing talk in their TOC presentations and due diligence documents, and really getting over their skis in terms of how they were describing their project. And, you know, that really is not necessary or desirable. It causes a lot of ill feeling, especially amongst other people whose projects may be more advanced or, you know, better in those areas. It feels like a land grab sometimes if you're not in the project that's in question. And this is one of the core issues that arose in what one of the TOC members amusingly calls storagegate. So we've asked a couple of the storage folks to dial it back a bit on that. And I think we need to remind people from time to time. Okay. At least, I mean, we had the same objections to those presentations when they came through, so at least it's something that we're all probably aware of. But I think we do need to continue to make it clear what we're looking for. I think we've seen the same thing with these kinds of grandiose claims, and the grandiose claims are counterproductive for a lot of reasons. So, thanks for the education. Yeah, and if you see anyone doing this kind of stuff, just let me and Chris and Dan know, and we'll have a quiet word with the people. 
What I found personally in these conversations is that often people aren't meaning to do this, or aren't realizing what they're actually doing, perhaps as much as we might think. You know, they're very enthusiastic about their project. They feel that they're in the spotlight for a key moment and they just get ahead of themselves. And the message back has simply been: look, now is not the time to do that. If your project is successful and you want to go market it, that's the time to do this. The TOC will vote based on facts if it can, or try to, and, you know, you're making that harder, not easier, and actually doing your cause damage. And also, because we're not kingmakers, we're not looking for the best. We're not looking for superlatives. We're not even necessarily looking for novelty. We're looking for something that is useful and pragmatic, has a functional community, and fits within our other goals. So I think maybe we should educate people about what we are and are not looking for; you don't need to be penicillin to be a CNCF project. Okay. One of the concerns that came up at the conference was that we did have all of the logos, including the inception projects, featured throughout the conference. I know we're going to fix that on the website. Are we planning on doing something about that with future conferences, etc.? Yeah. I mean, we'll definitely take that feedback, Camille. I think we could delineate the logos somehow to signify the difference between graduated, incubation, and inception projects. So we'll take that into consideration. Yeah. I think the levels of sponsorship of the CNCF are a good example where it's very clear. Everyone understands the difference between a platinum and a gold and a silver member. But I think we've done a less good job with the projects themselves. We're going through that exercise right now for the landscape and the website. 
So we'll definitely apply those lessons to the conference. Thank you. But not just the conference; everywhere we hear this feedback, we need to make changes. Okay. So a couple of other examples. I don't think we need to deal with this today, but there were some questions around, you know, what is the purpose and exit strategy for working groups? What are they actually doing? I think different working groups have different assumptions there. So that's a to-do item for us, I guess, soon. And in terms of why prospective projects actually exist at all: some of the projects that I think the TOC members most admire are the ones that have come out of organic problem-solving at end-user firms, such as Prometheus, originally at SoundCloud, Envoy, originally at Lyft, and Jaeger at Uber. We'd love to see more of those projects. And we realize that that is not the whole universe of projects, and being an end-user project doesn't mean you're therefore not a vendor project. But there is clearly some tendency toward projects which exist in order to provide justification for a vendor to exist, where there are no other people in the project yet. And I think that has caused a number of people's eyebrows to be raised, and we'd love to see a bit less of that. So if you are one of those projects, do please strive to come up with at least one other reason to exist. Otherwise, I think your prospects are probably going to dim slightly. Okay. Anyone else got any questions about this section? Then I'll move on to the other stuff quickly. I just have a quick comment. I have no idea what storagegate is, and so, coming in without any actual data points, it's hard for me to understand what exactly is meant by "don't be too marketing." And I understand the intent. 
I do think we'll need some very clear and specific guidelines for projects, because everyone is going to market themselves one way or the other, and they're going to be looking for cues on what we appreciate in both form and content. But I think we should not fall into the trap of mentally separating projects into those that are pure and those doing evil marketing. It's just a matter of: what do we like to hear, and what are we actually looking for beyond the words? I think we owe it to the projects to be very specific about that, because if we just say "don't be marketing," then we're going to open Pandora's box, I think. I agree. Hang on, let me give two examples. One example is Rook, where there was a lot of contention around whether or not Rook, quote unquote, could or could not work with a storage project other than Ceph. And there was so much confusion around this that I think the Rook guys felt pushed into saying, well, of course we can work with non-Ceph projects. And then there were other people saying, well, you can't, that's just lies. And actually, I'm not sure the Rook guys even wanted to say that, and I think the TOC didn't particularly feel they needed to say it either. And another example is around anything to do with data storage and high availability, where there are so many different edge cases. What we don't want to see is somebody basically boasting that their product can more or less do everything and, you know, bending the rules of science. Yeah. And I would say, just to add a little color to that, I think both Rook and Ceph are fundamentally sound technologies, where we ended up tripping on the fact that they were, not maliciously but inadvertently, maybe misrepresenting or over-representing the problems they were solving. And that's what we want to fix: we want the description of the problem being solved to match what's actually being solved. 
Because in both those cases, they're fundamentally good technologies. And as soon as we actually understood what they did and do, there was great clarity. It was in that misunderstanding that we got hung up. That's what we want to look out for. Okay. Thanks, Solomon. The next slide is actually aimed at you. So we've got this slide on project health. A few people have asked, how could I help be a better intermediary between project leads and the TOC and the CNCF generally? This is something, if you want to help with, please get in touch with Chris and me and Solomon. We're trying to get a gang of people together to help out with this a little bit. I mean, we're still not quite sure what we want to do, but I think we'd like to make the whole process a little more efficient and scalable, and give Chris a bit of support as well, because he's been doing all of it for the last year. One thing we had talked about previously was creating some documentation somewhere about what CNCF is doing for all the projects currently, so we have some sense of whether the needs of the projects are being met, by pairing up, you know, what did the project ask for and what are we actually doing for them. Is that underway? Chris, can you speak to that? Yeah, I mean, that kind of exists in current form if you go to the service desk. We have an idea of what projects have requested, where we are with status, and so on, in terms of what we offer. It's all on the service desk GitHub page. I know what we offer is now on the service desk, and that is awesome. But in terms of some of the things that we're doing for projects: I know there were monthly meetings discussing how things were going with projects, and I was actually totally unaware that that was happening. So I think some of the requests are captured within the service desk in terms of what we're actually working on for projects. 
But that may not cover some tasks, like me meeting monthly with projects to figure out what's going on, if that makes sense. So I think what you're asking for is something more than we currently have. Okay, so if a project, say, asks for governance help, then there should be a service desk ticket? Yeah, that's what we generally require now moving forward, and it's been working out pretty well. I think we have, you know, 40 or 50 tickets already in the service desk. So projects have been using it. I'm not sure if all projects are fully aware of it, but it's definitely being used, so hopefully this year it gets picked up more. Just one comment on the overall health check. In a different context, I've had a similar experience with hundreds of projects, where we needed to get a sense of which ones needed help and which ones were kind of fine, which sounds like the same problem here. And one thing that was quite successful, actually, was to put a very simple standard set of metrics together and have them presented. In that case, it was once a quarter; in our case, maybe it's once a year or less often than that. The metrics obviously don't tell the whole story, but at least they give you a basis for comparing projects against each other and saying, this one looks more healthy than that one, let's dig under the covers or not. That worked extremely well in some places. Thanks, Quinton. That's similar to what happens at the Apache Software Foundation, where projects are required, I think, to do a quarterly or annual report to the board. We could do something similar in CNCF, but I've kind of avoided that for now. I was hoping we could eventually go to a solution that's a little more automated, pulling from GitHub and other information, but I haven't fully got that solution baked yet. Yeah, I guess the main point is to make it, you know, not a fluffy report, but actually a set of metrics. Yep. Quinton, can I just make a quick pitch? 
I think everybody here is familiar with the DevStats work that CNCF has been funding. I just pasted in the link for Kubernetes DevStats at the bottom of Zoom, but if you scroll down to the bottom of that page, you can see that we're actually up to covering the first eight of our projects now, through containerd, and we're hoping to get the other six done in the next month or so. This is a ton of statistics pulled from GitHub in kind of an interesting way, and I won't go through it all, but maybe we could even have a TOC presentation on it in another month or two. But this is providing a bunch of statistics that does provide some health information. The other thing that I'll say is that CNCF is going to produce our first-ever annual report, on 2017, hopefully in the next month, which will provide a bunch of data about what CNCF is doing and also how our projects are going. Cool. Thank you. So yes, this is an area which is under development. And if you are interested in acting as a TOC Contributor, all capitalized, please let me and Chris and Dan and Solomon know; we're going to try and bring some folks together to do something useful here, including some of the things that were mentioned. It's going to be an ongoing thing. All right. So now on to slide 10. If we can try and do five to seven minutes for each of these projects, I think we're in great shape. So Brian, I think you are up, probably for most of them. Could you take us through SPIFFE? Yeah, let me pull up the slide here. So SPIFFE is all about service identity, or application identity. It is inspired by a system at Google called LOAS, the Low Overhead Authentication System. And what it's really trying to do is bring fine-grained authentication to cloud-native applications. Without something like SPIFFE, there are two patterns that we see. One is people not using authentication and authorization within their network and just relying on firewalls at the perimeter. 
And in many cloud-native scenarios with lots and lots of microservices running together in the same infrastructure, that doesn't really work so well. You also lose a lot of information about what the applications are doing. So it's not just security that you lose, but controls like rate limiting, observability, other things like that. The other pattern is people working out systems to provision credentials on an ad hoc basis for each application. Some applications have their own authentication and authorization mechanisms, especially if they were designed to be multi-tenant or multi-user services, but many, many do not. And then there's a real challenge to mint those credentials, to rotate them on a regular basis, to manage them in a secure fashion, to distribute them to all of the components that actually need to be able to access them, to audit who has access to them, and so on. So SPIFFE is a very early-stage project, but if you watched the GitHub pull request for the proposal at all, you may have seen that there was a large amount of enthusiasm from a broad variety of companies and people, including a number that I would consider user, or will-be user, entities. So not vendors, like cloud providers or the company that's actually developing SPIFFE, but companies that really see a need for this in their environments. There has been some amount of discussion on technical issues, and the SPIFFE and SPIRE team have made themselves available through office hours to answer questions. I guess at this time, I would like to know if there are any other questions and concerns. This would be an inception-level project, so I don't think there are claims about people using it in production yet, and I wouldn't expect that. 
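As a small illustration of the identity scheme SPIFFE standardizes, here is a minimal Python sketch of splitting a SPIFFE-style ID of the form spiffe://trust-domain/workload-path into its parts. This is not SPIRE code; the helper name and example ID are invented for illustration.

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str):
    """Split a SPIFFE ID into (trust domain, workload path).

    SPIFFE IDs take the form spiffe://<trust-domain>/<workload-path>,
    so every workload identity is scoped to the trust domain that
    issued it.
    """
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe":
        raise ValueError(f"not a SPIFFE ID: {spiffe_id}")
    if not parsed.netloc:
        raise ValueError("missing trust domain")
    return parsed.netloc, parsed.path

# Hypothetical workload identity, just to show the shape of the ID.
trust_domain, path = parse_spiffe_id("spiffe://example.org/billing/payments")
```

A platform would then issue and rotate short-lived credentials (SVIDs) bound to such IDs, which is exactly the minting, rotation, and distribution problem described above.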
But there has been quite a bit of development on it, and there have been two implementations of SPIFFE, one being SPIRE, which would be contributed along with the spec itself, and the other in Istio. Okay, what's your feeling about readiness to exit DD and start a vote, Brian? I think we have somewhat petered out on questions. Diligence-wise, I haven't heard any requests for information that is missing. So barring that, I wouldn't see a reason why it shouldn't go to a vote. Yeah, I agree. I think it's ready to go for a vote. Does any TOC member want to comment on that? So I just do want to point out one more thing, which is that with SPIFFE, and with OPA, which is next, one thing that's really important is interoperability. That's what really is going to make it shine. Both projects are integrating with a number of other projects, but in terms of how CNCF could help with that, we have seen a positive natural tendency of CNCF projects integrating with each other, and also things like the conformance certification effort initiated for Kubernetes. I think certification for something that needs to be a common convention, in order to achieve true interoperability, is going to be really important for a project like SPIFFE. So those are areas where CNCF could add a lot of value. Okay, I fully agree with the statement about interoperability. That's going to be very important. So unless anybody on the TOC objects, I think we can ask Chris to commence the voting process, with the proviso that you let everybody know that if you're a contributor, or really just in the community, this will be your last chance to scream out loud on the GitHub issue if you have objections or feel that we missed something. Cool. Will do. Thank you very much. Okay. Can we move on to OPA? Its sponsor is Ken. Hey, good morning. So OPA, similarly, has had some good conversation on the proposal. 
Torin and Tim are both going to respond to, I think, all the questions that were raised. Kind of similar to what Brian was just describing, there's a lot of interoperability going on with different projects at Netflix, and with some of the other projects that they're supporting and working with today. And so I think it's ready for the TOC to vote. I know the OPA team is supportive of being involved and heavily contributing and working with the community. And so from both the project standpoint and from my perspective, I think it's ready to go forward with the vote. And having a policy effort that the CNCF is putting forward for other projects and consumers to work with is, I think, a really strong level of maturity for us to attain. So I'm a big fan of going forward. Any questions? So previously there were some questions about what OPA actually is and how it works. I did post some links to some examples. I don't know if there are still questions about that, to help clarify in people's minds what OPA actually is or what role it fills. It does use a domain-specific language for configuring the rules, and they're actually making some changes to that language to make it more consistent with another language called the Common Expression Language. But a common pattern is that people build hard-coded rules with their own concrete schema, and those become more and more complicated and unwieldy. Kubernetes itself has even done that. That's usually how people start. OPA skips that and goes to the next step, to a more expressive policy language. It's not a Turing-complete language; there was some discussion about that. It's not a replacement for a full programming language; it's intended to be somewhat restricted. But I think those were the primary questions. It looks like they got answered on the pull request, though. 
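To make the contrast concrete, here is a minimal sketch, in Python rather than OPA's actual policy language, of the pattern being described: authorization rules expressed as data that a small generic evaluator walks, instead of hard-coded conditionals scattered through application code. The rule schema, roles, and paths here are all invented for illustration.

```python
# A toy policy engine: rules are data, not hard-coded conditionals.
# OPA generalizes this idea with a far more expressive language.
POLICY = [
    {"role": "admin",  "method": "*",   "path_prefix": "/"},
    {"role": "viewer", "method": "GET", "path_prefix": "/reports"},
]

def allowed(role: str, method: str, path: str) -> bool:
    """Return True if any policy rule permits this request."""
    for rule in POLICY:
        if rule["role"] != role:
            continue
        if rule["method"] not in ("*", method):
            continue
        if path.startswith(rule["path_prefix"]):
            return True
    return False
```

The point of the pattern is that adding or tightening a rule means editing the policy data, not redeploying hand-written authorization code, which is what the REST authorization appendix on the pull request demonstrates in OPA's own terms.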
Yeah, in the appendix; we added appendices A and B to the pull request, which give a REST authorization example and a cluster placement example as well. Okay. So once again, TOC show of hands: does anybody strongly object to proceeding to a vote? I see there's a remark from Kapil in the chat asking whether we have enough community and contributors. I think, Kapil, the answer is that for inception, OPA qualifies, although it is very, very early, but it wouldn't qualify for incubation. So, TOC show of hands: does anyone object to putting this to the vote stage? Nope. All right. Chris, can you initiate that as well, please? And again, we do want people in the community, folks like Kapil raising questions here, to get a chance to put in their last thoughts while the voting process is happening. So do please advertise that as well. Especially with these two projects, SPIFFE and OPA, I think that's particularly important because they're so new. Will do. Thank you. Thank you. Okay. I think Vitess is back to you, Brian. Yeah, Vitess first presented last spring, so I've had a while to think about it. I think the terminology we landed on was storage middleware. It orchestrates sharding and scaling for MySQL. I posted a link to a demo that was presented at KubeCon CloudNativeCon that shows Vitess in action. So you can do things like add replicas, and they automatically load their state and then receive load from clients. It handles failover, and effectively adds a control plane and a lot of instrumentation and monitoring to manage MySQL instances. It uses a topology server to keep track of that state, and stores the state in etcd; well, that's one of the key-value stores it can store state in. And clients interact with the control plane through gRPC. So there are already some ties to some other related projects there. It has quite a number of users; I think they were mentioned in the last presentation of Vitess to the TOC. 
And since it was first presented, it has had an increase in both production users and non-Google contributors. So, are there any further questions? I have one very good question, which is: did we decide that the terminology "data orchestration" was appropriate here or not? Was that a different thing? Or "storage orchestration"? I think at the time we discussed storage middleware, but yes, it effectively does orchestrate the configuration of MySQL instances. Can we use that terminology, or something using the word orchestration? Because if you call it storage middleware, isn't that the same phrase people used in the '80s for other things? Sure. And there's also some request routing and such that it does. But sure, I'm fine with that. One other quick question, Brian. I've heard it said, and this is not my opinion, but I've heard it said that relational databases like MySQL are kind of not considered cloud-native, and I would have thought that such a question might come up in the review. Did anyone raise that, and has it been addressed? We did discuss that earlier on. The way I view Vitess is that it's a bridge to cloud-native for people who started with SQL. But if we look at cloud-native attributes, like being able to operate in a containerized, dynamically scheduled environment, Vitess definitely does that; coping with failures and scaling, Vitess definitely does that. I mean, it holds all YouTube metadata, so it has been demonstrated to operate in production at very large scale, and it runs on Borg inside of Google, which is a very dynamic environment. So it's the orchestration part that makes it able to operate in that cloud-native fashion, consumed by cloud-native applications. And MySQL itself, if you relax some of the semantics, there's no reason why it can't be used by cloud-native applications. I would say cloud native is not only NoSQL; you just have to ensure that you actually achieve your cloud-native goals. 
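The sharding Vitess orchestrates can be illustrated with a toy version of range-based routing: hash the sharding key to a keyspace ID, then pick the shard whose range covers it. This is a deliberately simplified sketch (Vitess's real keyspace IDs, shard naming, and hash functions differ); the shard table and function names here are invented.

```python
import hashlib

# Toy shard table: each shard owns a half-open range of keyspace IDs.
# Real Vitess shard ranges are expressed similarly (e.g. "-40", "40-80").
SHARDS = [
    ("-40",   0x00, 0x40),
    ("40-80", 0x40, 0x80),
    ("80-c0", 0x80, 0xc0),
    ("c0-",   0xc0, 0x100),
]

def keyspace_id(sharding_key: str) -> int:
    # First byte of a hash of the key, giving a value in [0, 256).
    return hashlib.sha256(sharding_key.encode()).digest()[0]

def route(sharding_key: str) -> str:
    """Return the name of the shard that owns this key."""
    kid = keyspace_id(sharding_key)
    for name, lo, hi in SHARDS:
        if lo <= kid < hi:
            return name
    raise RuntimeError("unreachable: ranges cover the whole keyspace")
```

Because routing is a pure function of the key and the shard table, a resharding operation only has to split or merge ranges in the topology; clients keep asking the same question and get the new answer, which is the kind of operational work Vitess automates on top of plain MySQL.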
And I think if it can scale to YouTube levels, then scaling is probably not an obstacle for most users. Is there a way to update the proposal with specific use cases? I've found that useful in other due diligence, where the proposal listed very specific use cases. This one's very MySQL-centric, so maybe having that at the end would help clarify that there are other uses for it. What? I thought it was just for MySQL. It is just for MySQL, yes. What else were you looking for in terms of use cases, Erin? I guess I misunderstood, Brian. I thought you were indicating that it could be easily adapted to use something besides MySQL, and maybe that was my misunderstanding. No, I don't think it can. I've heard some discussion about maybe extending it to Postgres, but I'm not going to pretend there's been any substantial effort in that direction, as far as I know. I was one of the people who raised concerns offline about how much this matched the cloud-native charter, but I think Brian provided a great deal of evidence that it does. I think this is also a good example of something that is very useful while only really supporting one open-source product, MySQL. I'm supportive of moving forward with it at this point. To be clear, I agree both with that statement and with what Brian said. I think we just need to bear in mind that there's also a public-perception-of-the-CNCF aspect here, so we need to very clearly communicate why we acknowledge that superficially this may look kind of non-cloud-native, but here's why we believe it is, and have accepted it or evaluated it accordingly. I've got to say there's a lot of excitement from me around Vitess, certainly. As we've progressed on our journey to Kubernetes and we're looking at placing more storage in containers, Vitess is one of those pieces that's unlocking that for us, and it's a project that we're definitely adopting.
Sam, could you just remind folks who the "we" is there? Sorry, "we" is GitHub, and that would be to power the core storage for everything you see on the website: issues, pull requests, all the metadata that's essentially not Git. So on the Vitess website they mention Slack as an existing user, but under the self-hosting concept, I think GitHub seriously evaluating it or planning to move there is also a pretty strong statement. Yeah, we have the prototypes up and running, and we're looking at open-sourcing some of the abstractions we're putting in place back to Vitess; generally it's a strategic piece for us this year, so I'd be strongly in favor of moving forward. This is Sugu, the tech lead for Vitess. Thanks to GitHub; you guys have been making contributions too. Yeah, absolutely, and it fits in with some of the other tooling we've open-sourced around gh-ost and Orchestrator, starting to plumb together into a single kind of ecosystem for MySQL, enabling people to do more with stateful workloads in a way that's not necessarily massive RAIDed-up servers sat inside people's data centers, taking more of an it's-okay-to-lose-them approach. Hey, sorry to butt in, everybody; good conversation, but we've got 15 minutes left and three more of these to get through, so I'm going to call stop on this. Would Vitess be our first non-Go cloud-native project? I hope so. Yeah, it has some Go code in it. Okay, well, Vitess is primarily Go, sorry. We have Envoy as being C++, Chris is reminding us. Okay, cool. So once again, TOC objection, followed by Chris, yes or no? Any objections to proceeding to a vote from TOC members? So the one action item was to adopt the storage-orchestrator terminology, is that it? Yep, some kind of orchestration, data or storage, whatever you like; maybe data is better. Okay, thank you very much. Go for it, Chris. Vote commences. Cool.
And then we have Rook, which is Ben, who I think is on the call as well. Yep, I'm on the call. Folks hear me? Yep, okay. So, speaking of data or storage orchestration, Rook is a storage orchestrator. It's a controller, a control plane, for running storage systems on container orchestrators, focused on Kubernetes today. So Rook is not Ceph or Gluster or any of the other storage systems that it may orchestrate, but today it has a strong connection with Ceph, which was the first storage system it orchestrated. The team has been working to make it so that other storage systems could also be orchestrated. Why is Rook interesting? Many of these storage systems don't have control planes but are manually operated. And really, what makes things like Rook cloud-native, in fact the thing that I think makes something like Vitess cloud-native, is that they can be operated in a cloud-native way. So there's been a lot of discussion around Rook. The two big pieces, from my perspective, that are worth calling out: one, there's been a focus on whether we really wanted to see something like Rook expand beyond Ceph before we would consider it for CNCF. I think that discussion has mostly died down, and I don't think we need to require that. The other is whether Rook would end up being the only way in which storage systems would be orchestrated as CNCF projects, and I think, both per the Vitess example and from other discussions we've had, the answer is that if another project came up that wanted to orchestrate storage systems in different ways, the TOC would be interested and willing to review that and potentially even bring it into the CNCF. There have been a lot of discussions; I think the most recent due-diligence discussion has been chatting with Sage at Red Hat, who I believe is going to be committed to putting some resources on helping to grow Rook. Any other questions?
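The "control plane for manually operated storage systems" idea Ben describes is essentially the Kubernetes controller pattern: watch a declared spec, compare it with observed state, and act to converge the two. Here is a toy, dependency-free Go sketch of that reconcile loop; the `state` type and the replica-count actions are invented for illustration and are not Rook's actual API.

```go
package main

import "fmt"

// state is the cluster state reduced to a single replica count,
// standing in for the much richer spec/status a real operator tracks.
type state struct{ replicas int }

// reconcile compares the desired state against the observed state and
// returns the actions needed to converge, the way an operator such as
// Rook drives a storage system toward its declared spec.
func reconcile(desired, observed state) []string {
	var actions []string
	for observed.replicas < desired.replicas {
		actions = append(actions, "create replica")
		observed.replicas++
	}
	for observed.replicas > desired.replicas {
		actions = append(actions, "delete replica")
		observed.replicas--
	}
	return actions
}

func main() {
	// Operator wants 3 replicas; only 1 exists, so two creates are issued.
	fmt.Println(reconcile(state{replicas: 3}, state{replicas: 1}))
	// → [create replica create replica]
}
```

The loop is level-triggered: it looks only at the current gap between desired and observed, so a missed event or a crash-and-restart still converges to the same result, which is what makes storage safe to run this way on a dynamic scheduler.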
Ben, I missed the last part of the sentence: putting resources in to help do what? To help develop and build out Rook. Okay. Any other questions at this stage? What is your recommendation, Ben? Do you think this needs more time, more storage working group discussion, or are you recommending that the TOC consider initiating a vote? At this time, assuming the TOC doesn't have any concerns beyond the ones I addressed, I'm recommending we move to a vote. I am fine with moving to a vote. I do want to point out there seemed to be a perception among some in the storage working group, with respect to process, that the storage working group would make a recommendation before it came to a vote. I disagree with that as the process, or at least as a required process, but I think that needs to be worked out with the storage working group, definitely, and potentially the other working groups as well. Agreed. Yeah, this is the what-are-working-groups-for aspect, and I think the working groups have been doing good artifact creation and clearing up the landscape, but I think we haven't quite got clarity on this particular issue. We're running out of time, so does anyone else have any requests for Ben to postpone? My reaction to the last comment is that I think it's fair to investigate what working groups are for, but I think we should be careful not to stall any particular project while we do that; that would be unfair to the project. Yes, that's also my feeling on that one, Solomon. I agree with that. Very brief comment about bandwidth of the TOC: does the TOC feel it has the bandwidth to vote properly and in an educated fashion on this number of projects simultaneously, or do we want to stagger them slightly for that reason? I will space them out for my own sanity. Cool. Yeah, they've also been sitting in due diligence for quite a long time because of the conference season. Okay, good.
So, Chris, no objections to Rook going forward to a vote, I believe. Sounds good. So I'm going to say a few words about NATS. Derek has just done an update on the NATS document. I apologize that this is not in GitHub yet, but we've actually found the Google Docs format a little more useful for the stage of interaction we're at. So what are the issues here? I mean, personally, we've used NATS in anger and are extremely happy with it, and I'm very familiar with the space and think it's a great project. However, what I think we need to do in the due-diligence document is make sure that everybody else has an understanding of what messaging is for, because we keep running into people who still don't know that, what is unique about NATS, and also what trade-offs it makes that mean it might be better than other projects in some areas while skipping over things that they do in others. So, Derek, do you want to say a few words about your thoughts on next steps? I mean, I've got my own view, which is that we're very close to finishing the document. Sure, this is Derek. I think Chris pointed out early on in the doc review that the normal process is to file the GitHub PR, so unless there are major objections, we'll go ahead and do that to formalize the process Chris highlighted. To Alexis's point, though, some of the early feedback on the doc has been helpful for us, and I tried to interact with those comments this morning; Colin is also on the call and is on top of those. So for us the next steps will be to do the PR, get some of the comments inside the PR, and then at the subsequent meeting look at where we are at that time. I have a request, which is that I actually find the what-is-messaging issue somewhat surprising; maybe that issue does exist, I don't doubt that.
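For anyone hitting the same what-is-messaging question, the core distinction is delivery semantics: in pub/sub, every subscriber sees every message, while in a queue (a NATS queue group, for example) each message goes to exactly one of a set of competing consumers. A dependency-free Go sketch of the two semantics using channels; this is not NATS client code, and the names are invented for illustration.

```go
package main

import "fmt"

// publish fans a message out to every subscriber: pub/sub semantics,
// where each subscriber observes the full message stream.
func publish(subs []chan string, msg string) {
	for _, sub := range subs {
		sub <- msg
	}
}

// enqueue hands a message to a shared queue: queue semantics, where
// competing consumers each receive a given message exactly once.
func enqueue(q chan string, msg string) {
	q <- msg
}

func main() {
	// Pub/sub: both subscribers receive the same event.
	a := make(chan string, 1)
	b := make(chan string, 1)
	publish([]chan string{a, b}, "event-1")
	fmt.Println(<-a, <-b) // event-1 event-1

	// Queue: two jobs, two receives; each job is delivered once.
	q := make(chan string, 2)
	enqueue(q, "job-1")
	enqueue(q, "job-2")
	fmt.Println(<-q, <-q) // job-1 job-2
}
```

A stream then layers replayable, persisted history on top of either delivery model, which is roughly where the queue-versus-stream confusion the document addresses tends to come from.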
But it would help me if that what-is-messaging text, which is quite voluminous, were moved to an appendix, and the main proposal focused on things like why this is cloud-native. Would that be a hard thing to do? I mean, I think the key point is that it's trying to be cloud-native; it's explicitly not trying to do certain things that Derek has called enterprise messaging, and laying that out very, very clearly would be extremely helpful. The why-does-messaging-exist question, why it's different from networking and databases, unfortunately does come up with some people. There's also a ton of confusion about the difference between a stream and a queue and pub/sub and everything else. So I would like to see that remain in there, but I agree moving it to an appendix would be fine. Ben, do you have any comments on the what-is-messaging section? Because I know that was a Mesosphere request. You're asking me, Alexis? Ben Hindman. That was a Mesosphere request? You brought it up when we were in Austin; I think you said that some of your colleagues were quite keen to understand better the difference between NATS, Kafka, Rabbit, SQS, et cetera. Maybe I misunderstood. Yeah, no, I don't really remember bringing that up. Speaking personally, I've had many long discussions with people about the difference between NATS and these other things. So anyway, okay. So, Derek, we'll move it into GitHub with you. That sounds great. And please continue to give feedback there, everybody. When it's in GitHub, can you just tell everybody on the CNCF TOC list? Absolutely. Thank you very much indeed. Okay. So we've just got time for William and Chris to do the last section, which is the Linkerd inception review. Hey, William, are you there? Yeah, I'm here. Yeah, go for it; give the review and then we'll discuss. Okay. So I've tried to just put a whole bunch of facts up on the screen and keep the marketing stuff to a minimum, although that's quite difficult for me.
In this new lifestyle that I live. I think the most interesting stuff, from my perspective at least, is the set of companies that are using Linkerd in production. I've tried to list the ones who have publicly given evidence of that or publicly claimed it; there are a fair number more that haven't really spoken about it publicly but that we know are using it in prod. And then, although the development work on Linkerd is still primarily coming from Buoyant, we've had a fair amount coming from the community, especially recently, and I've put a couple of pull-request examples there of non-Buoyant work making it into Linkerd. The one I'll really point at is 1719, where SoundCloud originally contributed the SRV DNS record logic, and these are additions to and modifications of that code that SoundCloud is then reviewing, as a kind of semi-informal maintainership of that subsystem. And then, you know, it powers a human genome project; so if you love science, you love Linkerd. I had some questions about the non-Buoyant contributions. I took a look at that this morning, and it looked like three out of four of the non-trivial contributors in the last quarter were Buoyant, and in terms of members of the GitHub org, it looked like eight out of nine were Buoyant. Are there efforts to onboard more non-Buoyant contributors and maintainers? I would love for that to happen. Are there efforts to do that? Not really; it is a requirement, not for incubation, but for moving from incubation to graduated, to have, I think, maintainers from at least two organizations. Is that something the project needs help with? Yeah, that would be great. I would love to be in that situation. Absolutely. Okay.
And is investment in Linkerd from Buoyant, since they're still the overwhelmingly dominant contributors, changing at all with the investment in your new project? I don't think it's changing substantially. If you look at the changes that have been made to Linkerd recently, they're primarily bug fixes and less about adding new features, but I don't think that's related to the launch of Conduit; I think that's just where the project is in its life cycle. Okay, thanks. I think it'd also be useful to get a little more detail, maybe it's available somewhere, on those companies that are using Linkerd in production: for what exactly are they using it? Presumably Expedia is not, you know, booking flights with it, but maybe doing something else. It would be useful to get a little background on the size and shape of the projects using it. Yeah, some of that we know; for the companies up here, we mostly know what they're using it for. In some cases it's quite fundamental, and in some cases it's one component of a much larger company. But yeah, I'd be happy to tell you what we know. Cool. I guess in the interest of time, I'd ask that the TOC hold the vote on continuing Linkerd in inception on the mailing list, and go from there. Other than that, Alexis, I think we should close out the meeting, since we're one minute past. Sounds good to me. What is it, Brian? Go for it. Yeah, one question: are there specific incubation criteria that we believe Linkerd does not meet? I'd have to review them, but we could potentially either vote on continuing Linkerd in inception for another year, or combine that with a vote to move Linkerd to incubation. So I could work with William to make that decision, if that makes sense, Brian. Okay, sure.
Yeah, I think we need to get a bit more info, but in principle I think we've got a wrap. So thank you, everybody, for a detailed call today. See you next time. Cool. Take care, everyone. Bye-bye. Thanks, you too.