Good morning, everyone. Can anyone hear me? Hey, Quinton, I can hear you. Good morning. Yes, hello. Awesome, hi, Jared. So I guess we'll just wait one or two more minutes. And then, Jared, I think you're first on the agenda with your Rook presentation. So you can get ready to fire that up when you're ready. Yep, we'll be ready to go here, Quinton. So Erin is a contributor to Rook, right? Yeah, Erin has been in regular discussions for Rook. I can't remember if she's committed to the repo, but she's definitely been commenting a lot. Anybody know if she'll be here today? I'm not sure. No, she won't. Both she and Alex are not here today. So hence why I'm running things for a change. OK, Jared, I think we can get going. All right, sounds good. Let me go ahead and start sharing my screen here then. Try to get the right one here. Let's try to share Chrome here. All right, so can you see the slide deck here? Yes, I can see it. All right, well, let's go ahead and get started. So we were here two weeks ago at the last Storage SIG call, and we started the discussion around the progress and effort that the Rook project has around graduation in the CNCF. And today, we're going to be doing a little bit more formal of a presentation and discussion. So let's hop on into that and talk about what the expectations are here for today. So as I mentioned, this is a formal presentation. We have all of our ducks in a row for the criteria. We've done the legwork. We've gathered all the data. And we are ready to have a formal presentation and discussion. And we want to kick that off today here with the SIG. We will have time at the end for a Q&A session. We have two of the Rook maintainers here today. And actually, I think Blaine is on the line as well too. So I think there are a few Rook people here today to answer any questions that the SIG may have in terms of diligence. And then we can carry that conversation wherever it needs to go for the SIG and the requirements that we want to cover. And also a big thing we want to accomplish today is figuring out what, if any, are the remaining or lingering steps here before we can go ahead and call a vote for the TOC to formally vote on this. And a quick reminder that the goal here is that we will be able to accomplish and complete this graduation process before KubeCon Amsterdam, which is more than a month away now. So we have a little bit of runway. And this is actually probably more buffer than I normally give myself for things in life. So this is a good thing here. All right, so we'll do a quick introduction to the Rook project for anyone on the line that may have not been exposed to it too much before. So the whole goal of Rook is to be a cloud native storage orchestrator. And what that means is that there's a whole lot of automation around a ton of operational tasks for deploying and managing and scaling, et cetera, storage solutions, to help them integrate into Kubernetes cloud native environments. So we started with just support for Ceph, the distributed storage solution, but we have since created a generalized framework for adding and supporting many different types of storage solutions and storage providers that we'll get into more here. So you could think of Rook as a set of operators, but also a framework and a platform for building and integrating distributed storage systems into Kubernetes. We were first accepted into the sandbox stage back in January of 2018, so just about two years ago.
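To make the "set of operators" idea above a little more concrete, here is a minimal, hypothetical sketch of the kind of declarative custom resource such an operator reconciles. The type and field names are illustrative assumptions, not Rook's actual API: an administrator declares the desired storage cluster, and the operator works to make the running system match it.

```go
// Hypothetical example only: the general shape of a declarative storage
// cluster custom resource. These are not Rook's real API types.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// StorageClusterSpec is the desired state an administrator declares; the
// operator's job is to drive the real cluster toward this state.
type StorageClusterSpec struct {
	MonCount     int    `json:"monCount"`               // number of monitor (quorum) daemons to run
	UseAllNodes  bool   `json:"useAllNodes"`            // whether any node may host storage daemons
	DeviceFilter string `json:"deviceFilter,omitempty"` // which raw devices may be consumed
	StorageImage string `json:"storageImage"`           // image of the underlying storage system (e.g. Ceph)
}

// StorageClusterStatus is what the operator reports back after reconciling.
type StorageClusterStatus struct {
	Phase   string `json:"phase,omitempty"`
	Message string `json:"message,omitempty"`
}

// StorageCluster is the top-level custom resource an administrator creates.
type StorageCluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              StorageClusterSpec   `json:"spec"`
	Status            StorageClusterStatus `json:"status,omitempty"`
}
```

Rook's real CRDs for Ceph, EdgeFS, and the alpha providers are richer than this, but the pattern is the same: a declarative spec goes in, orchestration comes out, and the data path is left entirely to the storage system itself.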
And Rook was the very first storage project to be accepted into the CNCF. And at that time, the sponsor from the Technical Oversight Committee was Ben Hindman from Mesosphere. And then we proposed and were accepted into the incubation stage in September 2018. And now we are shooting for graduation by the end of March. And it wasn't entirely clear if we needed a formal sponsor on the TOC for the graduation process, but Saad Ali has confirmed that he will sponsor us or call the votes or do what is needed there. And we're very grateful to you, Saad. So, quick stats update here on the project. The big thing here to recognize is that almost every single community stat, project stat, et cetera, has grown at least two to three X during the time that we've been in incubation. So everything is up and to the right. The project continues to grow and attract more adopters, more contributors, more usage, et cetera. Most of them are two to three X, but the one that really calls out to me is the container download metric, which has gone up 10X since we were accepted into the incubation stage. So we've got some more things to call out here, just note the numbers are, you know, multiples of two to three X and a 10X, and then I will pass it off to Travis to go into a little bit more depth on some of the specific accomplishments that the project has had since incubation. Yep, thanks, Jared. Definitely exciting to see how up and to the right all of those stats are; the project has continued to move forward with more and more contributors, more and more adopters. So as far as what the project has been doing since incubation, we've had several releases, from 0.9 up to our most recent one, 1.2. We've been on roughly a quarterly cadence there, which means our next release, 1.3, will be coming up next month. We do have several storage providers. EdgeFS and Ceph are our two graduated storage providers, or we've declared them as stable in the CRs. There are others in alpha state, so Cassandra, NFS, YugabyteDB, and CockroachDB, which is missing from this list actually. So there are, yeah, six storage providers, and our goal is to continue progressing them, add more storage providers, and provide a place where people want to come for storage in Kubernetes. Our security audit was completed by Trail of Bits back in December, and we've got the report published for that now. We'll see it on the next slide. And yep, I think one issue was marked as critical; that was fixed quickly by the team, and we're continuing to follow up on the smaller items there. And lots of other features and improvements that we don't have time to really dive into today. Next slide. Yep, so here's a link to all these things that we're looking for in the graduation, checking the boxes here. So our formal proposal, Jared opened that PR last night. So there's, yep, there's the proposal doc. Comments are welcome, and I'm sure we'll get plenty of those. Yeah, so otherwise we updated our governance. So Rook has a steering committee now, comprised of three members: basically the graduated storage providers in Rook, Ceph and EdgeFS, have one member each, plus Jared as an independent member of that committee. Eight maintainers from across five different organizations. And then there's a link to the security audit and security disclosure process, and the CII badge passing criteria is at 100% now.
And then, I think the most exciting part of this, honestly, was the adopters that we have, and collecting that data and just hearing the excitement around, hey, this helps solve all of these problems for storage in production environments. And Jared will dive into some of that next. So yeah, back to you, Jared. Cool, yeah, so the big takeaway there is that with the graduation criteria v1.3, we have tried to be very thorough and call out every single item there and how we are in compliance with it, or how we have met or exceeded all of them. So all the details can be found in that formal proposal, the PR that we have opened in the TOC repo, PR 366. All those links are there on that slide. So I'm just gonna briefly talk about some of our production adopters and some of the key value that they shared with us for what they're finding from Rook. So this is just a slide here that you get when you put an engineer in charge of collecting and distributing graphics. So this is just a bunch of logos from some of our production adopters. Bit busy, and we'll move on. So I just want to dive into a couple of them here that I think have interesting stories to tell. And this is by far my favorite part of the process. I really enjoyed it for the incubation stage and now for graduation as well: connecting with our users and hearing about their stories and what they're finding to be very valuable and interesting and helpful from the Rook project. So I wanted to call out Calit2. This is the California Institute for Telecommunications and Information Technology, and they're interesting because they have been using Rook in production since 0.4, I think it was, way before we declared Rook stable and ready for production. Their whole objective and mission is to provide research environments for scientists and researchers, the research environment of the future. So part of their objective is to use new technology and to kind of drive innovation throughout a lot of different industries. So they've been using Rook in production since 0.4, across multiple upgrades. And this is one of the largest known Rook clusters running now, in the petabyte range with 170 storage nodes. So they've been a big supporter of Rook for a long time, and we're really happy to have them take this whole journey with us. I think that- Jared, quick question. What is the actual underlying storage there? Is it Ceph or something else? Oh yes, they're using Ceph as their storage provider. Okay, so part of my comment is actually, I've come across a lot of confusion. People think that Rook is actually a storage technology as opposed to something that manages storage technologies. And so one should be a little bit cognizant of making statements like "Rook clusters", et cetera. These are probably not Rook clusters, they're Ceph clusters managed by Rook, and the slide is misstating that somewhat. Yeah, that's completely correct, Quinton. Yeah, Rook is on the orchestration and control plane side and not on the data path. That's correct. Yes, Ceph would be the storage system and on the data path. So you're correct on that. Cool, yeah, I will try as best as possible to help solve that misunderstanding that a lot of people in the community have about Rook being a storage provider rather than an orchestration provider. Thank you. That's good feedback, Quinton. Thank you. Okay, yeah, and then another adopter I wanted to call out, because of the scale of their end users, is the Norwegian Labor and Welfare Administration.
This is the public welfare agency for the country of Norway, whose services reach end users across the entire population of Norway. So, for instance, one of the services that they're using Rook and Ceph for is digital document distribution for everyone in the country of Norway. And so roughly, excluding the people that have opted out of that service because they want to receive snail mail instead, I suppose, we're talking about four million users there in the country of Norway. So I think that scale, or that usage by a broad set of people across the entire country, was something that I was excited about. Replicated is another production adopter here, and they're interesting because they do a SaaS on-prem type of thing, where SaaS providers can bundle all of their services and everything they would need to be functional and have their services hosted on-premises into a single Kubernetes distribution with everything it needs there. And Replicated decided to make Rook the default add-on there for storage in those clusters, those distributions that software vendors want to use to ship their software and have it run on-premises and in air-gapped installation environments and things like that. So I thought that was an interesting story here, because it's not about how much storage or how big of a cluster Replicated is managing for their services, but the fact that they bundle it in as the default storage option for their own Kubernetes distribution for the multiple, multiple customers that they sell that product to. I thought that was very interesting. Yeah, we can keep going pretty fast here, but Discogs is another one; they're building one of the most comprehensive and largest online music databases and marketplaces in the world. And so they're servicing millions of users across the globe as well, and depending on Rook storage orchestration. And I believe that they're using Ceph as well, the stable Ceph storage provider. And one of the things we hear from people a lot too is how the capabilities provided by the orchestration services that Rook has are saving them time and money. That it's easier, it's faster, it's cheaper. That takes away a lot of headaches for storage administrators. So we see that being a common theme. Yeah, I think FinLeap Connect is another adopter that echoes this common theme that we hear too, besides being faster, easier, cheaper. Another common thing we see here is the dedication and commitment that the Rook community as a whole has provided to making sure that we are backwards compatible, or that an upgrade and migration process is always in place. So we originally thought we wouldn't really be providing a migration or upgrade path until we were beta or stable, but we've done that for every single release, where people have been able to run their clusters, keep things healthy, and not have any data loss or substantial downtime when they're upgrading across all these different versions, even while we were still alpha. So I was really happy with the community's dedication to making sure that critical systems keep running for adopters of Rook. And we've got a couple more here where we'll just try to call out a couple of different highlights. So the Center of Excellence for Next Generation Networks, they're up in Canada, and they've been involved since the alpha days as well.
So they're pretty pleased with the maturity of the project and where the orchestration services provided by Rook have gotten to so far. And they are users of some of the other storage options in Rook, such as EdgeFS and Cassandra, CockroachDB, et cetera. And they're one of the ones that are pleased to have those as options to augment the services provided by Ceph as a storage option in Rook. Let's see, and ABC I think was interesting because what they really wanted to share, and the story they said they love telling people about Rook, is that they have gone through multiple disaster scenarios, not on purpose, but they seem to have had some bad luck with hardware, data centers, outages, et cetera. And network issues, all sorts of things that Rook and its orchestration services for storage backends were able to handle, keeping things healthy and recovered with no data loss, et cetera. So these folks seem to be unlucky with some of the issues that they've run into hardware-wise and natural-disaster-wise, but they are very happy with the reliability and stability that they've gotten from services offered by the Rook project. And then the last one here, Geodata, I thought was interesting because they had evaluated Rook quite a while ago, I don't know exactly when, but it was very early on. And they then tried some other storage options in the cloud native ecosystem. And then just recently they did another revisit of Rook and are very pleased with the maturity and the progress that the Rook project has made since they first tried it very early on and didn't find it to be quite exactly what they were looking for. So I think, to me, that makes a statement about the project continuing to grow, continuing to mature and solve needs, and getting to a maturity level where people can really start taking a dependency on it. So that's- Are they using Ceph as well? Geodata, I don't know exactly which one they're using, actually. That one escapes me, and it's in our big spreadsheet. But of the two declared-stable storage providers, Ceph and EdgeFS, they're using one or the other. And I think it's probably Ceph, but I would have to look that up. Okay. And in Slack, they're definitely asking questions about Ceph, so I think that is correct. Cool. Okay, so that's all we had. So we wanted to open the floor to questions from the SIG here and get a discussion going, or we could take that offline and do that in other formats that you may want. But we are here to answer questions if you have them right now. So one question I had, just out of curiosity. Are these being used just as regular file systems where the apps just write to these files, or do they use some intermediary software like a database or a key value store, something that provides a higher level of abstraction? Yeah, good question, Sugu. A lot of times what we see is folks using raw block storage, and shared file systems as well, so that multiple applications can all write to that POSIX-compliant file system as a whole. So a lot of times you see direct access or direct consumption of the storage primitives; you know, object storage is another one as well.
But then you do see, I don't have a breakdown in terms of the numbers, but there are definitely people that are, you know, deploying databases and using persistent volumes that are surfaced by the storage providers in Rook to be able to, you know, have higher level storage systems that are writing to these lower level primitives that are provided by the Rook storage services. But it's pretty darn common for people to directly access file, block, and object. Cool. You do have operators for Cassandra and CockroachDB, right? In Rook. That's right, Shane. So are you expecting that, for people to use other databases, they should also provide an operator rather than directly consuming the persistent volumes? Yeah, well, I think, you know, in general, the approach that we take here is that we want the Rook project to be a valuable platform, so that if people, you know, the developers or creators of storage systems, want an easier story or an easier on-ramp to be able to integrate their storage systems into Kubernetes or cloud native environments, that's what Rook provides. So in terms of what users want, if you need storage and you want a database, or you want file, block, and object storage, or whatever you may want there, you know, we are happy for that to be coming from the orchestration services and the storage providers that Rook has integrated. But, you know, using other operators is a perfectly reasonable story as well. There are some pretty solid ones, like the MySQL operators, there are a couple of those, or a Postgres operator, whatever it may be. If you want to run platform services or data services in cluster, you know, it's totally reasonable to choose the best tool for the job. But what the Rook charter is all about is, you know, providing a home and a framework and reusable logic and processes, and a way for storage systems to not only integrate into Kubernetes but to also evolve and mature as well, and kind of follow that template and game plan that we accomplished with both Ceph and EdgeFS, to be able to have a reliable and stable offering within Kubernetes and, you know, in cluster. Did that answer your question, Shane? Yeah. Yeah, something I'll add to that too is that, I mean, Rook, at the end of the day, we are the management plane, as discussed earlier, which basically means we have an operator that manages a storage provider. So we have an operator that manages Ceph and an operator that manages EdgeFS. So a different operator at runtime manages each individual storage layer. So for somebody, you know, the scenarios are going to be so different. Oh, I need file and object or block. Well, then, you know, they'll use Ceph. Or, oh, if they want Cassandra, well, then they can run the Cassandra operator. And it's up to the admin to decide, you know, is this operator ready for production? Can I use it? Our Cassandra operator is still in alpha. We're working on, you know, progressing it so it's more production ready. So someone might choose, oh, it's not quite ready, let's get the community going there. And that's our goal, you know, get these progressing so people want to use each of these operators in production. And it's nice that they can independently, you know, have their own maturity and evolution as projects. You know, they each have their own alpha, beta, stable declaration independent of each other.
So, you know, it's Rook as a home for a set of, you know, common functionality or common implementations for storage providers in Kubernetes environments, but they, you know, have some of their own unique traits that they are bringing to solve their own unique use cases. But, you know, bringing them together in a single home makes it easier for them to be successful managing, deploying, and, you know, configuring storage in cloud-native environments. Thanks. If there are no other questions, I have two, but I want to make sure I open the floor for everyone else. Just one question, Kiran here. Jared, is there any information on the E2E coverage, or like the E2E infrastructure that you have in place in Rook, to validate the maturity of the storage engines or the application operators that are added? Yeah, Travis, do you want to talk about that one, and our integration testing and all that framework and platform we have? So, you're asking about the CI or, excuse me, test validation? So, you know, how do we know it's production ready? That's basically your question. Yeah. Right. Yeah, I mean, what we have in place in the CI is that, you know, for every PR, every master build, every release build, we run this suite of integration tests, which gives us confidence that, okay, the feature is working. It's definitely not a scale test, though. It's not a real scale test; it's not under stress, but it's basic functionality testing. We really get the production-ready testing from all the upstream community, people who, you know, have validated, it's working for my two or three petabyte cluster or whatever, so the community helps validate from the production perspective. And of course we do other tests before we release, to make sure that, you know, some things that aren't covered by automated tests are validated, because we want to ensure everything is working in production when we put out each release. So that's, yeah, does that answer the question? Yes, Travis, thank you. Yeah, and then one more note on that: that end-to-end, you know, E2E functionality that we have there, that's a common set of functionality as well. So, you know, any storage provider that is integrating into Kubernetes through the Rook project has, you know, all the platform and the infrastructure there, in the end-to-end CI flow, to be able to dynamically bring up an environment to run the storage, deploy it, and run some, you know, basic sanity testing or more comprehensive use cases. But all of that's, you know, common and something that the Rook framework offers to make it less of a burden to be able to do end-to-end testing for a new storage provider. They can take advantage of that commonality and that functionality that the testing part of the Rook framework offers. Okay, thank you. Just one more question if I can follow up. It's not specific to Rook, but it's something that I'm grappling with for OpenEBS as well. So, Ceph is GPL licensed. Similarly, what we use is under a different license. Do users have to have a different kind of mechanism to use Ceph, or have there been any questions or, like, notes that you had to address around that licensing? Yeah, from my perspective, I don't know that we've had many questions around that. I mean, since Rook is the management plane, Rook doesn't modify Ceph in any way. So we just pick up, you know, Ceph's Docker image, and then we can deploy that and work with that.
So since Rook doesn't modify Ceph in any way, maybe that separation has alleviated the licensing question. But I'm terrible at licensing, so maybe somebody else can answer that better. Thanks, Travis. Yeah, so I had two questions. Maybe I'll just ask them both in serial because I'm in a noisy environment here. So the first one is, in order to add new operators for new storage backends, how much of the end result is actually the new operator versus stuff provided by Ceph? So I mean, on the one extreme, Ceph is just a collection of operators. On the other extreme, Ceph does like everything, and you write a very small driver and Ceph does everything. So where do we sit on that spectrum? And the second question relates to kind of high availability and disaster recovery kind of stuff. As you pointed out earlier with some of your existing adopters, when you really, really, really need to rely on Ceph is when things are not so good, when you're having to restore backups and handle pretty bad situations sometimes. And so how resilient is Ceph, sorry, not Ceph, Rook itself, I might have used the word Ceph instead of Rook a few times there, my apologies. How resilient is Rook to these kinds of things? So what sort of storage backend does it have, for example, and how resilient is it to really bad situations in clusters, like network overloads and these kinds of things, which is when people really rely on Rook to keep their storage healthy? Yeah, thanks, Quinton. I'll go ahead and take the first question, and then I'll defer to Travis for the second question. So in terms of authoring or creating a new storage provider and then integrating it in with Rook, in full transparency, I would say that on that spectrum, it is more towards the side of having to do some unique work for that particular storage provider than I think we want to be at long-term. In the initial investment of the Rook framework, we took a lot of lessons from Ceph and refactored them into that common functionality, and some API types and utility functions and a testing framework and build and packaging and all that sort of stuff, and have that available for use by new storage providers. But then there's the side around, what does it take to manage your particular storage provider and go a little bit deeper into it? Like, the storage placement and selection stuff is common, but then there are operational tasks like failovers or doing backups and restores or maybe some policy stuff. I think that's where we need to make more of an investment going forward and continue to build on the common general framework that we've done. But currently, Quinton, the new storage providers do more of that themselves than I think we want long-term. So it's an ongoing investment there. Travis, do you want to talk about the resiliency and disaster recovery and stuff like that? Right, yeah, one more comment to add to that maybe. I mean, at the end of the day, a storage layer like Ceph or EdgeFS, they have very individual needs as far as how they're orchestrated. And so even with common refactoring or common helpers to create the operators, most of the time will be spent on what that operator, that storage provider, needs to orchestrate it. So it's not like we will have a common Rook API that lets you deploy anything someday; that's just not the goal. But yeah, any other questions on that before I go to the disaster recovery?
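As a rough illustration of the split Jared describes, the sketch below separates the framework side (a generic loop that feeds desired state to the operator) from the provider-specific side (the reconcile logic that knows how to deploy and operate that particular storage system, including tasks like failover). All of the names are hypothetical; this is not Rook's actual code, just the general shape under those assumptions.

```go
// Hypothetical sketch of the framework / provider split. Not Rook's real code.
package main

import "fmt"

// StorageCluster stands in for a provider's custom resource (desired state).
type StorageCluster struct {
	Name        string
	DesiredMons int
}

// Provider is what a new storage backend would implement: the orchestration
// logic specific to that system (deploying its daemons, failover, upgrades,
// backups and restores).
type Provider interface {
	Reconcile(cluster StorageCluster) error
}

// runOperator stands in for the shared framework pieces (CRD watching, node
// and device selection, build/packaging, the end-to-end test harness): it
// feeds desired state to the provider and logs failures for retry.
func runOperator(p Provider, clusters <-chan StorageCluster) {
	for c := range clusters {
		if err := p.Reconcile(c); err != nil {
			fmt.Println("reconcile failed, will retry:", err)
		}
	}
}

// exampleProvider is the provider-specific side: it decides what "healthy"
// means for its storage system and which operational tasks to run.
type exampleProvider struct{}

func (exampleProvider) Reconcile(c StorageCluster) error {
	fmt.Printf("ensuring %d monitors for cluster %q\n", c.DesiredMons, c.Name)
	return nil
}

func main() {
	updates := make(chan StorageCluster, 1)
	updates <- StorageCluster{Name: "demo", DesiredMons: 3}
	close(updates)
	runOperator(exampleProvider{}, updates)
}
```

As Jared notes above, today a new provider still writes most of that reconcile body itself; the longer-term goal is for more of it to move into the shared framework.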
Okay, so as far as resiliency, Rook builds on everything Kubernetes that we possibly can. We rely on Kubernetes for starting up and running pods; we don't run the containers ourselves, we create a deployment, which then manages the pod lifecycle for us, or a stateful set or whatever, whatever the Kubernetes resource is that will give us the reliability in that scenario, that's what we manage and orchestrate. So Rook itself implements, depending on the operator, a higher level sort of health check for the storage provider. Like, one place we have in Ceph is, well, Ceph has these mons, which are basically the brains of the system that need to maintain quorum at all times or else the storage platform is down. So Kubernetes will make sure that those daemons or those pods keep running and will restart them if they fail and things. But if somehow they get stuck, and the basic health check or liveness probe isn't catching it, well, the operator will manage that and say, oh, they really aren't responding, let me go fail it over and start a new one. So I mean, at the end of the day, we use everything Kubernetes we can and then add at a higher level where there's a need. There aren't very many places we do that as far as the higher level health checks, but in at least a couple of critical places that is what we do. Something too that I'd add on that is that the Rook project has been around for three years now, over three years now. And the lessons that we've gotten to learn from being exposed to a pretty large community running Rook in a very large variety of scenarios have been invaluable for increasing the robustness and the stability of the orchestration side itself, on that control plane side. It wouldn't have been possible to get to the amount of stability and reliability that we feel we have now without a very engaged community that has been happy to share their feedback and help us identify and isolate and fix some of the reliability issues that we've solved over the last three years. And then one more thing on that is that Rook not being on the data path has been a nice separation of concerns as well, where driving stability and robustness in the control plane and orchestration services that Rook provides is kind of an independent effort versus the maturity and stability of the storage providers and storage solutions themselves. Which is the nice thing for things like Ceph, which has been around for multiple, multiple years and gotten to independently increase and grow in maturity, in really high stress situations as well, and that can be leveraged by Rook without having to rewrite everything, with Rook focusing on just the control plane and orchestration side. Yeah, one more comment: even if Kubernetes itself fails and the whole cluster just goes sideways, we have guides, which require a lot of manual processes today, but since the data is persisted to disk, many times people are able to recover their data when they restore Kubernetes; the data's still there and they can bring their storage back up. Yeah, it's a little iffy if, like, etcd is gone and Kubernetes is gone, but we do make a best effort to help people recover even from that level of disaster. Oh, thank you very much for the answers. What I was actually sort of angling towards is, like, presumably Rook itself has a persistent store layer, and is that, for example, redundant and highly available, or is it a single instance SQL database? The operators themselves are actually stateless.
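To illustrate that stateless point, below is a rough client-go sketch of an operator persisting the small amount of bookkeeping it needs into a ConfigMap, so the data ends up in the Kubernetes API server (backed by etcd) rather than in a database owned by the operator. The function, ConfigMap name, and namespace are made up for illustration; this is not Rook's actual code.

```go
// Hypothetical sketch: a stateless operator saving its bookkeeping to a ConfigMap.
package operator

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// saveClusterState writes operator bookkeeping (here, monitor endpoints) into
// a ConfigMap. The Kubernetes API server persists it, so the operator pod can
// be killed and rescheduled at any time without losing this state.
func saveClusterState(ctx context.Context, monEndpoints string) error {
	cfg, err := rest.InClusterConfig() // assumes the operator runs inside the cluster
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "storage-cluster-state", // hypothetical name
			Namespace: "rook-system",           // hypothetical namespace
		},
		Data: map[string]string{"monEndpoints": monEndpoints},
	}
	_, err = client.CoreV1().ConfigMaps(cm.Namespace).Create(ctx, cm, metav1.CreateOptions{})
	return err
}
```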
Everything that they need to store at the management layer, they put basically into etcd through secrets or config maps or similar constructs. Okay. Yeah, there are. Perfect. Thank you. Okay, unless there are any further questions, I think we can call that a wrap. I'm gonna propose that we... Sorry, was there another question? Sure. Yeah, so I'm very new actually to storage. So my name is Dmitri, it cannot be seen from my name there. And I just started actually working at Hitachi. I was asked to integrate some kind of storage solution with a Kubernetes cluster. And so if I am to create, for example, some sort of test CSI driver, some kind of demo CSI driver that just writes the data to a file system, exposing a directory as a volume, for example, something trivial as a demo project essentially. And then if I'm to integrate this with Rook, how much effort would that be for me today? In terms of, is there a sample project that I can use right now to cut and paste and then modify so that I can integrate with Rook and Ceph, just as a demo version? How many lines of code would that be, roughly speaking? And considering that I know little about Rook and Ceph and the internals, how many days, for example, would that be for a competent software developer? Yeah, I'll take that one, Jared. Yes. Yeah, okay. Well, the first question I'd ask is, do you need the management plane, or do you need an operator to manage what you want to deploy? In the case of a CSI driver, you probably don't need an operator, but it depends, maybe you do. So if you need an operator, then, yeah, then you could come to Rook and it could deploy the CSI driver and whatever else you need for your storage layer. And it's hard to say. I think, like, Jared, when you created the CockroachDB operator, it was like a couple of weeks of effort, as I recall. But then it really depends on the needs of the storage layer. So I don't think there's a short answer to that question. Yeah, I don't actually have an answer to that either, because it's a little bit of a research project. And so in order for us to understand what we need, we need to try starting from a very basic sense, like creating a basic CSI driver, and then installing all of this inside of Kubernetes, and then possibly trying Rook. So the point is we don't know yet what we need, especially for me, who is very new to that; it's just been a couple of weeks since I was tasked to work on that, and that's not something I worked on before. So I might need it. So do I have to have Rook, for example? I mean, I joined a little bit late, so I'm sure that there are benefits of Rook that you guys mentioned that I missed. But I just want to estimate the amount of effort that's required to integrate, let's say, demo storage into Kubernetes using Rook. I'm going to suggest that we take that one offline, if that's okay. I think the Rook team probably needs a lot more information than we have time to impart now to give you a decent answer. So I'm going to suggest we take that to the Rook mailing list or the Rook Slack channel or whatever the team thinks is the most appropriate there. Is that okay? Yeah, yeah, sure, of course. Okay, great. So I just wanted to wrap up the Rook part of the discussion, and then I think Saad's got some updates for us on the Harbor due diligence that he did 10 days or so ago.
So I'm going to suggest, unless anyone has any alternative suggestions, that we, so we have four weeks roughly until KubeCon. What I would like to do is have anyone raise any concerns they have, obviously starting today. I haven't heard any concerns; I've heard questions, but no major concerns. I'd say let's give that another week, until next Wednesday. If there are no concerns, I think we will make sure that one or more of the TLs look over this, but I have not seen any holes in the due diligence that's been performed up to now. And then we call a vote, unless there are any major objections, in less than two weeks' time, just to give the TOC more than two weeks before KubeCon Europe to finalize the vote. Is that reasonable, Saad? Any objections to that? Yeah, that sounds great to me, Quinton. And I definitely really appreciate as well the attention to our desired timeline. That's really great that you're willing to work with that and hopefully try to help reach this goal that we have of being done by KubeCon Amsterdam. And we definitely have time to invest too, after this, to answer more questions and to address any concerns. Travis and I are very available and willing to engage and continue driving this. So thank you for all the time and effort that everyone has put into this so far. And we're very grateful for all of that. Thank you very much. Thank you, absolutely. And thanks again to the Rook team. You guys have done a really great job of crossing all the t's and dotting all the i's and making it very easy for the other people involved. Cool. I'm gonna have to drop off in a few minutes. I'm actually gonna hand over to Saad. Now Saad, are you in a good position to give us a quick update on the work you did on Harbor? Yep, I can talk about that. Awesome. And if you don't hear from me again, it's because I've dropped off the call. Thank you. Sounds good. So yeah, I was tasked with taking a look at Harbor. Harbor is a container registry. They are able to run on various different platforms, unlike a lot of existing container registries. They can be deployed on existing cloud providers, and they can be deployed on prem. And so the ask was for SIG Storage, the CNCF SIG Storage, to take a look at this from a storage perspective, since they're looking to graduate, and see if we had any concerns. So I took a look at it, and they have two storage dependencies. One is application data, and the second is their images and charts data. For their application data, they use storage classes and PVCs when deployed on Kubernetes, which is great. That means it's extensible and can leverage whatever storage the cluster administrator has set up. They also support object storage as an optional thing instead of storage classes or PVCs, and they provide a number of different object storage backends, Azure, GCS, S3, that users can use instead. And they're not required to use object storage; it's an option in addition to storage classes and PVCs. So no concerns there. For images and charts data, they depend on a PostgreSQL and Redis cluster to exist on the cluster somewhere. So actually, I believe Michael clarified that their deployment will actually create the database and the Redis cluster if one does not already exist. But if a customer wants it to be HA, they need to go and deploy it themselves. So I just wanted to call that out as something that could be an issue. I know in the past the TOC has raised concerns about kind of external project dependencies that are non-neutral. That's not a concern that I have.
I think the only concern I had was around deployment of these dependencies. And it looks like, at least out of the box, they have a basic deployment that could probably do better by making it easier to do some sort of HA deployment. And it looks like that is on their roadmap. And I think Dan Cohen jumped in and said, hey, maybe you should make the DB layer extensible, and that was something that Michael Michael said they might consider. So that was kind of my evaluation. I'm not sure what the next steps here are, but I think it goes back to the TOC, or we as a SIG make a recommendation. Any thoughts? I can speak to some of that. It would be best if the SIG made a recommendation in some way, preferably in writing; that would help. And it can be over in the PR. Well, you asked, and so I wanted to put boundaries on it, but that would be helpful. So what do you folks on the call think? Any concerns or objections to proceeding with an okay from the storage side? I share your concerns about the default deployment being non-highly available, particularly for a container registry. That's kind of problematic for a graduated project; for incubation, that would be totally fine. But for graduation, I think we just need to call it out very clearly. I suspect that may be a blocker for graduation from the TOC's point of view. Yeah, that's my sense of it. Okay. I think I'll put that concern down, and we'll go ahead and pass it on to the TOC and they can make their call. I mean, one counter-argument that might come up is that for a very long time Kubernetes was actually not highly available by default either. Yes. And I don't know to what extent that is solved today. Is the default deployment actually a highly available etcd cluster, versus backed by single volumes? I don't know if you know what the answer to that is. I think the nice thing about Kubernetes is it does allow you to deploy etcd as HA out of the box. Okay, that's what I was hoping. It wasn't always the case. I mean, you could do it manually, but it wasn't the easy way to do it. But if that's been resolved, yeah. I would say that we should make sure that it is easy for people to deploy Harbor in a highly available way, and it doesn't sound like that is the case. So if high availability is a concern, the corollary would be avoiding data loss, durability. There should be no data loss on node failure, right? So those probably go hand in hand. Yeah, I think those are two kind of separate concerns. I mean, one is does your container registry become unavailable, which today, by default, it does. And then the second part of the problem would be, does one lose data? But I think the first one is, on its own, a big enough concern. Which is actually, the second is a bigger problem: availability, you get it back eventually, but data loss is more painful. Any other questions or concerns regarding Harbor? If not, I'm happy to leave this to Saad to communicate with the TOC, make it clear what this restriction is, and make the decision there. I definitely would like that to be resolved in the near future, whether or not it graduates now, or whether we delay graduation until that has been resolved. Yeah, I'll update the existing issue with the concerns that we have. And I think that should be sufficient from the SIG Storage side. And then it'll be up to the TOC. Yeah, bearing in mind that a large amount of that will rely on you, Saad, because you're there and you're also representing storage. Yes.
Okay, any other comments or concerns about the Harbor review? All right, back to you, Quinton. Okay, that wraps up the agenda for today. Does anyone else have anything else they would like to add? All right, we will get the decisions that were made today documented. Please get your questions regarding Rook in within the next week, before next Wednesday. And then we will put that up for a vote, thereby giving three weeks before KubeCon. And similarly for Harbor, I think we can consider that due diligence essentially completed, and some of the issues have been raised. So if you have any others to raise, please do so in the next few days, because that will probably also go up for a vote before KubeCon. Thanks everyone. All right, thank you, take care.