the rest of the guys. I hope they'll join soon. In the meantime, let's get started. So over the last few weeks, we've had presentations from ChubaoFS and Longhorn. We've put the information together based on the questionnaire that we drafted for reviewing these sorts of projects. We have that and the presentations, which I think we're ready to submit to the TOC; I believe all the other questions and queries have been answered at this point. So unless anybody has any other queries or anything else they want to cover, we can forward them to the TOC, and they can be candidates for the TOC project presentation next week. Is everybody okay with that? Or does anybody have any strong opinions one way or the other? I'm fine. This is Luis. Oh, hi Luis. I'm fine with the two, I'll second or third it. Awesome. All right, so I'll put the facts together and send them to the TOC. Luis has done a brilliant job and helped coordinate a CNCF webinar opportunity that came up again next week, on the 20th of August, and we've put together a deck. So that will be going ahead. I will send out an email and a note to the Slack; we have a Zoom registration that we need to market to get some people to attend the webinar. We're going to be covering what the SIG is, some aspects of the white paper, and how storage is consumed in Kubernetes, and end off with some discussion around the survey, to try and encourage a few more people to fill it out. On the topic of the survey, we have relatively poor uptake on responses. I think there are about 15 at this point. This week I put a message in the CNCF general and Storage SIG Slack channels. Saad, have you shared this with the Kubernetes SIG? Did you ask Saad? Yeah, I think Saad said he was going to do it. But if somebody else wants to send it out, feel free. Do you want me to send it out to Saad's channel? Yeah, that would be perfectly fine. Yeah, I can do that. Cool. Thanks, Luis. No problem. Awesome.
That would be great. Maybe we can get a few more responses there. One thing is, I don't know that such a thing exists, but a user channel might be a more meaningful target, because I think we know what the storage people think already. Yeah, there's a Kubernetes users channel on Slack. I'm part of it; I can put it there. Okay, that sounds brilliant. Yeah, we've struggled engaging with the CNCF end user forum, in the sense that we got an invite to speak at one of their meetings, but it was poorly attended and we didn't really get much feedback from that. We're just looking for a way forward. If we don't get responses, we probably need to delay it and maybe launch it again at KubeCon or something like that, but there's only so much we can push it if we're not getting feedback. On the next point: we've had a bunch of discussion looking at a number of different content opportunities that we wanted to work on. Xing was going to put a little framework together for a paper that compares things like block stores, ephemeral disks, local disks, disks from storage systems, et cetera, just to clarify some of the options and terminology there. And we had also discussed writing a database section for the white paper. Sugu, is this something you can help with, and are there perhaps other people on the call, or people you can recommend, who could help contribute to the database section of the white paper? Sure, yeah. I think when you get into the functionality of block storage and databases, there's zero overlap in terms of what they do. Who's going to contribute the block storage part? Oh, so the block storage part, the block bit and the ephemeral disks, et cetera, that's something that Xing was going to be putting together. Okay, got it.
So we need somebody for the database section, because in the original CNCF landscape white paper we kind of said, look, there are key value stores and object stores and databases, and we had sufficient content and sufficient skills to cover object stores and key value stores, but we put databases out of scope for the first iteration of the white paper. So what we're looking to do now is take a step forward and actually author either a database section or a standalone database white paper. So we'd be looking for a bit of guidance there. Sure. Can you send me the links to what is already there? I can take a look and then see. Absolutely. I will put the link in the chat channel. Is this in the Kubernetes.io domain, or is it a different one? No, there's a link to it, and there's a PDF version of the white paper in the CNCF Storage SIG repo on GitHub. I've also just posted a link to it in the chat channel. Yeah, I think I've seen the document. I vaguely remember reading it. Right. Okay. So there's that, and then finally there's a third paper that we wanted to work on, which is a performance paper. One of the things we were talking about was that there seems to be a recurring theme and a recurring set of questions around how do I benchmark and how do I run performance tests on cloud native storage systems? A good starting point would probably be to document some of the tools and some of the easiest usage patterns that are available. And I'm putting my hand up and volunteering to help write that. I'm really interested to hear if there's anybody else who might be interested in working on that paper with me. I'm hopefully looking for maybe two or three people who can take up different parts of the document, if possible.
Alex, I'm not necessarily signing up to do it myself, but I'll look around VMware, where I work, and see if I can find somebody. Okay, that would be brilliant. One interesting thing is, you know, there are a lot of these benchmarking tools, but I think to do this right, we should probably approach it where we group the tools into categories: intended for block, file, object, etc. Yeah, exactly. So I was thinking that there are certainly things like fio that do just IO-type testing, but then there are also things like Sysbench and a variety of other tools that do database test workloads, and there's YCSB, which does key value workloads, for example. But what I wanted to do was capture not just these are tools you can run, but also maybe these are some of the gotchas that you should be aware of when comparing different systems. So, for example, things like the effects of caching and compression and replication and dedupe and a variety of other things, and how that affects the benchmark results, so that people can work towards an apples-to-apples comparison. Yeah, the other thing that I think should potentially factor into somebody making decisions is something to measure network bandwidth consumption, because solutions vary in their usage patterns of the network. And if you're taking your storage-related networking and sharing it with the same networking backing infrastructure that you use for your compute workloads, you can have surprises if you don't really understand what's going on. That's a very good point as well. So yeah, what sort of impact does the storage system have on the environment? Measuring CPU, memory utilization, and network, they're all important things to keep in mind too. Yep.
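As a concrete sketch of the kind of how-to content being discussed, a minimal fio job file might look like the following. The values here are illustrative placeholders, not tuning recommendations; note `direct=1`, which bypasses the page cache so results reflect the storage system rather than host RAM, one of the caching gotchas mentioned above.

```ini
; Illustrative fio job file -- sizes and depths are placeholders.
[global]
ioengine=libaio
direct=1          ; bypass the page cache so RAM doesn't mask storage performance
time_based=1
runtime=60

[randread-4k]
rw=randread       ; small random reads; use rw=write with bs=1m for sequential throughput
bs=4k
iodepth=16
size=1G
```

Run with `fio jobfile.fio`; comparing two storage classes then means running the identical job file against a volume from each, which is exactly the apples-to-apples discipline described above.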
So basically that's the general scope of the paper. I'm going to try and break it up into the relevant parts and add a few paragraphs over the next couple of weeks. But if other people are interested, we can maybe have another call and break up the work. So we have done benchmarks for Vitess, using both Sysbench and TPC-C. Right. But they were Vitess-specific, so I don't know if that counts, or how that can be incorporated into a generic one, especially since there are not that many database-type storage systems for Kubernetes yet. Yeah. Maybe I misunderstood, Alex, but I was thinking a good first step is more just to outline the tools available to do the tests, rather than summarizing test results. Right. Because the test results are a huge undertaking, especially since everything you'd go test is versioned, and people might question the results as they age. Completely. So yeah, just to be completely clear, this isn't about publishing test results. This is about publishing a how-to guide. If you have two systems in your environment, how do you compare them, what tools do you use to compare them, and how do you measure things? So I think, if you have a bunch of background on Sysbench, for example, there would be a huge amount of value in describing how to use Sysbench and some of the gotchas behind it, and how you test big scenarios and small scenarios and that sort of thing. That's the kind of thing we want to capture here. Sure. I will actually introduce you to the two people who ran the tests at PlanetScale. They can give you the details about how they did it. So that part I think would be relevant, right? That would be fantastic. That would be really useful.
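For reference, a typical Sysbench session against a MySQL instance follows a prepare/run/cleanup cycle. The command lines below are an illustrative sketch; the host, credentials, and sizes are placeholders, not recommendations.

```shell
# Illustrative only -- connection details and table sizes are placeholders.
# 1. Populate the test tables.
sysbench oltp_read_write --db-driver=mysql \
  --mysql-host=mysql.example.svc --mysql-user=sbtest --mysql-password=secret \
  --tables=10 --table-size=1000000 prepare

# 2. Run the mixed read/write workload for five minutes with 8 client threads.
sysbench oltp_read_write --db-driver=mysql \
  --mysql-host=mysql.example.svc --mysql-user=sbtest --mysql-password=secret \
  --tables=10 --table-size=1000000 --threads=8 --time=300 run

# 3. Drop the test tables.
sysbench oltp_read_write --db-driver=mysql \
  --mysql-host=mysql.example.svc --mysql-user=sbtest --mysql-password=secret \
  --tables=10 --table-size=1000000 cleanup
```

One of the gotchas mentioned above applies directly: if the total table size fits in the database buffer pool, the run measures memory rather than storage, so the dataset size relative to cache is part of the comparison.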
Any other takers? Okay. I think we can start with that. So one of the things we were thinking is that we're falling behind on some of the deliverables that we want to push forward, things like the papers we've just discussed. And I was hoping that we could agree on setting up a bit more of a useful agenda to move forward with some of these deliverables. Effectively, what I'm suggesting is that we have more regular meetings, perhaps once a week, to get a bit of a cadence on some of these deliverables. We keep two of the meetings as the general agenda and project presentation type meetings, as we have today, but also add another two meetings which can be more focused on discussing the papers, or actively working on the papers or the deliverables that we want to move forward. We found a weekly cadence to be much better when we were trying to put the white paper together, because it kept the momentum going and made sure that everybody delivered something every week. Are you guys up for this? Is this something we can look to do? Yeah, I think it's important not to get so far behind that we have this huge backlog, as we've seen happen on the TOC; getting ahead of it always puts us in a better position. But do you mean long-term, always have a weekly meeting, or just for now? Certainly for the next, I don't know, two or three months, so we can get some of these papers out of the way and have some good deliverables to publicize before KubeCon. Our mission, apart obviously from reviewing the projects and being an extension of some of the stuff that the TOC does, is also to educate and to help cloud native adoption.
So some of these things are equally important, and I think we've gotten quite a bit of publicity from things like the white paper. So trying to get some more of these documents out is important, in my book anyway. I can't frog-march people into meetings, but we do need to keep up some of the cadence. We can play it by ear, so if things are moving fast on their own, then we can slow down some of the meetings; that's okay too. Does anybody have any strong opinions one way or the other on this? No, I don't. I think if we can do what we need to do, I'm fine with it. Cool. All right. In that case, what I'm going to do is send out additional meeting requests, and I'll try and put an agenda into each one, so that people can be free to join if they want to work on the specific papers we're covering in those meetings. Before we go into any other items, we do have this thing about the selection of a logo. The CNCF marketing team have thrown some sample logos together to put some branding on SIG Storage. Some of them are a bit generic, and some of them have containers and some more storage-specific items in there. I know Sugu has given some feedback, and Steve gave some feedback this morning too. If there's anybody who has any strong opinions as to whether to do something or not, please shout; otherwise we're probably just going to pick one over the next couple of days and settle that. We might just put it in the Slack channel for some quick agreement, but it would really help if other people had opinions. I'm going to jump in and say that I don't like the spinning disk logo, because spinning disks are going to die soon. Yep.
Maybe an animal, like a magpie or a squirrel, because they hoard things. Yeah, yeah. I was actually just thinking that it might be nicer to have something that wasn't very enterprisey, or some abstract box. If it's an animal, it's got to be a kangaroo. A kangaroo. It's the only one with a pouch. Squirrels have pouches in their cheeks; that's where they hold all the nuts. Well, an elephant might be good just because they're purported to have long memories, but unfortunately, I guess Postgres already took it. Yeah. That's the hardest thing with animals. There are so many small companies out there and different projects that have got these animals already. You've got to find one that's not being used. But the niffler is a completely new animal, right? Something from Harry Potter and Fantastic Beasts. Yeah, like the niffler from Fantastic Beasts. Good idea. The trouble is those kinds of companies tend to sue you for using their logo. Wow, okay. Yeah, I kind of agree. I do like the idea of having some sort of animal. Maybe what I'll do is go back to Amy and see if she can get them to pick some sort of animal. I noticed the Security SIG have picked a logo with some sort of raccoon or something like that on it. So there you go. That might be the answer. Definitely the trend is to go with animals. I don't like any of these that have got containers on them. It's just too obvious. Yeah. And to be completely honest, I don't know that we want to restrict this specifically to containers or, I don't know, specifically storage, because it's supposed to cover everything from databases and APIs and all sorts of other things too. So yeah. All right. So I'll ask them to do some other options. Finally, then, are there any other things that we want to cover or discuss today? Any other ideas for papers or content that we want to do? Wow. People were definitely more vocal about the logos.
I think I'd like to start a paper, but maybe after KubeCon, on what type of applications should use what type of storage, things like that. Maybe suggestions, something like: if I want to use MySQL, these are the storage systems and why. Things like that. During a few weeks of these meetings while I was on vacation, Luis, I think you started or were planning to start such a document. Yeah, exactly right. I just feel that, having talked to a few customers, they're not as knowledgeable about storage as we are. So they don't know where to go. They know that they need MySQL, or that they need some database or etcd or something. They say, what do I put behind it? Do I use local disk? Do I use network storage? What do I do? Just something very generic. But again, I think after KubeCon, because I'm very busy at the moment, but something around those areas I'd like to start. I think that's an interesting idea: a generic paper about the pros and cons of the different types of storage for different applications. Yeah, that sounds like a really good idea. Simon, is that something you might want to start helping with now? And is anybody else interested in working on something like that now? I'm happy to help contribute to that, but I'm absolutely swamped at the moment. Again, after KubeCon is probably going to be a better time for me. Yeah, same for me. We're talking about KubeCon in December here. Yeah, yes. So I don't think we, as a working group, can defer for five months. We'd have to find other people then, and I think we need to start now. Well, I need to wait at least past August or maybe mid-September. I understand, but we have quite a few people here. I think we should get started on something earlier than that, just because if we only essentially start in the new year, that's a long way away.
So I can try and carve out some time to do some content review or content addition, if I can. I will try and do that. Yeah, and I guess conversely, if we don't feel like we have enough people who have enough time, we should go and solicit help outside of this group. If this group is too busy with other things to actually produce white papers, then we need to find people who do have the time. You know, if we could start on an outline, I think we could probably divide some of this stuff up a little bit better, like having people willing to take some subsections of it. All right, let's take this offline then. I can work with Simon and Brad, and then we can probably divide the work and conquer it. Okay. Yeah, for databases, we have done a lot of debating on this, and we have a few of what we call viable configurations. But I have a feeling that will be more or less the same for any type of storage, mainly because it's centered around durability. So I don't know if separating out a database section would be helpful here. Well, I think we're talking about two different things, right? The database portion of the white paper that we want to write is about the different database types, not whether it's local or mounted storage. No, it's to compare the different types of databases, kind of like how we have a few pages describing different types of key value stores, for example, or different types of object stores. So that can be a few pages to say, look, there are transactional databases and there are distributed databases and there are NoSQL databases. I'm just rambling a little bit, but to cover those kinds of options, and perhaps talk a bit about some of the complexities of sharding or whatever else.
But I think what Luis is talking about is more to say, okay, given a particular application use case, whether it's a database, or perhaps a message queue like Kafka, or something like that, what type of storage fits those application use cases. And that, I think, is actually a really, really good idea, because it takes the white paper a step forward. In the white paper we described the different attributes of a storage system; now what we're saying is, if we take these use cases or these specific applications, what type of attributes do they need? And that then helps you select which storage system to use. Yeah, exactly. And I feel that maybe, unlike the white paper, this will probably be a living document. I don't think it'll have an end; we'll start with one application and then add more and more as we go, because there are so many applications, right? And storage is changing as well. So as storage changes, and we get more features added through CSI or whatever, that's going to open up different applications to different types of storage backend. So it is definitely a living document. I wanted to clarify some terminology. I've personally always used the word application for things that end users use. Are we using the word here to really mean infrastructure: things like Kafka, etcd, whatever, what I've always referred to as infrastructure systems? That's a good point. I'm using the word application to mean the consumer of block or file or object. Okay, so without wanting to be too pedantic, I think we should use a different word than application for that, because application traditionally means the actual thing that users use. Yeah, and that concept will be very obvious to people.
So you're right that we need to make sure we don't make them think in the wrong direction, because to me, an application would be a database or something; it all depends where you're coming from. We can start it with a definition of what words we're going to use. We can use something like use case rather than application. I'm fine with that. And is there something in the CNCF, a definition of terms, that we maybe should be using to tie it all together, across SIGs? That's not a bad idea. I'm not aware of there being such a thing, but I don't think it's a bad idea for us to elevate it. We've published this white paper and we've got a bunch of terminology definitions in there, some of which may not be storage specific, in which case we might want to propose having a CNCF terminology, what would that be? A glossary of terms or something, where we can elevate some of these things, so that we're using common terminology across all of the SIGs and across the CNCF. Yeah, and I just want to ask one logistical question. We have a GitHub location, and the documentation that we're writing is separate from that GitHub location: it's in Google, probably under someone's Google Drive. When we write the paper, do you mind if we write it in Markdown format in GitHub? That way it's part of the repo and it stays in there. Or do we write it in Google Drive and just point to it? I think we should write it in Markdown. Yeah, that's what I'm suggesting. Yeah, I agree with that. I think that's a fairly well used model. One caveat is that often in the early stages of the document, when there are lots of comments and revisions and edits and changes, people generally seem to think that Google Docs is better for that part of it; then once it stabilizes and we say okay, this is v1.0 or whatever, we dump that into Markdown.
Then at the header of the document, usually people add a note saying: this document is closed for comments; the final version is in XYZ location in GitHub, and further minor edits can be PRs, etc. Yeah, that sounds fine. That way the output goes into GitHub. That makes perfect sense. Did we not do that with the white paper? Alex, I don't remember. I thought we did. You know, I did convert it to Markdown, and I think there might even be a PR that I never actually accepted. Okay. I can push that through. We intended to do that, Luis. We just never pushed the right buttons, it sounds like. It's all good. Excellent. This is great. We are on the same page. One other comment about... Sorry, carry on, Alex. Oh, I was just going to say: Pandoc for the win. It was the tool I ended up using to do the conversions. Oh, okay. One other thought around this. So, Luis, I think I understand your proposal, which is to take a bunch of what I'm going to call storage infrastructure systems, like etcd or MySQL or whatever it happens to be, and talk about good ways to put storage underneath them, and maybe bad ways. We also had a conversation a while ago about the next step, which was to solicit from cloud native users how they are using storage, successfully and otherwise. And there was a questionnaire and all that stuff. And I apologize, I've been out of the office for the best part of a month, so I might have missed some stuff. A slightly different approach to what I think you're proposing is to essentially say, these are some success and failure stories around cloud native storage, and try and structure it in such a way that people can read it as best and worst practices. I don't know if that's the same thing as what you're proposing, or different, or whether we need to choose one or the other, or we can do both. I'm not sure. What you're proposing sounds to me more like a blog, which would be a good story to read.
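For the record, the Google-Docs-to-Markdown path mentioned here is typically a two-step conversion; the file names below are placeholders, purely for illustration:

```shell
# Illustrative conversion -- file names are placeholders.
# Step 1: in Google Docs, File > Download > Microsoft Word (.docx)
# Step 2: convert to GitHub-flavored Markdown, pulling embedded images into ./media
pandoc whitepaper.docx -f docx -t gfm --extract-media=media -o whitepaper.md
```

The resulting Markdown and media directory can then be committed to the SIG repo, with further minor edits arriving as PRs, as described above.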
This is a fairly substantial document that I would see as much closer to our white paper in size and scope than a blog someone would read. I mean, I would have thought that the results from that questionnaire would actually feed really nicely into this document: these are what people are using, and then we can add our comments as to what we, as the SIG, think is the best way to do stuff with these applications, or whatever we're going to call them, and then we can see how that ties into the real-world cases of people saying, well, actually, we tried it your way and it didn't work that way. Those results could actually be really quite useful as an informational point to start this off. I totally agree, and I think, with some degree of skepticism, people view some of these white-paper-type things as being overly theoretical, or else marketing blurb by vendors. The common theme I've heard is that what people want is actual real-world things that have been done and either succeeded or failed. People seem to like that. That's a fair point. I say we use that information as a starting point: at least these are the applications that people talk about, and we can work from there. Yes, I was agreeing with you, just to be clear, not disagreeing. So, Alex, where did we land with the questionnaire, and have we got any further towards getting some useful responses to it? As I mentioned earlier, I shared the link to the questionnaire in the CNCF Slack, and Luis is going to share it with the Kubernetes SIG and the user channel for Kubernetes. But so far we've only got about 15 responses, so it's probably not super useful. It's better than nothing, I guess. Yeah, I guess so. All right, I guess at some point we have to give up and say that there's too much apathy to expect to get any more input.
A pity, given how much work went into crafting that questionnaire. If we could get some people to fill it in, I think it would be super valuable to everyone. Oh well. I'll try and pull together a quick summary of the output of the questionnaire. There might be some nuggets that allow us to focus on one or two use cases. Yeah, I mean, 15 is better than zero. Yeah, but I must admit, this conversation was good, but I'm not sure where we landed. So, if I'm reading this right, I think there were sort of two separate initiatives. One was to identify some use cases and go through some of the pros and cons about which storage systems to use for those use cases; that was what Luis and Simon were talking about. And then Quinton, you were talking about something more along the lines of documenting some real-life success stories, or indeed failures. Is that right? Did I get that correct? It does sound correct to me. Correct. Yeah, if I understood correctly, we were sort of considering combining the two in one document, if I'm not mistaken. That seems somewhat sensible to me, but if we want to have two separate documents, I can understand that too. What if we don't make this a big thing, and try and break it up into smaller things? I kind of like the idea that, if we have a bunch of use cases, we could have a bit more of a community-driven effort, where people can give experiences, but also the pros and cons of different storage systems for each of those use cases. So in fact, rather than having one big document, we could have a directory of markdown files in GitHub, one for each of the use cases, and different people contribute to each of those. I think that sounds perfect.
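A per-use-case layout along the lines proposed might look something like this; the repo path and file names are hypothetical, purely for illustration:

```text
sig-storage/
└── use-cases/
    ├── README.md        # overview, terminology, links to the white paper
    ├── TEMPLATE.md      # shared structure each use case follows
    ├── mysql.md
    ├── kafka.md
    └── cassandra.md
```

Each file stays small enough for one or two contributors to own, and community experiences arrive as PRs against individual files rather than against one monolithic document.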
And we might be able to get a broader community raising PRs and contributing information that way. Yeah. Yeah, makes sense to me. Absolutely. What do you think, Quinton? Yes, I agree. I think breaking it up into smaller pieces like that is a good strategy for getting contributors. I still think we will need an overall driver for this effort: someone who, for example, decides what those areas are that we want to publish papers on, then finds suitable experts to write the necessary stuff, and perhaps maintains an overarching cover document that says, these are the areas we're interested in, here are the links to the specific applications, or whatever that word is we use: here's how we think Kafka works well, here's how we think etcd can be successfully deployed, etc. Yeah, and maybe provide a few templates too, which can be consistent across use cases. Yeah, I agree with that. Okay. Yeah, if these meetings happen more often, then it'll be easier for us to propose a use case, and then we can go ahead and take care of those use cases. Luis, did you have a specific first target? Maybe that's a good way to start: figure out one of these, write that document, figure out what the structure looks like, how long it is, etc., and then say, here's a template for all the other ones, and these are the other, you know, six that we'd like to tackle next. Like, just picking out of the air, a Kafka one and a Cassandra one, right? Yeah, those sound like great candidates to me.
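One possible shape for the shared template mentioned here; the headings are a hypothetical starting point, not an agreed structure:

```markdown
# Use case: <system name, e.g. Kafka>

## Workload profile
Access pattern (sequential vs. random), block vs. file vs. object, typical IO sizes.

## Storage attributes that matter
Map to the white paper's attributes: durability, consistency, performance,
scalability, availability.

## Deployment options
Local disk / ephemeral disk / remote block / file / object, with pros and cons.

## Gotchas and anti-patterns
Things the community has seen fail in practice.

## Real-world experiences
Community-contributed success and failure stories (via PR).
```

Keeping every use-case file on the same skeleton makes them comparable side by side, which is what the consistency-across-use-cases comment above is after.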
Yeah, I think we can come up with a format. In terms of setting up infrastructure for it, we can start creating these; we can even start by creating issues. So by the next meeting, I think, I'll start sending out emails, but we can start coming up with a process for this so that everybody follows it. Why would we select, I mean, I know we're just picking those two out of the air, but should we define characteristics of those pieces that would lump in other applications, and I won't call them applications, other systems? I know that we need to bound it off something, but I think we also need to have a profile of what that system looks like, so that the use case makes sense for other applications beyond just these two picked out of the air, because we're going to be asked that, I'm sure. Yeah, that's a good point. I'm trying to figure out how to answer it, but I feel this is not a one-shot thing; this is a continuation, where we will continue to get use cases and continue creating suggestions. I'm trying to view this bottom up instead of top down: if I am not a storage expert, what do I want to see? I would like to see a location that gives me some kind of help for the application I'm looking for. So, okay. How about we do this? Because I guess that's the feedback here: we should pick the use cases with a purpose, right? And certainly it would be useful to start with something really simple, like a standalone MySQL or a standalone Postgres. The focus on that can be around how do we do performance, how do we do replication, and various other things. That can be a nice simple use case. But then each of the use cases we pick can actually explore one or more of the different attributes as we define them in the white paper.
So, for example, if we pick Kafka, that can be all about sequential throughput and perhaps some of the distributed nature. If we pick a NoSQL database like Cassandra, it can be all about data locality, having multiple copies of the data, and eventual-consistency questions — those kinds of discussions. We use each of those use cases to highlight one or more of those attributes, and we structure it that way because it then allows people to pick the appropriate storage system to match what their use case requires. Does that make sense? I'm thinking we should also look at using CNCF projects like Vitess and TiKV, and promote those, right? That's another consideration. Yeah, that's a good point. Even things like Prometheus, perhaps. It also makes the experts more accessible: we can go to those projects and ask how they recommend these things get deployed, more easily than with some other project we have a less close connection with. Yeah, I like the idea. And it's a good chance for us to collaborate, because I think some of these projects crave that interaction too, to learn how they should be doing persistence properly, instead of "we found this blog and that's how we set it up, and maybe it's totally wrong." Yeah, that sounds right. The other thing that crossed my mind, which I thought I'd just mention, is that for many of these — I'll call them storage systems — the best deployment differs substantially depending on the actual application running on top of them. So etcd in Kubernetes is typically backed by remote persistent disks, which have poor performance and various other characteristics, but for that specific use of etcd it kind of makes sense. One might argue not, but anyway, it is the way it is.
Whereas, if you want a very high-throughput one, you would deploy the storage part of it in a very different way. And the same applies, I'm sure, to Kafka and Cassandra and other storage systems: there is no one good way to deploy a given storage system; it depends on the application that's running on top of it. Does that make sense? That's a fair point, and that would be a caveat that has to be applied to anything we would suggest — it is a suggestion, not "this is the only way to do it." So, sorry Alex, very briefly to finish: I just want to make sure we don't end up with "this is the good way to deploy Kafka, this is the good way to deploy Cassandra," because I think there isn't such a thing. That's correct. So what I had in mind — I want to avoid trying to boil the ocean here, right? I want to avoid having to discuss every single permutation and then taking six months to come up with every use case, because that would be wrong too. There is value in getting something out quickly here. Again, just going back to a simple example: if we pick something like Postgres, we can talk about a simple Postgres install where the storage layer is doing some form of replication or high availability, but the natural evolution would be to say, okay, what if you have multiple slaves with Postgres, or multiple masters, or Postgres-level replication, and all of that sort of thing. I'm kind of hoping that if we do a template with one permutation of that use case, then the community can continue to feed back: oh, and by the way, if you want to use Postgres this way, here is another way of doing it. And we can grow that. So I don't think we need to have every single permutation.
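One way to picture the kind of permutation being discussed — a throughput-optimized versus a durability-optimized deployment of the same workload — is as two different Kubernetes StorageClasses that the workload's volume claims can select between. This is only an illustrative sketch: the provisioner names and the `replicas` parameter below are hypothetical placeholders, not a recommendation from the group.

```yaml
# Sketch only — provisioner names and parameters are hypothetical placeholders.
# Throughput-oriented: node-local SSD; fast, but the data is tied to one node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd-throughput
provisioner: example.com/local-ssd        # hypothetical local-volume provisioner
volumeBindingMode: WaitForFirstConsumer   # bind only once the pod is scheduled
reclaimPolicy: Delete
---
# Durability-oriented: remote replicated block storage; slower, survives node loss.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: replicated-remote-durable
provisioner: example.com/remote-block     # hypothetical networked-storage provisioner
parameters:
  replicas: "3"                           # replication handled by the storage layer
reclaimPolicy: Retain
```

An etcd or Postgres StatefulSet would then pick one class or the other via `storageClassName` in its `volumeClaimTemplates`, so the application manifest stays the same while the storage characteristics change underneath it.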
We can say: if you use Cassandra this way, or etcd this way, or Prometheus this way, then this is one way of doing it. And if this gets traction, then hopefully some of the people in those communities can also help to contribute. I think a good way to start the document would be to give a very high-level description of the different design patterns you can have for storage, do the pros and cons of those, and then drill down into the applications, or the more generic application types, and which design pattern would be a better fit for each. Simon, have you read the white paper that we already published? I skimmed it; I don't remember exactly what was in it. The only reason I ask is that I think it addresses, or is certainly intended to address, the high-level overview you mentioned. Okay, I'll go read it. Different kinds of storage, the pros and cons of local versus remote, distributed versus not, et cetera. And what I think we're contemplating now is the next step, which is: how does that apply? Yeah, I am remiss in not having read that white paper and seeing what's in it, because this may just be an extension of it. But to Simon's point, the white paper is very thorough, and I think it gives people a good education on what exists, but it doesn't necessarily cover — and I think you're right — the practical application of these options: what do those use cases look like from a pattern standpoint? Yeah, I'm not saying the new document should only be about patterns. It starts with the patterns, which reference back to the original white paper but in a more succinct way — so it's not as long as that white paper, just a couple of paragraphs or a page or so — and then you delve down into the use cases. Yeah, that makes sense. Okay. Just one last comment, if I may, on the question of multiple options per storage system versus one.
I'm still a bit hesitant to do the one-option thing, because people tend to fixate on that and think it's the way we're saying Cassandra should be deployed. I wonder if two isn't much more work, and it at least breaks the one-option barrier. What I'm thinking is that each of these papers — let's take etcd as an example, perhaps — starts off by saying: typically, one either optimizes for throughput or for durability. Here's an example of a good throughput-optimized deployment, and here's why; and here's an example of a good durability-optimized deployment, and here's why. I totally buy your argument, Alex, that we should not try to boil the ocean, and I definitely don't want to make this too big. But maybe two is constrained enough that we can at least get the message across that there's more than one way of doing these things, depending on what you're trying to achieve. Yeah, that sounds fine. And actually, there's probably no harm, if there are maybe five or six variations we can think of, in just listing them in the doc and saying these are the other alternatives that are coming next. Yeah, that sounds great. Cool. All right. Okay, so what I'll do then, if everybody is up for this: I'm not going to put in a meeting request for next week, because next week is slightly nuts — we've got the webinar and the TOC project presentation all in the same week. So I'll try and schedule it for the week after, and I'll put in the agenda the things we want to cover. Then we can start working on some of these items, and as appropriate, we can break off into a couple of working groups or schedule some other meetings to make sure we tackle this properly. Sounds great. Excellent. All right. Thanks, everyone.
Unfortunately, it's time. Was there anything else that we needed to cover before we close the call? Thank you, Alex. Thank you, Alex. Thank you. All right. Have a good day, everyone. Have a good day.