Hey, good morning everybody. Good morning. All right, 8 a.m. I think we'll give it until a few minutes after to make sure we get most of the regulars in here before we get started. Hey Ben, you out there? Yeah. I'm just going to share my screen, folks. Okay. All right, can folks see my screen? Yep. We've got a lighter agenda today. The one thing I did want to add to the agenda was future topics, which it sounds like people have started capturing for next week. Let's jump in. So, Clint, I think you added a couple of these. Yep. I have that deck that we created; I don't know if we want to bring that up and walk through it, but the agenda should pretty much line up with what we have so far in there. The first thing I wanted to open up was the Rook voting and the Vitess voting and the status of that right now. I counted last night, and out of the nine TOC members I think there were four with binding votes on Rook, and I think it needs to get to six. Is that accurate from your assessment, Ben? Yeah, I think two thirds is right, yep. There was a technicality around voting: the new email system was actually turning plus-ones into likes on the post. So Chris is following up on that. I saw Ken Owens just fix his, and Camille is about to fix hers. So that's what's going on there. Yeah, so it may have that six majority at this point. It looks like it. Okay, cool. So it sounds like we'll have some type of announcement one way or another on that pretty soon. That's good. Excellent. How about Vitess, Ben? Do you know anything about the Vitess schedule for voting? I think that one was invited, right? Yeah, I'm not sure exactly how Chris is doing it. Maybe he's trying not to run too many votes simultaneously, so he's just got two open right now, Rook and linkerd. That's my guess; I'm not sure if he has an algorithm. My understanding is that Vitess is next after Rook, which will likely be next week; he'll call the vote or start the thread next week. Cool. For anybody on the call, if you have any feedback you want to contribute to that process, please do check out the pull requests inside the CNCF TOC repo and add your comments and perspective there. One of the key things the TOC members have been asking of us as a group, and of us individually, is to help in the vetting process, and any perspective you can provide is welcome in those pull requests. So there's still an opportunity to do so. Any comments or anything else on the voting? That's pretty much all I wanted to cover for it. All right, that's enough on that. The next item on here is the white paper update. At the end of our last session, I think Mike Rubin asked me for an update on where we are with the existing white papers we had discussed working on. This is something the TOC has been discussing internally, and the consistent feedback from the TOC has been that what we're describing in the white papers, and the consensus we reached about what cloud-native storage could be, is actually aligned with how some of them are thinking about changing or updating the general cloud-native terminology itself.
If you look at the charter of the CNCF and you start reading the description of cloud-native, a lot of it is focused on microservices, and on a Kubernetes perspective from a little while ago. I think what's generally being discussed is, hey, that description of what cloud-native is probably needs to be updated, and it needs to serve as the foundation that any of these sub-papers are built on, whether that's cloud-native storage or serverless or cloud-native networking. So that's something the TOC really needs to work on and provide, so that we can actually start building on top of that perspective. The other thing is that the TOC understands they haven't been clear about expectations, whether for individual contributors or for the working groups themselves, and they're going to work on being more clear. For now, the consistent feedback from them is that we can have our meetings as the storage working group and discuss ecosystem topics and whatever else we want to. But in terms of the vetting process, the TOC is definitely asking for individual contributors to be involved, to help vet the projects and provide different perspectives on how they're going to be relevant or not. Generally for me, though, I feel good about what we did in those discussions and in some of those email threads. I thought we created a pretty good shared understanding of what we all thought cloud-native, or at least cloud-native storage, could be. So I'm happy with that work, because it definitely gave me a pretty good perspective, and I think we'll build from it and pick up the white papers whenever the TOC asks us to and has clear expectations. Any comments or feedback on that? Ben, are we good? Oh, no, that's great. I realized I needed to unmute. Yeah, no, that's great. Cool. Okay. So the next piece is a bit of a CSI update for everybody. I'm not representing the CSI project here, but I'm doing this as a little bit of an intro to the next topic. I put Rex Ray on the agenda today to give you an understanding of what the future of Rex Ray is, and it has a lot to do with the CSI project, so I thought it was important to do a quick CSI update first. That's what this is and why it's here. CSI is obviously in the category of cloud-native storage interoperability. The spec was tagged at 0.1 back in December. So thank you to the CSI orchestrator team and the community; there was a ton of work last year to make CSI happen and get it to that stable 0.1 tag. There are two CO implementations so far that are public, Kubernetes and Mesos, and they have public documentation that describes how to get those up and running. So that's excellent news: we've got some early implementations from COs. There are other implementations in progress too; I think the Cloud Foundry team also has some progress.
I'm not sure about the dates, but I'm sure we can get that info from Julian at some point. So, exciting work from a CO perspective. We've also got, well, not tons, but early plugins on the 0.1 side as well. There's a drivers page under Kubernetes that shows all the different drivers created as part of the Kubernetes CSI project, and Mesosphere has created their own initial CSI driver. So we've got working end-to-end implementations of plugins and COs, which is a great thing for this early phase of the CSI project. The next thing, in terms of an action item for anybody out there looking to get involved in storage in the cloud-native ecosystem: I think the biggest thing is going to be the face-to-face that's coming up. Actually, before that, there are monthly meetings for CSI, and you can find them through the CSI page on GitHub in our community. Please join there if you're interested in collaborating. But there's also a face-to-face coming up, and we have a link to the agenda in that deck. The face-to-face is going to cover topics such as what's going to be in CSI 0.2, and I think the most immediate and probably most important topics there are the CSI implementations themselves: how we can make them better, and how we can standardize on tooling and validation, et cetera. It's really critical stuff that's going to be discussed there, and I encourage you to join. That leads into the very next phase of CSI, which is making sure we can actually start developing these plugins in easy ways. All right, next slide, Ben. So what is CSI, for anybody who's new to it? On the left side of this diagram we had this environment where there were many integration points you would pursue if you wanted to be relevant to some of these cloud-native orchestrators. You had the Docker volume driver interface, which I think was the first one created. You've got DVDCLI, a CLI implementation of the Docker volume driver interface that Mesos used. You've got Kubernetes FlexVolume, and you've got Cloud Foundry, which actually implemented an early libStorage client for their interaction. So you really had four different ways to integrate storage across the COs, and that has all turned into one thing, which is the Container Storage Interface project. It's a great thing for the user experience in cloud-native, and it's really important for ensuring interoperability between COs and storage. Next slide, Ben. So how do you actually be relevant in CSI? From a simple perspective, we're connecting apps to storage; those are the black boxes. But in the middle there's the CSI interface, and there are two implementations of CSI that we focus on. One is the CO-side implementation, and the other is the plugin-side implementation, which is for the storage providers. Those are simply gRPC implementations. Next slide, please. So the idea is that we need everyone to create these drivers, but there is a lot of work involved in creating a great driver. And that's where Rex Ray comes into play.
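Before going into Rex Ray itself, here is what that plugin-side gRPC implementation amounts to in practice. This is a minimal sketch only, assuming the generated Go bindings published in the container-storage-interface/spec repository; the driver name and socket path are made up, the method set shown follows later revisions of the spec (the 0.1 spec discussed here differs slightly), and a real driver would also implement the Controller and Node services.

```go
// Minimal sketch of the plugin-side gRPC implementation of CSI.
// Assumes the generated Go bindings from the CSI spec repository;
// the driver name and socket path below are illustrative only.
package main

import (
	"context"
	"log"
	"net"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// identity implements the CSI Identity service, the smallest piece a plugin
// exposes; the Controller and Node services would be added the same way.
type identity struct{}

func (identity) GetPluginInfo(ctx context.Context, _ *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	return &csi.GetPluginInfoResponse{Name: "example.csi.driver", VendorVersion: "0.0.1"}, nil
}

func (identity) GetPluginCapabilities(ctx context.Context, _ *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
	return &csi.GetPluginCapabilitiesResponse{}, nil
}

func (identity) Probe(ctx context.Context, _ *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	return &csi.ProbeResponse{}, nil
}

func main() {
	// COs reach the plugin over a local socket; the path is an assumption.
	lis, err := net.Listen("unix", "/tmp/example-csi.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	csi.RegisterIdentityServer(srv, identity{})
	// csi.RegisterControllerServer(srv, ...) and csi.RegisterNodeServer(srv, ...)
	// would follow for a complete driver.
	log.Fatal(srv.Serve(lis))
}
```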
So Rex Ray is a cloud-native storage orchestration engine that's been around for a couple of years. Its inception was around the time of the Docker volume driver interface, and it moved forward following the ecosystem. Most recently, we've made changes to Rex Ray to architecturally align it as a CSI-native implementation. What that means is that the focus of Rex Ray is going to be providing value on top of any CSI drivers that are created, essentially as a middleware layer, but it should be transparent to consumers. Anybody using storage with any of the COs would be able to fire up a plugin or driver, and they may or may not even know that Rex Ray is running that driver, but it should make the experience great for them. For cluster providers and operators, it means that when you start a plugin or driver, you're going to have a great user or provider experience. The instructions for the COs and the packaging relevant to the COs are all going to be handled by Rex Ray, and it's going to be consistent across any of the storage platforms that Rex Ray packages as storage drivers. Then there's the relevance to storage projects and products. If you're a storage company out there, or you have a storage project, and you want to be relevant to CSI, Rex Ray is going to be the least-friction approach to creating a great CSI implementation. If you've created any of the Docker drivers or Flex plugins or anything like that, you've probably realized there's a lot of common code and common tooling that isn't reused but is duplicated across these different implementations. Whether it's that common tool set, common packaging and processes, or simply documentation, that's all redundant information we want to simplify and standardize to help create a better experience. So you get that, and there are also enterprise features built into the Rex Ray middleware, the Rex Ray framework, that it can provide to any of the CSI drivers that are created. The last point is that if you build a CSI driver and Rex Ray is packaging it, it also has an interoperability layer so that it can integrate against existing interfaces like the Docker volume driver. So: create a CSI driver, Rex Ray packages it, and it's relevant today for any COs with CSI implementations, and also, tomorrow or whenever, for any of the existing interfaces like the Docker volume driver interface. Rex Ray architecturally shifted to CSI about four months ago or so, even before the 0.1 tag of CSI, so we've been doing this for a bit. It's already got 15 CSI drivers; out of the 15, I think three are at the 0.1 stage, and the other 12 will be moved to 0.1 once we have the next release of Rex Ray. All right, Ben, next slide. So what does this look like from a consistent packaging perspective, and what's the target from our perspective? Well, today with CSI, everybody's going to create their own plugin, and how it gets packaged, where it gets shipped, and how it gets run: there's nothing in the specification that actually determines that.
One way to think about this is that the CSI spec is going to define just the direct interoperability between a storage platform and a CO, but it's not going to define the user experience, which is what's really needed to help this project be successful. And so from our perspective, this is a pretty good example of where we think things need to go. From a Docker volume driver perspective, it started out as: hey, everybody create their own process or app or tool or plugin or volume driver, and people are going to run it any way they want to. Then what we moved to after that with Docker was Docker managed plugins. This is where you took that process or tool and packaged it up as a container, and all of a sudden there was a standard way to deploy and run the plugin, and the user experience was much, much better. That's essentially where I think CSI has to go as well, and this is just showing what that looks like from a Docker perspective. On Docker Hub, on the left side there, you see that the Rex Ray repo, the Rex Ray org, has 12 or so managed Docker plugins that are all containerized and very, very easy to get running. Okay, next slide. So how do you actually create a Rex Ray driver? I hope it's been clear so far, but the only thing you actually have to do is create a CSI driver, because Rex Ray is a native CSI implementation and uses CSI drivers on the back end to talk to any storage platform. So if you're interested in this, or if you want to collaborate, the way to start down this path is to look at the GoCSI package, which has what we call interceptors (I'll describe those in a second) but also has a standard template for how you create a plugin that Rex Ray can communicate with. There's one simple requirement that makes it Rex Ray compatible, and that's all in the GoCSI package. Once you create a driver, it is not submitted to the Rex Ray code base at all; drivers are kept in separate repos. That might be rexray/<driver> if it's going to be part of the Rex Ray project, or it might be held within your project's own repo as something like csi-<my-storage-platform>. The packaging of the Rex Ray tool with your driver happens separately from the creation of the driver itself. Another key point here is that the Rex Ray architecture pre-CSI was focused on what we call libStorage. I think a handful of you are probably familiar with what that is. Essentially, libStorage had a similar goal to CSI in terms of creating a universal API, and all of the Rex Ray drivers in the past were libStorage drivers. As of three or four months ago, all of the drivers are being moved to native CSI drivers. All right, next slide. So how does it actually do this? How does this all work? I think this is a pretty simple visual depiction of it. In the middle there on the left, you've got the Rex Ray engine. It advertises this northbound incoming interface for Docker volume drivers, and at the same time for any of the CSI providers. And then the back-end communication to the storage platforms happens by way of the CSI driver.
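The other half of that picture, the CO side (and, in this architecture, the Rex Ray engine talking to its back-end driver), is just the matching gRPC client. Another minimal sketch under the same assumptions: the generated CSI bindings, an illustrative socket path, and message shapes that vary between spec versions.

```go
// Minimal sketch of the CO-side (or middleware-side) CSI client: dial the
// plugin's socket and call one of its services. Paths and names are
// illustrative only.
package main

import (
	"context"
	"log"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial the unix socket that the plugin from the earlier sketch listens on.
	conn, err := grpc.Dial("unix:///tmp/example-csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	id := csi.NewIdentityClient(conn)
	info, err := id.GetPluginInfo(context.Background(), &csi.GetPluginInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("talking to driver %s (version %s)", info.GetName(), info.GetVendorVersion())
}
```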
So in summary, Rex Ray is going to provide the common, easier experience, and it's going to package up any of the CSI drivers. It has a pretty well-tuned CI/CD process for publishing the actual artifacts in different places. And it's going to use middleware within gRPC to add value on top of any of the CSI drivers that are created. All right, next slide. So the purpose of it right now, where we're at with Rex Ray, is that we're really trying to support the CSI ecosystem. I think the CSI team solved a huge technical challenge in getting storage closer to applications and making sure the interop was better than it was before. It also solved a challenge for storage companies and storage platforms, because we only need to focus on one interface instead of having to pick and choose, divide our efforts, and end up with implementations that aren't as good. But there's still work to be done to make this good. Getting to CSI stability is really going to require that people use the plugins. For example, thinking about the Kubernetes world, there are a lot of in-tree plugins in Kubernetes right now, so why would I go use an alpha CSI plugin if the in-tree plugin works just fine? Getting people to actually start using these CSI plugins, which is a key part of maturing CSI and getting it adopted and stable, is going to require that people take that jump and use these new plugins instead of the in-tree plugins in Kubernetes. And to do that, we're going to have to make sure it's got a great user experience. So Rex Ray is setting us up for that. I think standardizing the implementation of the plugins is a key to helping the ecosystem mature and move forward. A key measure of success, for example in the Kubernetes ecosystem, is if someone decides to use an equivalent CSI driver instead of the in-tree Kubernetes driver; then I think we've done the right thing and we're moving in the right direction. Hey, Clint, just a question on this. It sounds like your goal is essentially to help adoption of CSI with Rex Ray. Is there any reason why this work doesn't go into the CSI project itself? I think that up to this point, the CSI project maintainers have discussed it and want to make sure the ecosystem grows around the project before they think about bringing things in. There are many things from a coding perspective that the CSI project would be interested in, and I think the short term of what they described in the roadmap is the validation tooling, not necessarily a middleware layer like this. One way to differentiate it is: what CSI as a project is going to bring in are things that are very specific to the specification and abstract of the COs. I think that's clear from their intentions. Something like Rex Ray is going to be the layer on top, which helps standardize the user experience side of how you actually consume these drivers with the COs. Does that make sense? To just add to that a little bit, that's exactly right.
We wanted to keep the spec as minimal as possible initially, especially while we're pre-1.0, and not add libraries and additional packaging requirements. We want those to emerge naturally. Rex Ray is a great example of something that's emerging naturally, and we don't want to pick a winner here. Then once the project matures and there are go-to libraries that everybody is using, we can consider pulling those into the CSI project itself. And that's actually kind of what I'm asking about. If Rex Ray emerges as the best packaging for CSI, then doesn't it eventually become part of the CSI project? I think, if I think about the next couple of years, you're going to have CSI, and you're going to have the more strict, direct implementations of CSI as these libraries. And then you're also going to need some type of tooling that provides a layer of innovation, which moves somewhat separately from CSI, because there are going to be things you want to add that provide value to these drivers where CSI is going to say, hey, I don't know if that belongs or not; let's see how it goes and whether the community cares before we actually bring it into the spec. As an example, one of those things is encryption. One of the things Rex Ray is going to provide to any driver is that once you advertise a device, it'll advertise a shadow device via dm-crypt, and then you have in-place encryption for any of your CSI drivers. That may be something the CSI spec could call out in the future, but I don't think so, because it's really just an interface. So I think there's going to be this give and take as innovation moves forward, where we're going to want to do things that add value on top of CSI, and sometimes those things are going to end up in the CSI spec, maybe as additional methods or maybe as code, but other times they're going to live long term outside of it. So I see a world where there's definitely a need to enhance and augment and contribute things to the CSI spec, but sometimes things don't belong there and should be a little bit abstract of it, and I think that's where Rex Ray is going to play. I think this is really cool stuff, because CSI in a way is the primitives of a young system, and anything we can add on top really helps. So, cool. All right, Clint, could you talk about what the middleware does in the context of Rex Ray? Yeah, so gRPC has these interceptors. When you're building a plugin, a CSI driver, if you want to build a good one you're going to do the stuff we always do: you're going to add your logging, your authorization, your authentication, the things you just typically build in, and it takes effort to do them the right way. So there's an ability within gRPC to add interceptors, and the interceptors are where we're going to augment and enhance the CSI drivers.
So when we list out some of the things we're thinking about on the roadmap slide, a lot of those are going to come to fruition just by injecting interceptors. You create a native CSI driver that's focused on implementing the core features of your storage platform, your CRUD operations and your orchestration operations, and then these interceptors come in externally to add value on top. So basically it's a way to augment service operations with extra fields or commands? Yeah. The simple way to put it is: how can we make logging standard across the drivers? One way is just to add an interceptor for logging, and all of a sudden you can add context IDs and things that are valuable for tracing operations. In gRPC it's a great way to easily bring in these core things that make your implementation better. And Clint, just to step back for a second: I realize there's been a huge effort in keeping CSI 0.1 fairly well scoped so that we can ship it, but do you see, for example, interceptors being a feature of CSI in the future? If it's valuable to the ecosystem, then it's valuable to CSI. Yeah, exactly. I think that's how you implement some of this extra value, and logging, for example, is something that may be contributed to CSI. We've got this GoCSI package, and that's where some of these interceptors live, and that kind of thing would be valuable to everybody; it's core to creating a great plugin regardless of your perspective. That's an example of something we'd introduce to the CSI project and see what the response is. There are other things we do in interceptors as well which may not be the same, maybe authorization or authentication; there's a list of other things we can do in a simpler way that may not be valid things to submit to the CSI project. But the middleware and the interceptors are how we do this without changing each of the CSI drivers, keeping those drivers very native to the storage platforms. Okay. I guess my concern, and maybe it's unfounded at this point, is that there's value in having one point of interop, one agreed layer that people build drivers to, and I worry that if we have two levels of it the message gets diluted. I hear you. I just think that over time we're going to need somewhere to be very innovative and somewhere to test whether people care about some of these things before they make it down into the spec. So Rex Ray is like CSI Labs? It can be. Yeah, I think that's one way you could think about it. The way I like to think about it is that the things CSI ultimately dictates are the specification, the protocol for a cluster orchestrator and a volume plugin to interact. Those are the only things it dictates. Everything else around it, how you make that interface exist, CSI will never dictate. It may suggest it, it may recommend it, but it'll never dictate it. So while we're starting the project, we're just focusing on what exactly that interface should look like.
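Picking up the logging example from a moment ago, here is what the gRPC interceptor pattern looks like in Go. This is a generic sketch of the mechanism, not GoCSI's actual implementation; it tags each call with a correlation ID (using the google/uuid package as an assumption) and logs the method, duration, and error without touching the driver's own methods.

```go
// A generic gRPC unary server interceptor illustrating the pattern described
// above; it is not the GoCSI implementation, just the plain gRPC mechanism.
package middleware

import (
	"context"
	"log"
	"time"

	"github.com/google/uuid"
	"google.golang.org/grpc"
)

// LoggingInterceptor wraps every unary call to a plugin's gRPC server.
// Attach it when the server is built:
//   grpc.NewServer(grpc.UnaryInterceptor(LoggingInterceptor))
func LoggingInterceptor(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	reqID := uuid.New().String() // correlation ID for tracing a single call
	start := time.Now()

	resp, err := handler(ctx, req) // run the driver's actual CSI method

	log.Printf("req=%s method=%s dur=%s err=%v",
		reqID, info.FullMethod, time.Since(start), err)
	return resp, err
}
```

Authorization, authentication, or the dm-crypt step mentioned earlier could be layered the same way, each as its own interceptor, which is why the drivers themselves can stay native to the storage platforms.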
We've purposefully avoided defining what the packaging should look like, since that can differ from CO to CO, and defining what logging, authorization, and all those things are going to look like. If you go to the Kubernetes project, you'll see we recommended one way to deploy it on Kubernetes, but we want the how-to to emerge naturally, and Rex Ray is one way to do that. Ultimately, if it turns out there's a very common, standard way that folks are creating CSI volume plugins, then it may make sense to pull that into the CSI project itself, as in: here is some recommended packaging you can use, but you don't have to. All you have to do to create a compatible CSI driver is create something that implements the interface; it doesn't matter how. You can use this optional tooling if you want to, but you don't have to. Yeah, the one thing I'm tripped up on is: if Rex Ray is a CNCF project, do we start thinking about common sets of packaging or a common interface for packaging on CSI? That's the part; I completely understand what you said about not mandating all the critical pieces in core CSI. Yeah, but I think this is in the context of Rex Ray as a CNCF project. Yeah, that's right. Hi, actually this is Chakri. So after developing some drivers, as part of the Kubernetes CSI effort, I realized that tooling like this will really help, because there's a lot of duplication. If every vendor has to go do all this themselves, there's a lot of common code that could be avoided. And some storage systems might want to implement only a few of the APIs and could leverage something like this; that would really help them. Cool. So I guess the question for you, Clint, is: would you be interested in donating parts of Rex Ray at some point in the future to the CSI project? Yeah, yeah, absolutely. We're open to that discussion. I consider GoCSI kind of a part of the Rex Ray project, and that's a great example of something we've tried to keep strict and specific to the CSI interface itself. It's that kind of thing, or parts of it, that we'd be very interested in contributing. Cool. It feels like a case of batteries included but not required. It would be great if CSI didn't mandate a packaging story but the CSI project had a couple of suggestions, maybe even implementations, of packaging that could help people get started. Yep, totally agree. So I introduced this to say that I didn't want to have a whole CSI discussion here, but Rex Ray is largely focused on it, and that's why I was talking about it. I encourage you to join the face-to-face, and to vote on when that face-to-face will be, because I think this kind of discussion will carry on there and get pretty lively. So, good stuff. Let's go to the next slide. Great. Okay. All right, so the roadmap. What are we thinking about? One, we want to get it contributed to a foundation. One of the challenges we've had with the project over the last couple of years is really the collaboration from other storage companies. It's unfortunate, but the storage ecosystem tends to be very competitive, and the project itself has lived under EMC {code}, the codedellemc org, and the {code} team.
We'd love to get it to a foundation, because I think once we get there we'll have more collaborators working on it together. So that's one of our goals for the year: to increase collaboration and get more folks involved. As part of the 0.12 and 1.x releases coming up, number one, and the biggest thing, is being 100% CSI 0.1 compatible. The current Rex Ray release is against the pre-0.1 tag, so there are some small changes to get that up to date. Once that happens, all 13 or so Rex Ray drivers are CSI compatible right away. So that's number one. We'll continue to provide the interop capability between those CSI drivers and the existing interfaces: Docker, Cloud Foundry, Kubernetes FlexVolume, Mesos, et cetera. That's still going to be in there. But when we get to the enterprise user experience, I'm actually really interested to hear feedback, maybe separately rather than here on the call, if you're interested in getting engaged: what do enterprises, the people who are actually going to use this stuff, care about, and what do we need to do to make this a great user experience? The first thing we thought about was deployment: for any of the COs, simple and consistent deployment and management of these plugins. The second is security and credential integration. If we're configuring these plugins and we want to store credentials, or we're asking for sensitive credentials, we've got to use something to store those, whether the CO provides it or a future CSI API does, who knows. For right now, we just need to make sure we have external integration through something like Vault to store those sensitive credentials. We also need tracing, logging, and metrics integration, so that we can record all the events and provide visibility into what's going on with these plugins. Those are three key things we want to accomplish for the user experience, and it's arguable that they're three things everybody should do with their plugins, but they're not easy to do. So if we can provide all of that through Rex Ray, I think there's tons of value in just packaging your driver with Rex Ray. Another thing that some people run into with their CSI deployments or plugins is scale. We've got centralized API throttling on the docket. What does that mean? This is actually really difficult to pull off with CSI. If you've got AWS, for example, and you've got 10 hosts, or maybe 10 different clusters, you're going to have a bunch of CSI plugins running. If all of those plugins are independently trying to use the AWS API, they're going to saturate it very, very quickly. If you've got one Kubernetes cluster today, it manages that centrally, but what if you've got 10 different Kubernetes clusters? How can you centrally throttle that? So one of the things we're adding to Rex Ray is etcd integration, so that it can extend the idempotency domain beyond a single implementation, and you can truly lock and limit the API calls to that single AWS endpoint.
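As a rough sketch of that etcd idea, and not Rex Ray's actual implementation: every plugin instance, in any cluster, could take the same distributed lock before calling the shared cloud API, so requests against the single endpoint are serialized cluster-wide. This assumes the current etcd v3 Go client; the key name and the shape of the wrapper function are made up for illustration.

```go
// Sketch of cross-cluster API throttling with an etcd distributed mutex.
// Not Rex Ray's actual code; key names and endpoints are illustrative.
package throttle

import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
	"go.etcd.io/etcd/client/v3/concurrency"
)

// WithCloudAPILock serializes call() against a single shared cloud endpoint
// across every plugin instance that points at the same etcd cluster.
func WithCloudAPILock(ctx context.Context, endpoints []string, call func() error) error {
	cli, err := clientv3.New(clientv3.Config{Endpoints: endpoints})
	if err != nil {
		return err
	}
	defer cli.Close()

	// The session keeps the lock's lease alive while we hold it.
	sess, err := concurrency.NewSession(cli)
	if err != nil {
		return err
	}
	defer sess.Close()

	mu := concurrency.NewMutex(sess, "/csi/cloud-api-lock")
	if err := mu.Lock(ctx); err != nil { // blocks until no other holder remains
		return err
	}
	defer mu.Unlock(ctx)

	return call() // the actual cloud API request, now limited to one at a time
}
```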
So that's a cool thing we'll be adding that can provide value for any storage platform. The next item here is extended volume functionality. Data-at-rest encryption: I mentioned earlier that if you've got a platform providing block storage, we can add a middleware step that adds an encryption shadow device via dm-crypt, and then any data at rest is encrypted. Pretty cool. The next thing is following the CSI updates: whether it's volume property updates like size and IOPS, or snapshots and replication, those will be introduced as they get defined in CSI. Next is availability. With certain platforms there's a need to extend availability beyond where it is today. For example, CSI today doesn't define what to do if a volume is locked, that is, if a volume cannot be moved while an instance is powered on; it's most likely a manual operation to make that happen. So I think there's some work and early implementation that can happen within something like Rex Ray to handle the situation where volumes are locked and you have to do forceful detaches, and eventually, if that's successful, we can figure out how to move it into CSI. Availability is also something that, for enterprises, we just need to make sure is fully tested and supported at a global level across the volume plugins. Then, extensibility-wise, we're planning on integrating with the portfolio of CNCF projects, whether that's Vault or etcd, which I mentioned already, or tracing, Fluentd, or Prometheus. Those are all valid ways to get these plugins hooked up in a standard way to provide information. All right, next slide. So, the history of releases with Rex Ray: we've had 78, pretty consistently over the past couple of years. That's the top chart on the right. In terms of activity, we've had a pretty steady increase on the repo. We're at about 1,000 stars right now. We've had 150,000 or so Bintray downloads of Rex Ray, and that was over the past year, and the Docker Hub downloads were over 50,000. All right, next slide. The contributors are what we're trying to increase. On the left side is the contributors over time: we've had 42 individual contributors of code, and 264 collaborators in the project through GitHub issues and other things. On the right side you can see that steady growth again in the stars. All right, next slide. So, why Rex Ray? We want to help get CSI stable. And in terms of getting it to the foundation, I think it's the right thing to do to help increase collaboration on the project. Having an ecosystem with many CSI implementations is going to be a great thing, and Rex Ray is going to be focused on being one of those implementations. With our experience as a team working in this area, we're pretty laser-focused on what we're hearing from customers and what we think is going to help the CSI community move forward and mature. So being in the foundation and having collaborators in the project is going to be a great thing for us. Any comments or thoughts on that?
I mean, that's pretty much the Rex Ray pitch we're thinking about as we talk to the TOC about it. First, Clint, thanks for presenting. It's a great presentation and it fostered a bunch of great discussions. So yeah, I'd love to get comments from folks as we think about presenting this to the TOC. Actually, I had a quick question. This is Matt from Datera. For Rex Ray, you seem to be providing this functionality on top of what CSI defines. From the perspective of a vendor, is there a focus on making sure there are vendor pass-throughs? Say a vendor already has hardware-level encryption built in and doesn't necessarily need the dm-crypt option you have in Rex Ray; is there a way to bypass that and use the vendor's option, the vendor's capabilities? Yeah, if it's built into your driver. That's the nice thing about where Rex Ray has gone architecturally: Rex Ray just packages up native CSI drivers. So your CSI driver can run standalone and do the minimal functionality it's expected to, and that's great, or you could implement those core encryption features inside your CSI driver and Rex Ray would be able to use them. And it's an interesting parameter to say: do not allow Rex Ray's encryption capability, but do allow all the other things Rex Ray provides. I don't think we've thought about it from that perspective yet, but it's definitely something we'd consider, and I think it's important so you don't have people duplicating that layer of encryption. Yeah, awesome, that's pretty much what I'm looking for. A lot of what we do as a vendor is trying to implement the fastest way of doing these things, because we have access to the hardware layer, so duplicating that sort of effort at the software layer seems excessive. Well, it could be, but it also depends on your client implementation. If you have a client that's doing in-flight encryption as data gets stored and encrypted at the device, I think you're okay. But dm-crypt would provide a layer of encryption for someone who doesn't do in-flight encryption but may do at-rest encryption on the platform. So it all kind of depends, but I agree with you: the capability to say, no, we never want two levels of encryption, we could probably have that as a parameter, and it'd be great feedback to have from you if you're interested in collaborating on that. Okay, thanks. Great. Any other comments out there? I have a question. Are there any adoptions of Rex Ray other than by the EMC {code} team? I'm sorry, what was the question? The question is about diversity, I think: are there any adoptions of Rex Ray, POCs, tryouts, or use cases, from other companies? Yeah, absolutely. If you go through the GitHub issues, you'll see lots of questions and people involved in the project; there are 40-plus collaborators, and those are all people who have been actively involved.
We've got a splash page that lists customers and organizations we've worked with on the project in the past, so there's definitely lots of adoption of it. In terms of this use case of persistent storage with applications, the primary focus of Rex Ray has been Docker and Mesos, and it's only recently, with Kubernetes, as we've adopted CSI, that there's this new opportunity to be relevant to the Kubernetes ecosystem. So I think the adoption of and interest in this type of project is only going to increase because of CSI, and because of the future shift as Kubernetes moves toward CSI and away from the in-tree drivers. Any other questions? Okay, great. So let's do this: let's use the last few minutes to talk about future topics. If folks have anything they'd like to see presented in the future, or discussions they'd like to have, I'd love to hear them; we can capture them here, and then Clint and I can reach out to folks and try to get things scheduled. I see we have Minio up; that's scheduled for next week, so we'll have a 20- or 30-minute chunk that they've confirmed. So we're looking for topics for next week and beyond. Hi, this is Kiran from OpenEBS. Not for next week, but the week after that, we would like to present as well. Okay, great. It'll actually be the meeting after, so it'll be four weeks from now. That's right. Yeah, thanks. Okay. If any other folks are interested in presenting, please reach out to either Clint or myself and we'll get it scheduled. Otherwise, I think we can break early. Thanks again, Clint, for presenting on Rex Ray, and see you all next time. All right, thank you, guys.