Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to another episode of Cloud Tech Thursdays here on OpenShift TV. I am Chris Short, executive producer of OpenShift TV. Yes, I say it so fast I can't even say it anymore. I'm joined by a wonderful team of folks here. I would like to hand it off to Amy Marrich to tell us what we're talking about today. Hi, everyone. I'm Amy Marrich. I'm the OpenStack community person for Red Hat, and I'm going to introduce some of my other cloud-related community cohorts. We have Josh Berkus, who is the community person for Red Hat for Kubernetes, and Mike Perez, who is the community architect for the Ceph community. Mike, do you want to go ahead and introduce our guest? Yeah, so we've got Blaine Gardner here, who is a principal software engineer on the Red Hat OpenShift storage team and is an upstream Rook maintainer and container storage engineer — specifically, to get that right. And I'm excited to hear about Kubernetes CSI in particular, which stands for Container Storage Interface. It reached generally available status in v1.13 and has since evolved to support a large number of vendors and storage formats. The Container Storage Interface only supports block- and file-level storage at this point. However, the Kubernetes community is bringing forward an initiative called the Container Object Storage Interface, or as we call it, COSI, to focus on infrastructure deployment of object storage solutions in Kubernetes environments. So, Blaine Gardner, would you like to take it away? Yeah, I'm excited to talk about COSI. It is very analogous to CSI — it's a common object storage API for Kubernetes. I kind of wanted to start from the point of: what is my investment in this? Why do I care about this?
I work with Rook and Ceph — I'm a maintainer upstream for the Rook project, dealing with Ceph. Ceph itself is a distributed, software-defined storage solution. It's been around for nearly a decade or maybe more now and provides three types of storage: block and file, which are part of the CSI initiative, and it also provides object storage. And then Rook is the management plane for Ceph in Kubernetes and allows Ceph storage to be mounted into whatever application pods you have for yourself. And yeah, in Rook, object storage is one of our big three features. The benefits of object storage that we see for users are that it stores unstructured data — there's no real tiering system — but it does also provide a lot of rich metadata, which makes for easier analytics. So I tend to think of queries around that: I want all files bigger than this, or smaller than this, or whatever kind of advanced queries you have. And because it's a flat system, it can scale much larger than file system storage. And some of the use cases that I like for object storage are things like all the video and audio that you have in Netflix, or gene sequencing files from genomics research, or, a little more mundane, system backups. And something that we do in the Rook project is we use it to house our Helm chart repository, so it's great for packages and things as well. And the kind of problem statement here is how object storage works in Kubernetes today. So whatever, quote unquote, big cloud provider you have, generally the workflow is that you create an object or a bucket or a blob — or whatever your big provider calls it — and then you copy those details manually and put them in the definition for your application, like a Deployment. There are some projects that make this a little easier, which use FUSE — a filesystem in userspace — to mount the bucket and then be able to access it. But it is still a pretty manual process. In Rook, I think
the process is a little better, but it is specific to just how Rook handles it. We have custom resources for creating an object store based on Ceph, and then also for creating users for the object store. And when you do that, Rook creates a secret that you can then go and query programmatically for the authentication details, which you can put directly into your user application. But for COSI, I would say the driving mission statement is to reduce the manual steps needed to create and use object storage in Kubernetes. And the vision being: what if you could claim object storage without specifying exactly what that is? I just want some object storage, and I can claim it with a standard Kubernetes manifest; that claim will be fulfilled automatically, and then I can access that claim programmatically. Diving a little deeper into the benefits of COSI: it's a Kubernetes Enhancement Proposal that's currently in alpha, but the goal of the plan is that it actually becomes a part of Kubernetes, similar to the Container Storage Interface, CSI, today. So it will be built into Kubernetes. And it takes inspiration from past success in Kubernetes — notably persistent volumes, persistent volume claims, storage classes, and the CSI interfaces I mentioned. And the chief benefit, I think, is that it's more flexible, it's easier to use, and, coming from the Rook side, it's easier to implement than writing something ourselves. Another big benefit I see is supporting multi-cloud use cases, which helps reduce vendor lock-in also. And the technicals for how COSI achieves this: it provides resources that users and administrators can create to actually get object storage. And from past success, there are analogs here: the bucket class kind of matches up to what a storage class is, and a bucket request matches up to a persistent volume claim. The bucket matches up to a persistent volume. So an administrator can set up a bucket class, a user can
request a bucket from that class, and then they'll get a bucket that is created. Object storage is a little more complicated than block or file — it can't just be directly mounted to a user application; you actually need to get credentials for it — and so we also have access classes and access requests that are created. Yeah, so you're talking about the storage classes and the persistent volumes, and then you just mentioned block storage as well. So are these persistent volumes block storage, or are they totally unrelated to the object storage? I would say they're just an existing analog. Kubernetes currently provides storage classes and persistent volume claims for block and file storage, but doesn't really provide any methods for object storage. So that's kind of one side of things, and then there is the COSI side of things, which aims to provide an analog that fits into the object storage paradigm, where you do have storage you can claim — but, I mean, it's just not been around as long, it's not as well defined, and there's a lot more variability. And so I think CSI focuses a little bit on what's traditionally been available, which is block and file storage.
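As a rough sketch of that analogy — the COSI API is still alpha, so the API group, kinds, and field names below are assumptions that may not match the final design — the administrator-side and user-side resources might look something like:

```yaml
# Hypothetical sketch of the COSI alpha resources described above.
# Group/version, kinds, and fields are illustrative and may change.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: standard-s3                    # set up by the administrator
provisioner: rook-ceph.ceph.rook.io    # which operator fulfills requests
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketRequest
metadata:
  name: my-app-bucket                  # created by the user
  namespace: my-app
spec:
  bucketClassName: standard-s3         # analogous to storageClassName in a PVC
```

Just as a PVC binds to a PersistentVolume, the request would be fulfilled by a cluster-scoped Bucket resource created by the operator.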
So when I'm talking about storage classes or persistent volume claims, those are existing utilities, existing things that users and communities can use, and they provide, I think, a good basis for understanding what COSI is, because it is pretty analogous. Great. And COSI would presumably have sort of a driver structure just like CSI has? Yeah. CSI also has helper sidecars that are used to help provision and mount storage — there are also helpers for expanding a volume — and COSI has plans, following that design, which has been pretty successful, to also implement sidecars. Those primarily help vendors; that's not something users really care about, but it is something that I, working on this in Rook, will care about. And then COSI itself also will have a controller — or has a controller — which manages some of the more vendor-neutral behaviors, the interactions between, like: okay, when a user makes a bucket access request, it can check if a bucket access class actually exists for that, and then not make Rook, in our case, actually check to see that everything is okay. So does this interact with local storage at all, out of curiosity? I think the relation to local storage here, coming from a Rook perspective, is that if you have some number of Kubernetes nodes and have local storage on all of those, Rook can consume that local storage, pull it together, and create an object store on top of it. And that is sort of the relation. Yeah, local storage — being local to a node — is great if your application never moves or if your node never fails. But I was thinking more like a distributed database, that kind of thing. Yeah, I mean, I think storage being distributed can actually improve the performance of even slow disks if you have the right setup, and that can be true for object storage also. I have more questions, but I'm going to hold on to them because you might answer them in your presentation. Sure, sure, sure.
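The access side of the design — the bucket access class and bucket access request mentioned above — could be sketched the same way; again, these names are assumptions about the alpha API, not a final interface:

```yaml
# Hypothetical sketch of the access-side COSI resources.
# Names and fields are illustrative of the alpha design.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessClass
metadata:
  name: s3-read-write            # administrator-defined access policy
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessRequest
metadata:
  name: my-app-access
  namespace: my-app
spec:
  bucketAccessClassName: s3-read-write
  bucketRequestName: my-app-bucket   # the bucket claim this access is for
```

The COSI controller can validate that the referenced access class exists before the vendor operator (Rook, in this case) is ever involved.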
I'll continue on then. I think I've got one more content slide — I'm going to talk a little bit about how Rook uses COSI currently. So there is what I have often publicly explained as a proof-of-concept predecessor to COSI, which is the lib-bucket-provisioner API, which Rook uses to provide object bucket claims. It lets users claim object storage, which is fulfilled automatically. We implement it in Rook, but it's not specific to Rook — this API can be implemented by other operators as well. But this is a nice starting point — this is the kind of concept behind COSI, and it's something we see our users asking questions about. I think that's a really good sign for COSI, and that's why we're engaged in the COSI community, and why we're starting work to integrate with the alpha version of COSI and hopefully be, yeah, one of the first people on this train. To finish up, I want to thank some people who have helped me along my journey — Chris, John, Sid, Jeff, and Jeff — who are both upstream and also my coworkers here at Red Hat. I think they deserve a shout-out. Yeah, I think we can open it up to the full Q&A. Thanks, Blaine. So we talked specifically about the Container Storage Interface and that being focused on file-level and block-level storage. I think it might be helpful for the audience to take a step back and understand the use cases and how these are actually being used by applications, so that people understand the different types of storage — because you hear file, block, and you might not know the difference or when to use one or the other. Sometimes it is actually applicable to use both, you know, in different ways. But maybe you want to talk about — like, coming from OpenStack specifically, you know, we've used a lot of virtual-machine kinds of deployments.
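The object bucket claim described above is a workflow Rook supports today. A minimal ObjectBucketClaim from the lib-bucket-provisioner API looks roughly like this (the storage class name is an example in the style of Rook's docs):

```yaml
# An object bucket claim as implemented by Rook via lib-bucket-provisioner.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
spec:
  generateBucketName: ceph-bkt       # prefix for a generated bucket name
  storageClassName: rook-ceph-bucket # example class backed by a Rook object store
```

When the claim is fulfilled, the provisioner creates a Secret and ConfigMap of the same name holding the credentials and endpoint details for the new bucket.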
We've seen block storage clone images of virtual machines, which then become the root file systems for those virtual machines. And then, in theory, right, with it being distributed through Ceph, your virtual machine is fine. And in terms of those clones, you're saving a lot of data — you're saving a lot in terms of not having to rewrite those images, but instead doing writes on top of those images. So it provides, you know, storage efficiency with that. However, maybe with object-level storage we could talk about, you know, an application using Boto or something like that, and how that all hooks in — where the operator, I guess, actually deploys a pool, an object storage pool, through Rook, and then how an application would actually utilize that. Okay, I think there's a lot there to talk about. I guess first, yeah, talking about block and file: I do also tend to think about block storage like virtual machines. That is, you want a virtual machine, you have a block device that you install it on, just like any sort of raw hard drive that you install, you know, Linux on in a server. I also have seen a lot of standard databases — like SQL databases — that will just take a block device to get installed on. So I think that is another way I've seen block storage used in a Kubernetes environment, that is not just providing VMs. For file storage, I think that comes up a lot with the kind of more traditional, enterprise use: I want to share files between these different organizations, and file storage is a pretty easy go-to — I just want to put the files somewhere, and I kind of already have a written structure. Things fit pretty well there. Because — I guess we're getting very technical — of all the Linux inodes and things, there is system metadata that's required for file systems to locate things in the directory tree.
Yeah, from OpenStack, I think, if I remember correctly, the system that hosts the images that you would use to install a VM on a block device is generally object storage. Is it Manila in OpenStack? Glance. Yeah, glance, sorry. Oh, I think Manila was file storage. That's right. Yes. It's been a while. Yeah, so OpenStack is a pretty good example of where block, file, and object are all used, and then, you know, the glance storage is, I think, pretty similar to my example of a package repository, since it's just sort of a repository: here's this image you can use. Since you brought up databases, I can actually give you a good example of a common practice from the database world, which is that for primary storage — for the actual, you know, live data — databases will use block storage, whether that's local block storage or network block storage. But if it's available, we actually like to put backups in object storage, because we think of each backup as a single unit. We're looking for maximum availability for it, and object storage works much better for that. I really love this because I get to learn stuff too. Following up on Mike's question, I have a specific question about how this is going to work with applications, right? Because right now, if I'm running a database and I'm sending my backups to object storage, and I'm doing it in Python, then I'd use Boto if I'm on AWS, and I'd use the Swift library if I'm on OpenStack. And presumably I would use something else for Google Cloud, but I don't actually know that much about Google Cloud storage. So is the idea that we would eventually get specific COSI libraries in the major programming languages, or are we aiming for compatibility with one of these specific object storage things, and we're just going to reuse those libraries? I think the idea is reuse.
I haven't used Boto, so I don't know exactly what it provides, but I can say that for AWS it provides an S3 interface. And Rook also — well, Ceph, really — is the thing that provides an S3-compliant interface. So if you have an application that sends backup data to AWS, it's already using S3, and so you can just sort of point it at Ceph and it should operate the same way. There are sometimes features that Amazon implements before Ceph might get them, but largely the base is still there. COSI just makes it easier, in a Kubernetes-native way, to just create a manifest that says: I want some object storage; I need, like, 30 gigabytes of it, or I need three terabytes of it, or, you know, maybe I need a petabyte of it if it's something really huge. And then that will go and look at the bucket class and see, okay, this ends up being provisioned by Rook, or this ends up being provisioned by Amazon. And then the operator for Rook or the operator for Amazon will go and create the storage. That operator will then figure out what the actual credential details are and populate a secret, and then the user can look at the information from what they requested and see: okay, the status says that it's provisioned now, and here's the access class I can look at to get my access credentials and just mount those into a pod. So it sounds like COSI will be Kubernetes-agnostic as to whether it's Amazon's Kubernetes, OpenShift, or whatever Kubernetes version you're using to connect to the database — or in this case the storage, I would say, versus database. Yeah, I would say that's true. It's just a matter of whether or not an implementation for the storage exists. For what it's worth, object storage is the example of why the line between what is a file system and what is a database is not only extremely thin but nonexistent at this point.
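The "populate a secret, then mount those into a pod" step above can be sketched with a standard pod spec. With Rook's bucket provisioner, a Secret and ConfigMap named after the claim carry the credentials and endpoint; the image and claim name here are illustrative:

```yaml
# A pod consuming provisioned bucket credentials as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: backup-app
spec:
  containers:
  - name: backup
    image: example.com/backup-app:latest   # hypothetical application image
    envFrom:
    - secretRef:
        name: ceph-bucket       # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    - configMapRef:
        name: ceph-bucket       # BUCKET_HOST / BUCKET_NAME / BUCKET_PORT
```

The application then reads those variables at startup and never needs to know which operator provisioned the bucket.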
Whether it's a file system or a database really depends on which project or vendor it came from. Yeah. Blaine, are there any more slides or anything else you're going to share? Because I was going to ask you to close your screen sharing if not. Oh, no, this is the last slide. I guess we could have ended with the Q&A. No, that's fine. We're all talking here — I just realized that we're still being shared. Yeah, I wanted to add something on to what Blaine was mentioning, specifically with Ceph supporting S3's API. So in GitHub, under the Ceph GitHub org, you can look for the s3-tests repository. And what's neat with that is that it's the whole test suite that we are using to ensure Ceph is actually compatible. We actually found that other libraries use it as well to make sure that they're compatible with S3. So there's been a lot of investment by the Ceph community in terms of making sure that this works. And what's great is that, with the Rook development, as Blaine was mentioning, focused around the S3 API, you can be fairly sure that if it works with Ceph, the S3 API is going to work with it too. So you can provision both — you could, yeah, provision in Amazon's EC2 if you wanted to run your own object store inside of there and not use S3, but yeah. Well, I mean, part of it — I'm still wondering about the whole client-side language thing, because I've worked with Boto extensively. And the problem is that Boto is designed to interface with AWS, so it handles all of the authorization, for example, as part of the class. And obviously, when it's accessing COSI, we're not going to want to have it do that — or we are going to want to have it do that, because presumably the authorization is actually being handled when you instantiate the COSI connection, yes? Yeah, yeah. So the — yeah. Wait, actually, can I speak on this one?
Yes, please, please. Yeah, I think there might be confusion. So with Boto, we're thinking of the client — the user that is actually going to be reading and writing the data. What we're actually doing with COSI is more on the control plane side. So as the operator, you're actually making that bucket — that object pool — and then making buckets available to users, who would then use them with Boto. So Boto then is connecting to, and authorizing through, Ceph itself — through the monitors at that point — well, not through the monitors, but actually with your access key and secret, as I understand it. So, as the operator, you're able to give access to those buckets within that pool, you know, to a set of users. And then from the Boto interface, you are doing your puts and your gets to those buckets to interface with them from your application. So you would send your authentication credentials through COSI, which would then translate them and send them to Boto? No — COSI is not in the data path at all. So you have your application, and then you have Ceph over here, and your application is talking directly to Ceph, reading and writing. COSI is not in the picture. As the operator, though, who has to make, you know, the servers do something and have the buckets available so that the application can actually read and write data to them — the operator is using something like kubectl, for example, to use Rook to deploy, using the COSI interface, so that those buckets actually exist in the Ceph data storage pool. Okay, so COSI's role is effectively wholly administrative. Yeah. Yeah. You know, so in some ways, making it analogous to CSI is a little bit deceptive, because CSI actually attaches, you know, a block or a file system to the individual containers, which is not something we're going to be doing with COSI. Yeah — I'll let you speak on that.
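The operator-side, kubectl-driven flow Mike describes maps onto Rook's existing custom resources: a Ceph-backed object store (the RADOS gateway plus its pools) and a user for it. Pool sizes and names below are example values:

```yaml
# Rook CRs an operator applies to stand up an object store and a user.
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3                 # 3x-replicated metadata pool
  dataPool:
    erasureCoded:
      dataChunks: 2           # erasure-coded data pool (2+1)
      codingChunks: 1
  gateway:
    port: 80                  # RADOS gateway endpoint
    instances: 1
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "example user"
```

Rook generates the user's access key and secret into a Kubernetes Secret; the application talks straight to the gateway from there, with COSI/Rook out of the data path.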
Yeah, I think that is kind of where the analogy breaks down. So CSI is also not in the data path, but it does also help to mount block or file storage automatically into containers. And that's not quite as possible with object storage, because object storage applications have to say: here are my authentication credentials, with my username and my key, and I need to authenticate to the S3 interface — whether that is backed by Amazon or backed by Ceph. That is just sort of what the application knows. And I think a lot of times that is, to my understanding, mounted in different ways: it can be an environment variable that says here are my credentials, or it can be a file that the application reads. And there's also — I mean, so in CSI, you know, we make volume claims, for example. And we still have sort of that concept, as I understand, even with the existing Container Storage Interface that we're using today, of making certain claims. What exactly are those resources? I'm pulling up the source code now to remember, but you might know off the top of your head. Do you mean, like, persistent volume claims and such? Is that what you're asking about? Yeah. So there are the bucket claims that I'm seeing in here, for example. As I understand, those bucket claims — that's to actually provision the bucket that the application would use. And you're able to set various additional configs in the YAML, such as the maximum number of objects that you want to allow as a policy; you can also set the max size for that bucket as well. So in a way, it's like you provision volumes to containers and then they mount them, or they can use an HTTP interface, for example, and write out to buckets instead. Either way, from the Ceph standpoint, it all goes to the same storage cluster — it's just sliced out into different pools.
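The max-objects and max-size policies mentioned here map to the `additionalConfig` section of a Rook ObjectBucketClaim. A sketch, with example quota values:

```yaml
# An object bucket claim with bucket-level quota policies.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: quota-bucket
spec:
  generateBucketName: quota-bkt
  storageClassName: rook-ceph-bucket   # example Rook bucket storage class
  additionalConfig:
    maxObjects: "10000"   # cap on the number of objects in the bucket
    maxSize: "2G"         # cap on the bucket's total size
```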
And I'm not going off into left field here — I'm really interested in that second path, because that's the new thing with this, right? Because if I'm used to — if I've got an application that's Python and AWS, then I run my thing, I have Boto, Boto creates a connection — an HTTPS connection — to some Amazon S3 bucket, which is, you know, purely sort of a network service. Within Boto, it handles all of the authentication and authorization in order to gain access to that, and then I upload a copy of my backup to that, right? It sounds like that's going to be enabled, but it also sounds like there's kind of a second path around making a bucket claim that's different from that — that's not HTTPS. So what I want to know is how that second path works, because, among other things, I've never found the authenticate-and-authorize-for-every-single-request model ideal. So if there is a second path, I'd like to know about it. Yeah, I guess, talking about the Amazon example, because we're talking about Boto: my understanding is that Boto also helps you programmatically create object storage in Amazon and then get credentials for how to access it. Well, not just create it, but also — you know, because basically I'm trying to figure out what — did we? Nope. Okay. I'm trying to figure out what the role of the bucket claim is here, right? Because if you're just doing pure HTTPS authorization, et cetera, there's no real role for a claim, right? The volume exists, and if you have the correct credentials, then you can access it — or the bucket exists, and if you have the correct credentials, you can access it. So what role does the claim play? The claim plays the role of — so I guess, if I step back a little bit and assume that I'm a user of a Kubernetes cluster, there is, like, a COSI operator for Amazon running.
I can write a manifest that says, okay, I need object storage, and so I'm going to make an object bucket claim resource that says I want object storage. And then COSI will see that, and the Amazon operator will actually create that storage and provision it and make sure that a bucket exists, and that a user exists for that bucket. And so the details around actually creating an object storage bucket and the user to access that bucket are done at the Kubernetes level, so that you can write an application that says: okay, I'm connected to some object storage that uses S3, and so I'll use part of the Boto library, maybe, to read an environment variable that says here's my user and another one that says here's my access token, and then I will just access whatever bucket from there — I will upload my backups, or I will upload whatever files I have. One of the things that COSI also allows is applications that are ephemeral, so if an application wants object storage but doesn't need it for a long time, then COSI also handles deleting the bucket when the application is done, right? And so it abstracts away the "create this bucket," and "make a user for this bucket," and "give me access details" — that is all handled. And in that way, I think it should be possible to write the same application that uses either Rook and Ceph's S3 interface or Amazon's S3 interface, in a way that the application actually doesn't need to know that there's a difference. Okay — I mean, obviously that's a good thing because, for example, within the Python world there are a lot of higher-level applications that embed Boto, and so having that still work is terrific. I do actually kind of wonder about limiting ourselves to the S3 workflow, just because one of the characteristics of S3 is it's slow, right? I mean, deliberately so, right? It's designed to be slow, but highly available.
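That environment-variable pattern can be sketched in Python. A small helper builds S3 client parameters from the variables a Rook/OBC-style provisioner injects (`BUCKET_HOST` and the `AWS_*` keys are what Rook's provisioner populates; for a pure-COSI driver those names are an assumption), and the same parameters work whether the endpoint is Amazon or a Ceph RADOS gateway:

```python
import os

def s3_connection_params():
    """Build S3 client parameters from environment variables injected by
    a bucket provisioner. The application stays endpoint-agnostic: it
    never needs to know whether the host is AWS or a RADOS gateway.
    (Scheme and port handling are simplified here for illustration.)"""
    return {
        "endpoint_url": "http://" + os.environ["BUCKET_HOST"],
        "aws_access_key_id": os.environ["AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"],
    }

# With boto3, for example, the application would then do:
#   s3 = boto3.client("s3", **s3_connection_params())
#   s3.upload_file("backup.tar", os.environ["BUCKET_NAME"], "backup.tar")
```

Because the provisioner fills in the host, key, and secret, swapping Ceph for Amazon (or vice versa) is a deployment change, not a code change.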
But that's not a characteristic of object storage necessarily, right? That's a characteristic of S3. You can construct it differently: if I have a local Rook cluster on local SSD storage across the nodes in my OpenShift cluster, whatever, that's actually fast — it's fast to access, and it's fast to store things. And I'm wondering if, by modeling this around the S3 interface, we're disabling use cases that might benefit from having fast object storage? I think that's a good question. COSI also is focused on being vendor-agnostic, and I would consider S3 kind of part of a vendor. Like, Ceph provides an S3 interface, but you also could just talk to the RADOS gateway directly, so there could be a sort of driver in place for that. In the COSI upstream meetings, you know, we talk about all the big cloud providers. Azure has their own non-S3 interface — I think they call it blob storage — and they have their own access methods. And so, just like there might be an Amazon S3 driver of sorts with COSI, there also would be an Azure one, or one for GCP — I think they might also use an S3-compatible interface, but I could be wrong about that. Yeah. I mean, all of this is also disappointing to me because, of course, the OpenStack folks built Swift to be an open standard, but everybody is focused on S3 compatibility. Yeah, it's just — I'm looking at this, and this is just a really small example; pardon me for being very narrow here, but all of my personal hands-on experience with object storage is through my time as a database engineer, right? Right. And so, to give you an example of fast versus slow use cases: in the Postgres world, people use object storage extensively for backups, right? Snapshots and backups. Because those benefit from high availability and, you know, high throughput, but can be slow to access.
But one thing that you don't do in the Postgres world, if you're just looking at S3 — unless your database has a very low quantity of changes — is put your replication logs into S3, because it's too slow. Right. And on a database that's really busy, you might be producing, you know, 16 replication logs a second, which doesn't work with the S3 workflow, where it might take several seconds to upload an individual one. But if you actually had fast object storage available, then suddenly that becomes a viable option, and you could do something like, say, broadcast replication, where one database writer puts its replication logs into object storage and then multiple readers are able to pick up individual copies of those as objects. So I'm just wondering, maybe in the medium term, is there a way that we can make it possible to do fast object storage through COSI? Or is it already possible? I think it should be already possible. I guess I have a bit of a follow-up question, which is: when you say S3 is slow, are you talking specifically about Amazon's infrastructure — their implementation of S3 — or the S3 API interface itself? Yeah, both. Okay. Both, right. There's the whole network implementation that is optimized for high throughput, high availability, slow access, right — you've got to make your cloud trade-offs; that's the set of trade-offs they decided to make, and it's super useful, right, for one set of use cases. But then the second thing is, every request I make, every object I write, goes through the entire authentication and authorization path via the API. And that authentication and authorization takes a significant amount of time when you're comparing it to the amount of time required for a local file, right? Right. That would be more Ceph's problem, not COSI's, because COSI is still on the control plane side. Yeah. Yeah. Yeah.
I think I can provide a little bit of an answer here. So let's say that Ceph is able to provide the S3 interface faster than Amazon is able to provide it. You could, in the same sort of multi-cloud cluster — or even if a Kubernetes cluster were just running in Amazon — create a bucket class that is, you know, standard S3, and you could create another bucket class that is called, like, Fast S3. And if I, as a user, am like, okay, I have a database, but it's very active and I want fast S3 backups, then I could create a bucket request for that Fast S3 class and get that faster storage — versus if I'm like, well, this database just takes in information occasionally, so it can be a slower process. So that is how, if you had both a fast and a slow S3, you could have both exist, and you could even have the same application just with different usage patterns. So you would just say this specification for the application uses Fast S3 and this one does not. Is that an option on S3, to have faster storage performance with S3? Or wouldn't that just be a question of how you configure the physical storage and network? Yeah, yeah. Well, so that's what I'm saying — that's a great thing to do if you're deploying your own object storage cluster, something like Ceph: you have the idea of different pools dedicated to object storage. And of those pools, you could have one that specifically has, say, fast SSDs, if you have found that that is more performant with a RADOS gateway sort of setup, and you can have another pool that is just spinning disks. And then it matters where exactly you're deploying those buckets. So as the operator, I'm going to use kubectl to, you know, change the state with CRDs, as I understand.
So changing the state as: here is a bucket that I'm going to provision, by claiming here's a set of attributes, here are the users that have access to it, here are the policies and the quotas that should be set on it. Go ahead and provision it, and then the Rook operator is going to kick things off with the orchestrator API, as I understand. And from that standpoint, you have your physical racks, one that is fast with SSDs, one with spinning disks. From the Ceph standpoint, you set them up as their own pools, and you provision those buckets on the solid state drives. Then the access key and the secret that you give to the user, they plug that into their application, however they're interfacing with S3, since Ceph is both Swift and S3 compatible. So you don't even have to change anything in your application for it to begin doing reads and writes. From there it authenticates with the access key and secret key. And as I mentioned, COSI is not in the picture at all at this point; it's just between the application and Ceph directly. It sounds like if I'm looking for faster object access than the S3 API provides for, then I do need to rewrite the application to use librados instead of using S3 libraries. Why is your application accessing the control plane, though? No, no, no, it's not accessing that, it's accessing the object storage. Like, here's the core problem with the S3 API in terms of write speed, right? I'm producing 16 objects a second. For each one of those objects, I need to authorize and authenticate before I can write it. In terms of the speed of writes, that's a lot of time. Okay, well, I could speak from the RADOS Gateway standpoint: it's one authentication, and from there you're writing a single object, because you're not going to upload, say, an entire gigabyte in one chunk.
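For reference, the claim-then-provision flow described above is roughly what Rook's existing (pre-COSI) ObjectBucketClaim already does. A minimal sketch, assuming a `rook-ceph-bucket` StorageClass that points at an RGW pool (the names here are illustrative of Rook's OBC flow, not prescriptive):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
spec:
  generateBucketName: ceph-bkt      # operator appends a random suffix
  storageClassName: rook-ceph-bucket
```

After the operator provisions the bucket, it creates a ConfigMap (bucket name, endpoint) and a Secret (access key, secret key) with the same name as the claim; the application consumes those via environment variables and talks S3 to the RADOS Gateway directly, with COSI or the operator no longer in the data path.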
It breaks it up into chunks and uses multipart uploading, where the authentication is only done that one time, because at that point you have a token and you're using that token for each chunk that you're uploading to the bucket. So from the RADOS Gateway standpoint, we use the S3 API just for compatibility. I don't think that affects our performance, because the API is just how you tell it what to do; the implementation underneath is its own, different from whatever S3 is using. Okay, but getting back to COSI. So if I've created this bucket and the bucket access class and authorization via COSI, can I access that with librados as well, if that's something the client application speaks? I think the answer is yes; the application would have to have a little bit of leaky information in that case, to know that the S3 access credentials it was provided will also work via librados. Alternatively, there's definitely room for us to explore whether providing a RADOS plugin to COSI is something that makes sense. So even if it is ultimately backed by the same object store, being able to have two different interfaces to it is something I think would be possible. We haven't really planned that far ahead; I think most people are really interested in S3, so that's been our focus. Yeah, this has been an interesting conversation for me, because I'm like, oh yeah, there are actually other avenues here. Yeah. Well, so that brings up a follow-up question, which is: if I want to get more involved in discussing this and use cases that go beyond S3, where do I show up? Yeah, COSI has several different repositories. I think there are some links that should be shared with you all. It has a controller and an API, and I think there's another interface as well.
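The multipart flow described above, authenticate once, then push the object in chunks under one upload ID, can be sketched in Python. The `client` object and part size below are illustrative placeholders, not Ceph's or boto3's actual API:

```python
def split_into_parts(data: bytes, part_size: int = 5 * 1024 * 1024) -> list[bytes]:
    """Split an object into fixed-size parts for a multipart upload.

    S3-style multipart uploads authenticate once when the upload is
    created; every part is then sent under that upload ID rather than
    re-running the full auth path per chunk.
    """
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]


def multipart_upload(client, bucket: str, key: str, data: bytes) -> None:
    """Illustrative workflow against a hypothetical S3-like client."""
    upload_id = client.create_multipart_upload(bucket, key)  # auth happens once, here
    etags = []
    for number, part in enumerate(split_into_parts(data), start=1):
        # Each part reuses the upload ID; no per-chunk re-authentication.
        etags.append(client.upload_part(bucket, key, upload_id, number, part))
    client.complete_multipart_upload(bucket, key, upload_id, etags)
```

A 12 MiB object with the 5 MiB part size above would go up as three parts (5 + 5 + 2 MiB) under a single authenticated upload.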
Honestly, the best place to go is probably the Kubernetes Slack; look for the container object storage interface channel, I think it's just called cosi. Are there any meetings that someone could join? Yeah, the Kubernetes community calendar has meetings on Mondays and Thursdays, sometime around 11 or noon Mountain time. So there's actually a meeting going on for COSI right now, while we're talking. But yeah, the information is on the Kubernetes community calendar. I think the meeting on Monday is a little strange: it will request a password, which is just 777777, to get in. But the community is very, very welcoming, and there's always a lot of discussion about what users need from this, what we need for alpha, how we envision this evolving in the future. I would say the Slack channel and the upstream meetings, if you can make those, are really the best places. Or if you're interested in Rook, we also have a Slack channel, and we have a Rook community meeting every other week. I think that's linked from the README at github.com/rook/rook. Okay, we've had a lot of comments in the channel, but nothing that amounts to a question. Drop those links again for the repos, not as one line, preferably. Blaine, was there anything you didn't cover yet that you think is important to mention in our closing minutes? Absolutely. Yeah, I guess a shameless plug for Rook: we are dropping the newest 1.6 release later today, that's the plan, unless something goes terribly wrong. But yeah, we have a pretty smooth process at this point. That release has some changes I made to be able to handle much larger Rook Ceph clusters, as far as the number of OSDs, and to upgrade them a little faster. We're also able to support multiple file systems, which has been a beta feature but is now a GA feature.
And then we have some first-class NFS gateway support also. Yeah, those are my release notes for today. Wonderful. Okay, well, I think we're at time. Yeah, we've got seven minutes, but if there's nothing else, I think we can wrap this up if you want. That was awesome. I'm excited about it. Yeah, I am too. There's a lot of potential here. A lot of questions. Yeah, and some of the ones we asked were more about what we would use it for and how we would do it. And, you know, Josh brought up points about databases, so I was going down database lines of questioning, and it's like, no, that's not our discussion. Yeah, yeah. I think that was my fault. Sorry. No, it was also my fault, because, like I said, that is my entire experience with object storage, right? Using it for database use cases. I mean, I also used it for CDN use cases, but frankly, I feel like the CDN use case is a lot more cut and dried. Yeah. Then again, I was never actually in charge of a CDN. If I was actually running the CDN rather than just using it, I might have a different opinion. But the question is: hey, what applications are not currently making much use of object storage, particularly because our whole conception of object storage is S3? That sort of workflow is not inherent in object storage; it's just one example of object storage, and it made certain specific trade-offs. So, back when I was working on, oh, go ahead. Oh, I just wanted to say, for all of you here I'm chatting with, but also for anyone watching: if you're like, wow, COSI sounds really exciting, I would like to be able to make use of this. Obviously it's in alpha and things aren't GA yet, but if you're interested in it, I think it's also really helpful for the upstream community to understand what use cases people are excited to use it for.
And that also helps us make a better design, because we can understand what the things are that people really care about. So yeah, definitely check in on the Slack channel, the stand-up meeting, or either one of the COSI meetings if you can make it, and just say: hi, my name is, you know, Blaine, and I'm here because I'm interested in integrating COSI with Rook, and I want to make sure we can get what we need out of this and that it's going to meet what our users need. That's sort of how I entered into it. Awesome. Well, without further ado, thank you so much for sharing all this with us. My mind is now open to what's going on. So thank you. Yeah. Yeah, this was a great conversation. Exciting new developments. And now, when I see in the Kubernetes PR stream that COSI is going for beta, I'll know what it is they're talking about. Yeah, I'm really excited for that moment particularly. I think that will be when we've seen the alpha stages, learned more lessons, and can really make something great. I honestly think you're going to see higher than usual levels of beta adoption, because I'll tell you, as somebody who right now writes a lot of glue code to enable all the buckets that I need on S3, et cetera: that's hackish as all get-out. And so COSI doesn't need to be all that stable to be better. Right. Yeah, I think that makes it that much more compelling. What's really surprising to me, actually, is that between Google and Amazon and Microsoft with Azure, no one has been like, oh, we can make this a little bit better, and then reaped the advantage from that. It's been weird. I mean, I mainly have experience with the Amazon flavor.
Yeah, and it's been weird to me how slow they have been to enable these sorts of storage things on their own distro of Kubernetes, because they have all of these storage options. It's just been slow, which is good, because now it means we actually get a standard version rather than one that's particular to EKS. Yeah, I'm pretty excited about it. Yeah, exciting stuff ahead for sure. So thank you very much for joining us, and thank you for watching today. We're not going to have Cloud Tech Thursday on the next cycle, two weeks from now, because that will be Red Hat Summit, which you can get into, we're giving away free passes, aka it is free. Yeah. But if you're into watching this channel, OpenShift TV is going to be hosting a series of office hours the week of KubeCon Europe, most of them after the end of the day for Europeans. And we'll touch on a lot of the same topics that you normally see on this channel, so do look at OpenShift TV for that schedule. Yeah, and if you would be so kind as to subscribe to the calendar, I would really appreciate that; that way you know what's coming up next, and you can see if it aligns with your schedule. Awesome. Thank you. For any OpenStack folks, the PTG is next week, so join us. Registration is also free, and I'll pick a URL to put in the channel as well. Is that PTG for everybody? For all the SIGs? Yes. Projects? All the projects, and all the projects under the foundation itself. So Kata, Zuul, Airship, StarlingX, and of course, OpenStack. Okay, so you're going to recruit some more speakers out of that? I already got one from CERN. Yep. All right, wonderful. Okay, thank you, Amy. All right, folks, we're going to sign off for the day. We appreciate you tuning in. Hope you learned a lot, and we will see you soon. Take care, all. Thank you.