I'm Travis Nielsen. I'm one of the original Rook maintainers and creators. We started this project back in 2016, and it's been a great journey; I'm so glad to be here with you. Let's have each of the panel members introduce themselves as well.

I'm Blaine Gardner. I'm also with IBM. I've been a Rook maintainer for four or five years now, with a lot of focus on the object store, some multisite work, and NFS.

I'm Alexander Trost, founding engineer at Koor Technologies. I've been a maintainer of the project for, I don't know, six or seven years now, something around that. Thanks.

Hey, I'm Deepika Upadhyay. I work as a cloud storage engineer at Koor Technologies. I used to work with the RADOS core engineering team for two or three years, then I moved to the RADOS block device (RBD) team and worked there for one year. Now I've transitioned to working on Rook itself. Happy to be here.

Okay, let's get rolling with our agenda today. As a Rook panel, we're going to do things in a little different format than if you've been to a Rook talk before. Who has been to a Rook talk before, just out of curiosity? A few people, okay. Maybe you've seen some of our recordings online too; we've done a few of these. Our goal today is really to familiarize you with storage for Kubernetes. What does Rook provide? What does Ceph provide? And ultimately, the question I hope each of you asks yourselves is: is Rook a potential storage solution for you? Does it meet your needs? We'd love to hear any feedback on how it's working for you and what you think we could add to it. At the end of the session, we hope to have time for your questions. The format will be: we came with a list of questions, and we'll jump around a number of different topics, but we hope all these questions and answers will be interesting. If you don't get time for your questions, we do have a Rook booth in the Project Pavilion, over on the far side of the conference, and we hope to see you there.

Okay, first question. How should someone new to Kubernetes think about storage? Blaine? Yeah, I love starting with this question. I think, like a lot of answers, it's two-parted. Kubernetes easily manages distributed applications, and from the user side, for an application, the storage should just be easy. If my application fails over, if it scales out, then that storage has to follow my pod, across nodes, across partitions. And that means for administrators, this distributed aspect is pretty challenging. These are big shoes to fill. If I have external storage, who's going to manage that? If I'm using a cloud provider, am I locking myself in? And if I'm going to run my own, can I handle that? So I think this segues well into the rest of the presentation.

Thanks, Blaine, for that introduction. As we started to explore storage for Kubernetes in the early Kubernetes days, we had some questions that really led us to this project. First of all, as Blaine said, storage is something we all have to worry about; it's always on our minds. Storage is commonly provided by cloud providers, but if I'm not running in a cloud provider, what about storage in my own data center? And even if I am running in the cloud, there are certain limitations that we'd like to overcome. We'll talk more about that.
And then, at the end of the day, Kubernetes really doesn't treat storage as a first-class citizen. It's external; you have CSI drivers to connect to it. But why not manage storage like any other Kubernetes application? Why shouldn't it be running natively as part of my Kubernetes cluster, so I can manage it with the same tools I'm managing other applications with? And then the next question is obviously: which storage platform can we trust? Enterprises don't generally go fully on board with a new data platform. They want a data platform that's been proven, that's been running in production for some time. Data is sensitive. It's valuable. We have to protect it. So we made the decision early on to build on Ceph, and we'll talk more about why we did that and what it provides.

So what is Rook, Alexander? Yeah. I think most of you have heard about operators, I guess. Who knows what an operator is? I still have two slides or so to quickly catch everyone else up, but thanks. So what is Rook? Ceph is, well, quite the complex system in itself; one might see it as even more complex than Kubernetes. Rook is basically there to make it easy to run and, to some degree, maintain. There are certain aspects an operator can simply take care of: certain settings in Ceph, and, the same as with Kubernetes, certain tweaks you can make so it's more responsive to node failure, for example. The same applies to Ceph through Rook. The point is that we have custom objects through which we can easily, for example, upgrade your cluster: you just change the CRD with the new version, the operator sees that and takes care of it. We'll go into that in a second as well. And the integrations: for example, Rook sets up the Ceph CSI driver so that you can immediately start using the storage in the cluster. Or, for object storage, there's the, what is it called again, object storage claim, right? Object bucket claim. Object bucket claim, exactly: if you need S3 for an application, it's basically the same concept as a persistent volume claim. These things are what Rook takes care of and sets up in your cluster.

Okay. Did you comment about open source? It's open source, Apache 2.0 licensed. Perfect. So an operator: orchestration at the top, I guess, is the example I'd use. There's a custom object; the operator observes it, detects any changes or any deviation from the desired state (reconciliation, if anyone who isn't too deep into operators yet wants to Google it), works out what it needs to do, and acts upon that. That's basically what an operator does. Okay. So Rook makes storage happen in your cluster with its operator.
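To make that concrete, here is roughly what the custom object Alexander described looks like: a minimal sketch of a CephCluster resource, with illustrative values, not a complete spec (a real cluster needs a few more fields, such as a host data path):

```shell
# Minimal sketch of a CephCluster custom resource (illustrative values).
kubectl apply -f - <<EOF
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # Changing this image tag and re-applying is how an upgrade is requested;
    # the operator notices the change and rolls the Ceph daemons for you.
    image: quay.io/ceph/ceph:v17.2.6
  mon:
    count: 3
  storage:
    useAllNodes: true
    useAllDevices: true
EOF
```

The upgrade story mentioned above falls out of this desired-state model: edit the `cephVersion.image` field, and the operator reconciles the running cluster toward it.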
Just a quick note on Rook's CNCF status. We started our journey with CNCF early on. CNCF defines its graduation status as: you go through this process so that people know that, hey, this is a stable project. It's a project that the community trusts, a project where we try to do the right thing for the community. It's not just a product we want to sell you; it's what's right for the community. So we are happy that, as of October 2020, we are a graduated project. Thank you for all your support, and the community support, for making this happen. There are lots of people running it in production, and it's just been a great journey to get here.

So let's talk more about the Rook community: who's involved in it, and why exactly this is our focus. From the start, we wanted to build a project that's open source and is what the community needs. The Apache 2.0 license really gives us the ability to say: it's yours, deploy it how you need to. We have maintainers across four companies currently: the two of us from IBM, and Koor, plus a couple we don't have here at this session, Cybozu and Upbound. We have a steering committee and maintainers, everything set up according to a healthy CNCF project, where we want cross-company collaboration. We have had over 400 contributors to the GitHub project and, according to Docker Hub metrics, 280 million container downloads now. So I guess a few people are using it.

Just curious, for a quick survey: are you here to learn about Rook for the first time? A few of you? Okay, a lot of you. All right. How many have experimented with Rook? Okay, a lot. And who's deployed Rook in production? Great, a number of you. Wonderful. Anybody deployed it for longer than three years in production? A few. Okay. Awesome.

Let's get into what Ceph is, why we chose Ceph, and where that gets us. Deepika? Yeah, can you move a slide ahead? So Ceph is, again, an open source project. It's been around, I think, ten-plus years now in the open source world, being used in production clusters; it's one of the trusted products in open source. It provides block, shared file system, and object storage, all three in one place. You can check out Ceph.io for more details; it covers the case studies and what's included. We also have services like the dashboard and monitoring, everything in place for Ceph. Thanks. And maybe I'll just jump in and add a translation to Kubernetes terms. Block storage: that's where you get your RWO (ReadWriteOnce) volumes. Shared file system: that's your RWX volumes, if you really need to share them. And then object, of course: that S3 endpoint.

Thanks. And Deepika, why Ceph? Yeah, as I said, it provides an all-in-one unified solution for block, file, and object storage. Along with that, again, it has ten-plus years of being used in many production clusters, with case studies you can check out. It was first released in July 2012. My favorite is that it's being used at CERN, in the Large Hadron Collider project, and it's working quite efficiently there. We also have talks around it at an event called Cephalocon, which was co-located with KubeCon. So if you are interested in learning more, you can check out the Ceph talks from Cephalocon. Thanks, Deepika.
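To tie that translation back to Kubernetes objects: consuming Rook-provided block storage is just an ordinary PVC against a Rook StorageClass. A minimal sketch, assuming the `rook-ceph-block` StorageClass name used in the Rook example manifests:

```shell
# Request a ReadWriteOnce block volume from Ceph via a standard PVC.
# Assumes a StorageClass named "rook-ceph-block" (the name used in the
# Rook example manifests) has already been created.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes: ["ReadWriteOnce"]   # RWO block volume; use a CephFS class for RWX
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block
EOF
```

The application just mounts the PVC; whether the bytes land on Ceph RBD or CephFS is a StorageClass detail.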
Next question: does Rook support storage providers other than Ceph? Early on in the Rook project, the maintainers wanted to do what was right for the community, so we explored this question and put quite a bit of effort into it. We did explore several providers, MinIO and CockroachDB and NFS and a number of others, and we tried to create a common framework where we could bring them all together and share something, kind of like the Operator SDK for creating operators. Could we create some sort of storage operator SDK? And at the end of the day, what we found was that, well, Ceph today is the only storage provider we support. The truth of it was that the other ones either had their own community, or we just didn't have the community support come join Rook. So those projects have all split off, they've got their separate operators, and Rook's focus is really on Ceph. And since I grew up with Star Wars, I just have to say: the Force is strong with Ceph. And Yoda's helping us out here.

Okay, how stable is Rook? Just a few comments on that; we've already said some things. We're in our third year since the CNCF graduation, and it was about five years ago that we declared Rook stable for production. We have had several longtime users using it for that long, which is a great testament to how Ceph provides enterprise-quality storage. We have many upstream users. I wish we knew how many: the nature of upstream is, go use it, and we just don't know about it. We should do a survey at some point to get more of that feedback; we just haven't for a while. There are also products built around Ceph and Rook downstream that we don't have metrics for either. So anyway, it's out there, it's in production, and people are using it.

I just wanted to give one case study. I was chatting with someone from the Rook community who has been using Rook for at least the past five years. He's working with the National Research Platform, based on National Science Foundation (NSF) funding. He said: yep, I've got a cluster with three petabytes and almost 240 OSDs, for those familiar with Ceph. It's a pretty big cluster, and bigger clusters do exist; six petabytes of storage is not the limit by any means. It's just an example to show that, yes, you can deploy large storage platforms with this. And I asked him, hey, what can you say about Rook for future users? I appreciated his quotes here: Rook significantly simplifies our persistent storage needs in Kubernetes by automating Ceph, essentially. And: with Rook, adding and using a new Ceph cluster requires almost no effort and becomes a trivial task. That really comes back to the Kubernetes model of desired state. You tell Kubernetes what you want it to look like, and then Rook makes it happen with its operator. You can codify it, put it in your Helm charts, your YAML, and just make it happen. And if you need to deploy multiple clusters, you can replicate it very easily.

On to the next: how do you install Rook? Yeah, first of all, the Rook operator and the Ceph cluster do have Helm charts. We also have example manifests, especially for the more common and complex configurations. But really, going to the Rook website, rook.io, and clicking Get Started is the easiest way to get in on the ground floor with Rook. Last year, Travis and I gave a demonstration of installing Rook on a multi-node AWS cluster, and it took about 12 minutes. The link isn't great to read out here, but it is in the PDF slides for afterwards.
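For reference, the Helm route looks roughly like this, a sketch based on the charts published at charts.rook.io (check the docs for current chart values):

```shell
# Install the Rook operator from the published Helm charts.
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph \
  rook-ceph rook-release/rook-ceph

# Then the cluster chart, which creates a CephCluster CR with default values.
helm install --namespace rook-ceph \
  rook-ceph-cluster rook-release/rook-ceph-cluster
```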
All right. And in what environments can Rook be installed? Yeah, in short: anywhere Kubernetes runs. This can be in the cloud; this can be on-premises, on virtual hardware or bare metal. And the underlying storage could be disks attached to my nodes; it could also be PVCs. Recently, we've put in support for loopback devices as well, for testing. And the key point to drive home here is that Rook really helps enable this cross-cloud support.

All right, thank you. So what should you know about the architecture? We just want to provide a brief overview; we don't have time to get into a lot of details here. It's important to note that Rook, as the operator, really owns the management of the storage in the cluster, the management of Ceph. At the next layer, we have the CSI driver, the Ceph CSI driver, which provisions and mounts the storage to your user application pods. Just as you may be familiar with from other CSI drivers, that's the way Kubernetes defines how applications plug in their storage volumes. The CSI driver can actually even be used independently of Rook, it's just not as integrated; Rook makes all of this integrated. And then Ceph provides the data layer. Again, Ceph already existed, it just wasn't built for Kubernetes initially, but Rook manages it for you, so you don't have to worry about all the details of getting Ceph going. Ceph provides that hardened data layer.

Just a brief overview of what it looks like when you have Rook deployed and running. This is a view of a three-node system, with pods color-coded by the three layers from the previous slide. The blue pods are what we call the Rook pods: the operator, which is the brains of Rook deploying everything, and an optional discovery component that helps find devices. The green pods are the CSI driver: there's a provisioner, and there are the plugins; Ceph RBD is for those RWO volumes, CephFS for RWX. All the red pods are the various Ceph components. Ceph has a number of daemons: the Ceph mons provide a quorum and, kind of like etcd, they are the brains of the system; the OSDs manage individual disks and store the data on disk. So there are lots of components working together, and if you had to deploy them all individually with Ceph, it's a lot more manual work, but Rook takes care of that for you. That's just a brief look; we have more slides at the end, or we can talk offline about more of the architecture if you're interested.

So Alexander, how can we monitor Ceph once it's up and running? Well, who's heard about Prometheus? Okay. Starting off with Ceph itself: Ceph has a dashboard built in, if anyone wants to look at it. But I think the more important point, since I asked about Prometheus (oh, the next slide, yeah, there we go): Prometheus metrics. You have Prometheus metrics, and what else might you want to hear? There are pre-prepared Grafana dashboards. The metrics alerts are also available from the Ceph project, and you can even just toggle a flag and they will be automatically deployed in your cluster; you just need the Prometheus operator for that. We try to make it as convenient as possible. So, well: Prometheus metrics. Yeah. And I'll just add that all these metrics come from the Ceph project. Ceph has been building in these metrics since early in the project, and they're always looking to add more and make sure we can keep track of what's happening, so you know it's healthy.
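That flag Alexander mentioned is just a field on the CephCluster resource. A sketch, assuming the Prometheus operator is already installed (Rook will create the monitoring resources for you):

```shell
# Enable Ceph's Prometheus metrics via the CephCluster CR.
# Requires the Prometheus operator, since Rook creates a ServiceMonitor.
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
  -p '{"spec":{"monitoring":{"enabled":true}}}'
```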
All right. How can I troubleshoot Rook clusters? Deepika? So now comes the question: Ceph is hard at times, and Rook tries to make life easier there. As stated, we have a troubleshooting guide available on the Rook website. There we discuss scenarios where there are problems at the CSI layer, or problems at the Ceph layer. Most of the common scenarios are documented on our website. And you can also reach out to us on the Slack channel, which we'll cover at the end; we are available, and we are there to help you with troubleshooting Rook in general.

Apart from that, there is an interesting thing we worked on: there is kubectl plugin support through krew, and we added a rook-ceph plugin to it. If you install the plugin, you can do everything you can do with the Ceph layer: any Ceph commands, you can run them. You can check the Ceph status from it, you can check the Rook status from it, you can check the status of the CRDs, everything. And apart from that, you can also do maintenance operations. If you think Ceph is hard, we are trying to make it easier with Rook by providing advanced troubleshooting for Ceph as one-line commands through this plugin. So if you run into any scenario and want a simplified solution, let us know; we are still actively developing this plugin.

Thank you, Deepika. So how safe is my data in Ceph? You put your data there and, Blaine, I think you have this? Very. Very safe. Yeah, Ceph is designed to be more consistent than available. This means that if the data is not safe, Ceph will sometimes stop working rather than risk it, but that is also one of its greatest strengths. All of your data is chopped up into shards and then spread across your partitions, your racks, your nodes, your disks, so that any single failure, or even two failures, isn't going to lose your data. The amount of replication you have is configurable, and over the 12-plus years Ceph has been available, it has proven highly durable. Even in really extreme disasters with Rook, we've seen users recover, with manual intervention. And for Rook, again, we're trying to add some of these more complicated recovery steps so that they are fully automated. In the worst case, you still need to manually run a command, but most things after that are done automatically by a script that runs through the krew plugin. Yeah, very rare scenarios, but yes, we want to make sure it just works.

All right, what Rook features are we most excited about? We've each chosen a feature to talk about, so we'll go through these slides now. I think, Blaine, you're first. I'm here to evangelize COSI, which is the Container Object Storage Interface. This is effectively the Container Storage Interface, but for object storage. It allows pods to request and access buckets, or blob storage if you're an Azure fan, and aims to make object storage as cloud-agnostic as block and file have been for so long. The alpha release of COSI was in Kubernetes 1.25, and we're continuing to make progress on the beta designs. In Ceph, we have a COSI driver, and we're working on adding that to Rook for our v1.12 release. One quick thing to add: I mentioned the object bucket claims in the beginning, and this is the better version. Yeah.
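For a feel of the direction, a bucket request in the COSI alpha API looks roughly like this. This is the v1alpha1 shape, so treat the exact fields as subject to change as COSI moves toward beta, and the BucketClass name here is hypothetical:

```shell
# Sketch of a COSI bucket request (objectstorage.k8s.io v1alpha1, alpha API).
# "sample-bucket-class" is a hypothetical BucketClass an admin would create.
kubectl apply -f - <<EOF
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: my-cosi-bucket
spec:
  bucketClassName: sample-bucket-class
  protocols: ["S3"]
EOF
```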
All righty, Deepika. Yeah, Alexander already covered some scenarios about disaster recovery and what happens in case of failure. But for me, it's the krew plugin. It's simple, and it's fundamentally solving the problem of maybe not having the expertise, the Ceph expertise, when you are running a Ceph cluster. Rook is simplifying the installation, maintenance, and upgrade aspects of a Ceph cluster, but what about the rare cases where we get stuck in some problem with the Ceph daemons? Not everybody is familiar with them at first glance. So we are automating solutions. For example, there is a component in Ceph called the Ceph monitors, which do the talking and maintain the cluster state. It's important for them to be in quorum, and if even two of them are out, the Ceph cluster is drastically affected. So, how do you restore them when just one good one is available? We have a one-line command for it. Earlier, we used to have to do manual intervention to bring the monitors back. Those complexities with Ceph, we are trying to simplify with the krew plugin. We even have a debug pod, which has all the debugging tools that we use with Ceph available. It will spin up a debug pod with the monitor debugging tool; you do the magic with the debug tool, and then after the debugging you can bring back the normal pod, and everything runs smoothly. You do not have to manually go and debug things. Along with checking status, any kind of automation around Ceph, we can write our own code around it and simplify it for Rook. So for me, it's the krew plugin: it's simple, and it's good for Ceph.

Thanks, Deepika. There's something else about the krew plugin: you might ask, why do we need a krew plugin if we have an operator? And the pattern we really found is that the operator is really good at desired-state operations, at "make this happen", but there are just some operations where you need to do something one time, some maintenance operation, and the krew plugin is really where those maintenance operations make sense. Sometimes for these operations we even tell the operator to stop, let us do the one-time operation, and then start up again.
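To give a flavor of what that looks like in practice, a few commands from the kubectl-rook-ceph plugin; the mon and deployment names here are illustrative:

```shell
# Install the Rook plugin through krew, the kubectl plugin manager.
kubectl krew install rook-ceph

# Pass any Ceph command through to the running cluster.
kubectl rook-ceph ceph status

# Restore mon quorum from the one remaining good mon ("a" is illustrative).
kubectl rook-ceph mons restore-quorum a

# Start a debug pod for a daemon, with the original pod scaled down.
kubectl rook-ceph debug start rook-ceph-osd-0
```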
Okay, Alexander. More encryption is good; it's that simple. Making the OSDs, which store the data in the end, encrypted as well, and having that happen automatically. More encryption: keeping your data safe and secure.

Yeah, the one that came to mind for me was really the ability to recover from disaster. We've already talked about it some. This picture is from a data center fire in France, I believe a couple of years ago, that affected millions of websites, something like that. Was anybody affected by this fire? Probably some of you, yes. Not a good experience. So, Ceph is fundamentally designed to run and span multiple data centers, so that you don't have to depend on a single data center. You can spread your data, replicated, across multiple data centers, and you can design it so that even if a whole data center is lost, you survive the outage. At a simpler layer, Ceph is designed to keep your data available through the loss of a node or individual disks. To some degree, Ceph can keep your data available and online so that you won't even notice something is down, because of those multiple replicas. And then, kind of flipping the title around, instead of recovery from disaster scenarios, there are the DR features that Ceph supports. This is the ability to mirror your data across clusters. So if you have clusters across geographical areas, typically you'll want to say: I want to mirror this data completely over to another cluster, and even be able to have applications that are smart enough to fail over in those disaster scenarios where a whole data center goes down, with the live data ready to go in that configuration. Each of those mirroring technologies is a whole topic to dive into that we don't have time for today, but they are there.

So what if I have a Ceph cluster deployed outside of Kubernetes? Can I connect to it from Kubernetes? Well, yes, you can. If you have an existing Ceph cluster, Rook can basically connect to it and take care of setting up the CSI driver. There are different modes of accessing an external cluster from Rook's perspective; you can even, for example, start running certain components of the Ceph cluster in your Kubernetes cluster. And for the most part, it's as simple as that: you give the operator the credentials for the other cluster and tell it where it is, and, at least for the CSI driver configuration, that's taken care of. If you have RGW in the other cluster as well, that happens automatically too, with regard to object bucket claims and, in the future, COSI. And you can think about it the other way around as well: if you have a Rook Ceph cluster that you want to share with multiple Kubernetes clusters, you can do that too. Maybe a bit obvious to some people, but because one Kubernetes cluster can't directly talk to the network of another Kubernetes cluster, you need to interconnect them. There are several ways: either you have a CNI which allows you to do that (Cilium, for example, with its cluster mesh), or there are other projects that mainly focus on bridging two clusters that may even have overlapping networks. Submariner is another one. For those cases, we're looking more and more into them, Submariner for example, to make that easy as well. Okay, yeah. Just regarding that: one central Rook Ceph cluster is how some people do it. They have one central Rook Ceph cluster for storage because, again, the Kubernetes abstraction makes it quite easy to run Ceph: a mon fails, just start up a new one. It's that easy for something like Kubernetes, whereas in a traditional Ceph deployment you don't necessarily have additional nodes where you can just install a component. You can do it, but it would usually involve manual work, and that's again where the Kubernetes abstraction layer, with an operator on top, shines.

All right, so can I provision a bucket with an S3 endpoint? Yeah, I feel like this is something we've mentioned and alluded to before. The current implementation we have for creating a bucket and getting access to it in Rook is object bucket claims, and this is a similar pattern to PVCs: the bucket gets created when you request it, and you get access via a secret. This really ended up being kind of a prototype, still in an alpha API, for what became COSI; it has sort of morphed into COSI, which is now driven by a Kubernetes special interest group. And I'm excited that COSI is finally, finally almost really here.
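For completeness, here's roughly what today's object bucket claim pattern looks like; a sketch, assuming the `rook-ceph-bucket` StorageClass name from the Rook object store examples:

```shell
# Request an S3 bucket with today's ObjectBucketClaim API.
# Assumes an object-store StorageClass named "rook-ceph-bucket" (the name
# used in the Rook examples). Rook creates the bucket plus a ConfigMap and
# Secret of the same name holding the endpoint and credentials.
kubectl apply -f - <<EOF
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket
spec:
  generateBucketName: my-bucket
  storageClassName: rook-ceph-bucket
EOF
```

The application then reads the S3 endpoint and keys from that ConfigMap and Secret, much as it would read a PVC mount path.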
Awesome. Can Rook be configured for clients to access storage outside the cluster? In other words, you have Rook running, and you have clients that are not part of this Kubernetes cluster. Alex? Yes. You need, again, connectivity from the outside clients to the cluster. So: host networking mode, or Multus, which is more and more the preference, as it can be pretty specific about which network interfaces or network ranges you want added to the Ceph component containers. As with most things, you need to be able to reach it to access it. And just to elaborate a bit more on what you would need to make accessible, depending on the storage type: for block storage, for example, you would need to expose the monitors and the OSDs as well, so it's not just one component. You would have a gateway to access certain types of storage; the object storage gateway is a different case, because for that you would just expose it via an Ingress or NodePort service, since it's an S3 API, basically HTTP or HTTPS, whatever you make it. But for file system storage, for example, the client also needs direct access to the OSDs, the mons, and even another component, the MDS, the metadata server. But with host networking as the quick way to do it, and Multus as the more specific way, you can achieve that.

All right, thanks. I think we're down to the last couple of questions here, real quick, and then hopefully we'll have time for your questions. How does Rook keep data available during Kubernetes upgrades? Just to skim over this: Rook manages pod disruption budgets so that even during a Kubernetes upgrade, where nodes are taken offline, we make sure the nodes are managed so that Ceph stays completely available; your data is not just durable but available during these upgrades.

How often does Rook have releases? We try to have minor releases on a similar cadence to Kubernetes, about every four months. We just had 1.11 in March, and potentially 1.12 will be in July. Whenever there's a need for a patch, we try to get those out as soon as we can, bi-weekly at least, just to have that cadence, and when there's a critical need, we always get patches out as soon as we can.

Fun question: where did the Rook name originate? Castle was the original project name, and a castle is a secure place, where we wanted to secure data. That's where the theme came from, with the knights in shining armor protecting the castle, and of course the rook is the chess piece representing the castle. Just a little fun background there. Is Rook your next move?

So now we'd like to open it up for your questions. I think we have a little time. In orange? Thanks. As I'm totally new to Rook, or any other solution like Rook and Ceph, I have multiple questions, really brief. NFS support, first of all. Then, as I've seen that the architecture is pretty distributed, there are a lot of components around, so the resource footprint would be nice to know, because it looks like many components could mean many resources; it depends on whether the storage is mission-critical for your application, your project, or your company, or not. And, taking into account that I don't know anything about storage in Kubernetes, why should I choose Rook and Ceph over other solutions, maybe MinIO or something like that? Blaine, do you want to take NFS first? Yeah, I can take the NFS one, definitely. Rook with Ceph does have NFS support, at least at a cursory level, and I'm actually on a team working with the Ceph folks to make the NFS support more enterprise-standard as well. Okay.
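As a pointer for that first question, NFS in Rook is its own small custom resource. A minimal sketch (Rook runs NFS-Ganesha server pods that export Ceph storage; field names from the ceph.rook.io/v1 API):

```shell
# Sketch of a CephNFS resource: Rook deploys NFS-Ganesha servers
# that export Ceph-backed storage to NFS clients.
kubectl apply -f - <<EOF
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs
  namespace: rook-ceph
spec:
  server:
    active: 1   # number of active NFS-Ganesha servers
EOF
```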
For another question: why should people choose Rook over something else? I'd say the biggest reasons I hear from people are that you have block, object, and file all in the same storage platform, instead of getting a different solution from different platforms, and that it's fully open source. You know, Ceph belongs to the Linux Foundation, Rook is with the CNCF. Fully open source, open community; we're open to contributions all the time.

And then I think you had one more question. Which one? Resource footprint. Yeah, Ceph definitely requires some resources. For example, for each disk that Ceph needs to manage, you'd want, say, four gigs of memory to go along with it. It would take some time to dive into exactly how many resources, but yes, it does require non-trivial resources to run storage.

All right, here in front. So I also have three questions. One is about upgrading: what do you recommend, always upgrade Rook first and then Kubernetes after? Or take a snapshot before, to be better safe than sorry, or is that possible? And the other one is about encryption: as I know, dm-crypt is only for block devices; is it also possible for object storage? Yeah. Who wants it? Do you want to handle the dm-crypt one? Yeah. Speaking for the OSDs, as that's where the data is stored: even for object storage, from the architecture of how Ceph works, the object storage daemons, the RGWs, also just store their data in the OSDs. The OSDs need a block device, be it a loop device, a disk, or just a partition; that's basically where the data is stored in the end. So that's what the encryption covers, and that's where dm-crypt comes into play.

Did you have anything regarding upgrading? More like a caution, for example: last time, what was it again, the pod security policies went away in, like, 1.24 or 1.25, and if you didn't upgrade Rook before that, or at least disable the option for pod security policies in the chart, you wouldn't be able to upgrade any further. But that's more of a "how well does the upgrade go" issue. Well, as far as keeping your data safe during upgrades, and whether you should be worried about it and take snapshots first: you can do that. I mean, snapshots are probably a good idea to take periodically anyway, just to sleep better at night, right? You always want to keep your data as safe as possible, and it is safer with backups; having a backup is always right. But whether to do it before upgrades, or to worry about it, I'm not sure I've heard that as a significant reason to need snapshots first. The upgrades have been reliable from what we've seen, so I'd say that's a decision you can make, if you feel like you need snapshots. Good question.
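And for those periodic snapshots, Ceph CSI plugs into the standard Kubernetes snapshot API. A sketch, assuming the external snapshot controller and CRDs are installed, and using the `csi-rbdplugin-snapclass` name from the Rook examples:

```shell
# Snapshot an RBD-backed PVC with the standard VolumeSnapshot API.
# Assumes the snapshot CRDs/controller are installed and a
# VolumeSnapshotClass named "csi-rbdplugin-snapclass" exists (the name
# used in the Rook examples).
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-app-data-snap
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: my-app-data   # illustrative PVC name
EOF
```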
Others? Sure, right here. And what about performance? Oh, back here: performance compared to other things. So, Ceph is software-defined storage over the network, and it's distributed. It performs well when there are a lot of clients communicating with all of the daemons spread across the cluster, so you get that nice distributed performance. As far as how much performance you get, it's going to depend: are you backing the cluster with SSDs or NVMe, how good is your network? Since it is software-defined storage, you're not going to get the same local-volume access as if you were writing directly to a disk, because there is a layer in between. But for people who need that software-defined storage and the consistency and reliability, well, everybody would always love more performance, right? There is a performance cost, but people generally find the performance acceptable for what they need. Databases are where I'd say it's more of a challenge, where the higher performance needs are, and databases sometimes have replication at their layer too, so in those cases I'd usually tell people: look at the replication at the database layer, and don't use Ceph there, just use a local disk.

Just on performance, maybe Deepika, you have some insights from the RBD side, for block storage; that's one of the most common use cases, people running databases on Ceph. Yeah, I think so. You have to also consider that there is replication and consistency being worked out in Ceph in general as well. Still, I would say, if you are using NVMes... I don't recall the numbers; Alexander, do you recall? Generally it was performant enough; for HDDs I was able to see 100 MB/s with three replicas. But there are numbers you can look at. You can use a tool called RADOS bench; go to the Ceph website if you want to see how your disks perform. And there is fio: you can run fio on your regular disk, and then also run fio with the RBD extension, and compare how it performs. It's completely documented in Ceph, and those numbers are also published on the Ceph website, so you can check that out.

Well, thanks. I know there's a lot more there; we'd be happy to talk to you after, so come to the Rook booth in the pavilion. But I think we'd better stop there officially. Thanks, everyone!