Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to another Data Services Office Hours. I am Chris Short, host of most of the Red Hat live streaming, and I am joined by the one and only Chris Blum. Chris, how are things on your side of the world?

It is great. It's great to be back on the show.

I was going to say, yeah, you haven't been on in a while.

I haven't been on in a while, yeah. This is, I think, the first time back here in Berlin, and my internet is going to be fine for the next hour. For those that weren't aware, Chris had to move because of bad internet at his home, so he moved himself to an island in the Atlantic where fiber passes through, so he could do his job.

Yeah, that was almost the only reason to move there.

Well, you know, what do you expect from an island in the middle of the Atlantic: fish, and good fiber. Yeah, definitely.

So I looked at a calendar and I found out something really weird: we've been talking about ODF and OCS and storage and data services for a year now. Wow. And so I thought it's going to be time to actually reflect a little bit on our voyage. When we started this channel, we wanted to teach Chris Short about storage. Have we succeeded? What does he really know? We'd probably have to test that in another quiz show.

Oh boy.

Yeah. So what I want to do today is, I prepared a little video that is a remix of what we did a year ago, and my goal is to reflect on that and look at what's possible today. The video is about five minutes in total; we're going to stop at certain intervals and then look back.
We're going to see that, because Chris was sometimes a little bit annoyed. And it happens. It was a difficult show, because you brought some hardware, your home cluster, that was by then not actually a supported platform, and to make this worse, we also picked an OCS version that wasn't released yet.

So I've learned with Red Hat software that there's a certain time when I should not use it, right? Like, I've been trying to stand up single-node clusters and get virtualization working, and that's very much tech preview, and it's very much not working on my box. So it's one of those things where there is that happy medium of tech preview and working for me, I feel like, sometimes.

Yeah, and the thing is that we had to use it, else your installation wouldn't have worked at all. But then on the other hand we were using pre-release bits, so that's not fun either. So it was like, yeah, what do we do? And it worked after we spent a bunch of time on it.

Nice. You want me to hit play on this thing?

Yes, please do, and then we stop at the next cut.

Yeah. You know what's funny is, I said we were all set and I lied, because I had the wrong scene up when we started. So just give me two seconds, folks. See, we were very brave. Earlier I was very brave; I did something I've never done before and I said we're set. But now we are set, and the video:

This is a fresh cluster. I used the Assisted Installer to spin up a cluster here on an old Dell R720 that I bought because I need to run clusters in my house now as part of my job. So I've installed the OpenShift Container Storage operator and the Local Storage Operator. This cluster is six KVM VMs just running on a Fedora host; Fedora 33 is the hypervisor here. So it's a super unusual setup, but OpenShift should be kind of the normalizing factor across all of it, right?
Like, I should be able to install these operators and off I go. But we're having some weird issues because my disks aren't being picked up correctly. Is that the right way to explain it?

All right, so I was just saying: we were so young back then. That was back when my basement was unfinished; there are now frames and wires and plumbing starting, and all that fun stuff. So eventually I'll get a nice finished office, not a temporary guest room.

So if you were curious, this is the show from a year ago. We can post a link in the chat. One thing I want to reiterate is that this was a show on hardware that wasn't supported back then, and we did it on pre-release software. The OCS version that we used back then was 4.6, which isn't too far behind; nowadays we're on 4.8, so don't be surprised about that. The video of this installation was actually done in November 2020, so not really a year ago, but really close.

So nowadays we have 4.8, and I'm going to try to share my screen again and show you how that looks today. So this is a 4.8 cluster.

You mind zooming in a little bit, though?

So this is on vSphere, which is a fully supported platform. We also support RHV now, by the way. And one thing that was different earlier was, I told Chris: please install the OCS operator, and please install the Local Storage Operator, before we do anything else. That was like a critical step.

Exactly, at least for my infrastructure.

Yes, and since we talked about the UX, we have simplified that now. So if we install OpenShift Container Storage now...

Oh, nice.
Yeah, it will automatically select the right channel. Well, we've had that before, but what I think we didn't have back then was: it will automatically create the namespace, it will automatically mark it for monitoring, and everything that you had to do manually before, it now does by itself. And I think we had a similar user experience here when you install OCS, but what's probably new is that when this is finished, it will prompt us to create a storage cluster. And we can't really forget about doing that anymore, because even if I skip the step, if I come back to the operator later, you will see a big banner telling you: please, please do that.

There's a thing you need. You've come this far, go all the way.

Go all the way, yes. And you don't need to install or know about the Local Storage Operator beforehand; it will also prompt you when it sees that this is necessary, and we can see that in a second.

If memory serves me right, we also had some resource constraints as well.

Yes, that I had not taken into account.

But now my understanding is there are smaller install bases available for OCS, right?

Yeah, we worked on that. I mean, it's not a huge set for a regular installation, because our target is to make your storage consumption as easy as possible, right? And we don't want to come back afterwards and tell you: well, now you've done that, now please give me more.

Right, exactly. Yeah.

But for very small install bases we have added more options where we can disable features. So for example, you only have block storage, which is the most commonly requested type of storage, and that saves you a ton of resources. Or, we're also working on making the operator fit for the edge, where you only have a three-node cluster and we want to see ODF there as well, and for that we will also look at reduced resources.

Nice.

So you already see this here: we've gone further in the operator.
It's still installing, so the button is grayed out, but it's already telling us: you will need to create a storage cluster to go further. So if we're impatient and we've skipped a step, if we go back to Installed Operators, we see it here, still installing, but if I click it, it will prompt us: now create a storage cluster, don't do anything else. So you're never left wondering: well, I can't use the storage right now, where is it?

So let me just wait a little bit more. Here, there we go, the operators are running. Okay, let me try that; I don't want to mess it up. It says installed.

I saw it, I saw the green check mark on the OperatorHub screen, so you're good.

So since this is IPI, we could go the very easy route with internal mode if we have a storage class that provides us storage. This was always the case since our very first version of OCS 4, and this would work here as well. But what Chris and I did was, we wanted to do internal, attached devices. And as you can see here, it's detected that our LSO operator is not installed. So what we can do is install that now with a click of a button, and we get directly forwarded to the install page of the Local Storage Operator.

It's just clicking the buttons that are there. Always click on the blue buttons and that will be fine.

Yeah. And if you notice, folks, this is something that I like to reiterate every once in a while on air: the namespaces start with openshift-. That is a system namespace, right? You should not be mucking around in there unless you know what you're doing.

Ooh, what are you doing, changing resolutions on me?

Sorry.

So, yeah, a lot of people think: oh, well, it's here,
I might as well use it, and that's not always the case. Right? Like, we have operators to install monitoring stacks for things that are not the cluster, so we encourage you not to use the OpenShift monitoring bits for anything other than OpenShift, and then use an operator specifically to monitor all the other things. Same with storage, right? You see openshift-storage; that is not a playground for you to just do whatever. It means an operator is doing something in there, and whatever you do is probably going to get wiped out by that operator in most cases.

So before we proceed, maybe we start up the video again, because now we're at the stage where we want to install the storage cluster and we want to use the attached devices. And this will look familiar. So maybe you just play the video again and we come back.

All right.

So what we already have in this OCS 4.6.1 version is, when you create a storage cluster, maybe Chris, you want to show that quickly, you get a nice new interface. So if you've installed OCS before, this page actually looks very similar to what you had before. This would use OCS internal mode, and it would try to pick up a storage class and provision storage based on that storage class. That has worked before, natively, with AWS and a couple of other cloud providers, and VMware. And what's new now is, at the top you have that Select Mode, and there you can now select internal and external, and there we have integrated the attached-devices mode. We have integrated the magic of the Local Storage Operator, and using that, it can theoretically scan for disks and bring that all in automatically, which you previously had to do manually.

So, the problem I'm going to hit next here: like, you know, I have three available worker nodes. I'll hit Next and see what happens.
Oh. So we get to this wonderful screen where you set up your local volume set. So I'm just going to call this LVS000; three zeros is enough. Yes, select all three nodes. The disk type is SSD. Minimum node requirement is three. "Only zero nodes match the selected filters. Please adjust."

This one. Yeah, that's it. And we're back. Sorry, Chris was refilling his coffee. So what did we see there? We saw that you were trying to install it, and you might have heard this at the very end: Chris was annoyed.

You can tell sometimes when I get annoyed.

That's amazing.

Like, I'm trying to maintain my professional composure, but damn it, this thing won't work.

Exactly. So it was due to the unsupported platform, but on the other hand it was also a problem that we saw on other platforms as well, like on VMware. We also see, depending on what kind of datastore you use, sometimes it's even vSAN, that the disks don't get marked as SSD. Even in my lab, the vSAN is based on NVMe disks, the fastest you can get, but due to the communication between the Linux layer and the virtualized hardware layer, Linux thinks these are not flash disks.

Yeah. Which is unfortunate, because Ceph then also detects them as HDDs, and it doesn't do all of the modifications, like tuning a lot of the caches.

Yeah, it doesn't take advantage of the faster disk, right?

Yeah. And because we want to support the OpenShift platform itself, for the logging for example, we have so far not supported ODF on spinning disks. And now this provides the issue that in that particular version we force flash disks, but then we noticed that a lot of platforms don't properly mark flash disks as flash disks, and all the customers came back and said: well, what do I do?
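Whether Linux treats a disk as flash comes down to the kernel's per-device rotational flag, which you can inspect directly. A quick way to check (the sysfs device name below is just an example):

```shell
# List block devices with their rotational flag.
# ROTA: 1 = spinning/HDD, 0 = flash/SSD.
# Virtualized disks (virtio vd*, or VMware-backed sd*) often report 1 even
# when the backing store is NVMe, which is exactly the mismatch described above.
lsblk -d -o NAME,ROTA

# The same flag is exposed per device in sysfs, e.g. (vda is an example name):
#   cat /sys/block/vda/queue/rotational
```

This is the value Ceph and the local-volume filter consult, so if it reads 1 on flash-backed virtual disks, the SSD filter will find zero matching devices.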
We have a fix at the end of the video, which I'm going to show you briefly. It took the rest of that meeting, but then we succeeded. But it takes a lot of time, it is cumbersome: you need to go into the command line, you patch things left and right, you need to know a lot about your hardware. And that's not in the spirit of ODF. We want to make it easy, so that someone can do it on a live show.

Yeah, exactly. Someone can spin up a cluster almost immediately before the show and do it. You're teasing me a little bit.

It's a good tease. I like the fact that while we're playing the video, your cluster finally finished installing. It's good stuff with live streaming, for sure.

So what we're going to do next is, I'm going to show you how this looks nowadays.

There's a much better experience nowadays, trust me.

Yeah, let me just prepare two more things here.

Are you screen sharing? Should you be?

Sure. I'm not yet, I will in a second. Oh, actually I can start it, that way you can already look at this. So what I did while you were all looking at the video is, I actually added machine sets to my cluster. I have a little script that helps me with these kinds of things. So if we go to Compute, Machine Sets, you will see that I have a worker machine set and an OCS machine set, and in the OCS machine set I have a machine size that fits what we need for ODF, and I also apply the right node label. And a lot of people underestimate that, because they say: well, I have workers,
I can just put ODF up there as well. The problem is, sometimes in production you lose nodes, and you're not supposed to treat your nodes as pets; you're supposed to treat them as cattle, and the machine sets are supposed to help you with that. So suppose you lose one of your workers, and it's one of your ODF nodes. The problem is, the machine set will recreate a node, but the new node will not have the ODF label, so ODF doesn't know that node is supposed to be an ODF node. So that's why I always recommend creating a new machine set; it can be just the same thing as the worker machine set, but add the OCS label to it. And you see it if we scroll to it; it's obviously a little bit zoomed in.

You can zoom out one level if you want. There you go. It's something people just kind of gloss over sometimes.

Yeah. And theoretically you can put the node label in here. Oh, I think this is supposed to be where it is. So I'm saying something that I didn't do; guess I need to update my script. But please, please do that in production, so that your new nodes, whenever they're created, have the right label.

So with these new nodes, what I'm going to do is go back to my operator in the right namespace. Okay, go to Storage Cluster, and I finally actually create it. I go to attached devices, and now this looks very similar to what we saw with Chris, and in the background it sets up the LSO stuff. And the new thing now is we have the disk type selection here; that would have saved Chris Short a lot of time. We can keep it at All. And we can even say: well, what kind of disk are we expecting?
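The label in question is the one the storage operator uses to select nodes. A minimal, abbreviated sketch of such a MachineSet (the name and replica count are examples, and the usual selector and providerSpec sections are omitted for brevity):

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: ocs-worker            # example name
  namespace: openshift-machine-api
spec:
  replicas: 3
  # selector and providerSpec omitted for brevity
  template:
    spec:
      metadata:
        labels:
          # Nodes created by this MachineSet come up pre-labeled for OCS/ODF,
          # so a replacement node automatically rejoins the storage cluster.
          cluster.ocs.openshift.io/openshift-storage: ""
```

Because the label lives in the machine template rather than on an individual node, any node the MachineSet recreates after a failure carries it from the start.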
So sometimes, especially in bare-metal environments, we have nodes that, for example, have multiple types of NVMes, and we want to use one particular type. And the easiest way to do it is by selecting it by size: you select the minimum and the maximum size, and then it's going to pick exactly that. But I'm just going to leave it like this; this rule will use any disk that is at least a gigabyte in size. And it can even use partitions now. And this is all live, and it detected three nodes and six disks. Cool.

Let's go next, and it will prompt us now. So we have the DR features now: we can have a stretch cluster where we have two locations that actually hold data, and then an arbiter location, or sometimes it's called the witness, and that can be on a third site. That's just very minimal; some people even think about putting that into a public cloud. But that way you can have two sites with data. Apparently a lot of people want that. So we can have that; we don't need it right now, so, sure, I can continue. This would be tech preview; yeah, the arbiter bit, yeah.

Now it's going to prepare the nodes, and eventually they come up here.

Yeah, so the operator's in the background doing its work, right?
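While the nodes are being prepared: the disk filter step above corresponds roughly to a LocalVolumeSet resource with a device inclusion spec. A hedged sketch (the names and size bounds are examples, and the exact CRD fields may differ between versions):

```yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: local-block           # example name
  namespace: openshift-local-storage
spec:
  storageClassName: localblock
  volumeMode: Block
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: Exists
  deviceInclusionSpec:
    deviceTypes:
      - disk
      - part                  # partitions can be consumed as well
    # The size bounds are the "select by size" filter from the wizard:
    minSize: 1Gi
    # maxSize: 2Ti
```

Narrowing minSize/maxSize to the capacity of one NVMe model is what lets you pick a particular device type per node, as described above.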
Finding disks, making them ready, that kind of thing.

Exactly. And yeah, it means if we had waited about a year, we could have saved a lot of time.

We could have, but then we wouldn't have been able to kick the tires. And, you know, the frustration: when we started with OpenShift TV, I remember Chris Short was explaining to me how the show works, and the surprising thing for me was, we were allowed to fail.

Yes. People like seeing failure on the show, on this channel. People have told us straight up that it teaches them how to troubleshoot, which is hard to get experience in, and it shows people how we would troubleshoot things when stuff fails, believe it or not.

And that is why I like doing live things, because I know things are going to fail eventually. Right now it's all going well. We can click on Next. This is now new; all of this page is new. We can now do cluster-wide encryption and also storage-class encryption.

That's awesome.

So we can do either, or both: we can encrypt at rest the disks that are used by ODF, and we can also, in addition, encrypt volumes inside of ODF, so that if someone were able to get access to them, they wouldn't be able to use them.

If someone popped your cluster somehow, you'd be all right.

Yeah. And we can use an external key management system for that. Currently we only support...

Yeah, I was just about to ask: what do you support there?

HashiCorp Vault at this moment.

That's a good one.

We are thinking about supporting more. But the basic stuff, the at-rest encryption and the storage-class encryption, is just this checkbox.

That's awesome. Just the checkbox.

So you don't need to set up any difficult stuff; it's just this checkbox, and as long as it's ticked you're very safe. And the other thing is networking.
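In the StorageCluster resource, those checkboxes map to an encryption section along these lines (reconstructed from memory of the 4.8-era CRD; treat the exact field names as assumptions):

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  encryption:
    clusterWide: true      # encrypt the OSD disks at rest
    storageClass: true     # additionally offer an encrypted storage class for PVs
    kms:
      enable: true         # keys managed by an external KMS (HashiCorp Vault);
                           # connection details live in a separate ConfigMap/Secret
```

The two flags are independent, matching the "either, or both" choice described above.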
Yeah, so we've always had the default networking type, where those boxes use the default network. But now we've seen that sometimes people don't really have a fast network; sometimes they're limited by a one-gig or ten-gig connection, and when you're really using storage, that can actually saturate the network, with severe downsides to the availability and everything else.

Yeah, wow, everything is fighting for the network.

So what we also have now is Multus, and with Multus we can separate the networks: the public network, between the pods that use the storage and the storage itself, and the cluster network, which is an internal network just used for replication and recovery traffic. And so we can use up to two dedicated NICs to route that traffic over.

Do you need to turn on do-not-disturb real quick?

Yeah, let me do that, might as well. I feel like there's a conversation going on there. And we're back.

So, what I've seen, if we talk about VMware in particular: we've had a customer that actually had very low bandwidth on their regular VMware environment, but they were able to add a VMware switch that could only communicate between the VMs, and it's super fast.

Oh yeah, lightning fast.

Yeah. And so they were able to do all the storage communication over that network and get really good storage performance that wouldn't take away the regular network performance.

But yeah, that's enough for now. We get a little review: we see how much total CPU and memory we have. We don't have zones here because I didn't add those labels. But we are enabling encryption and we're using the default network. Cool. And that's pretty much it. We can look at the pods; we will see that pods will spawn up and be created. We will also see PVCs being created eventually.

Are they being stored on local storage?

In here, it just takes a little bit longer.
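The Multus split shows up in the StorageCluster as a network section that references NetworkAttachmentDefinitions. A rough sketch (the attachment names are hypothetical, and the field layout is from memory of the 4.8-era CRD, so treat it as an assumption):

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  network:
    provider: multus
    selectors:
      # "public" carries client-to-storage traffic; "cluster" carries internal
      # replication/recovery traffic. Each points at a NetworkAttachmentDefinition
      # (namespace/name) that you create separately for the dedicated NICs.
      public: openshift-storage/ocs-public-net
      cluster: openshift-storage/ocs-cluster-net
```

Pinning the cluster network to its own NIC is what keeps replication traffic from competing with the rest of the cluster, as in the VMware example above.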
Yeah, okay, cool. Wasn't sure. Oh, you can also keep looking at the pods; watching the pods is better than watching nothing.

I like that. Earlier I had an SA who was telling me: hey, my OCS operator, as soon as I install something, it's flipping into Not Ready. And I told them, well, it's always been that way, but I think not a lot of people notice it. The OCS operator has a rather funny readiness switch, which is actually looking for a file inside of the container, and that file is being treated as a little busy switch. So as long as the file is not there, you can usually assume that the operator is working on something. So it's totally fine if the OCS operator goes into Not Ready while it's actually doing something; there's nothing wrong with the OCS operator doing stuff.

That leaves us the last part of the video.

Yeah, just for fun. We've now seen the happy path.

That's the path that makes Chris happy. Yay.

Hopefully both Chrises.

Yes, Chris squared.

So, I've also added a little part of last year's video where we're trying to fix the problem. So what we did back then was: using a MachineConfig, we wrote a udev rule which would relabel the local devices to SSD. So we would just say: well, these disks are actually SSDs, which they were. And after about, I think, 45 minutes, we got that to work. And I shortened it down to a minute.

Okay, so 45 minutes of work in one minute of video. It should be very chipmunky. Yeah. Okay, cool. Let's fire that up. You cool with that?

Sure.

Right, so, this issue could crop up for folks that are setting up clusters in test or dev or, you know, wherever. So what we'll try today, unless there are more questions: we're going to write a udev rule with the help of one of my colleagues. He's tried that before; we haven't tried it yet. So this is similar to the cooking shows.
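For reference, the fix was along these lines: a MachineConfig that drops a udev rule onto the nodes, forcing the rotational flag to 0 so the disks pass the SSD filter. This is a reconstruction, not the exact file from the show; the device match (`vd[a-z]` for virtio disks) and the file name are assumptions:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-force-nonrotational
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/udev/rules.d/99-force-nonrotational.rules
          mode: 420   # octal 0644
          contents:
            # URL-encoded form of:
            #   ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/rotational}="0"
            # i.e. mark virtio disks as non-rotational (flash) at device add/change.
            source: data:,ACTION%3D%3D%22add%7Cchange%22%2C%20KERNEL%3D%3D%22vd%5Ba-z%5D%22%2C%20ATTR%7Bqueue%2Frotational%7D%3D%220%22
```

The Machine Config Operator rolls this file out to every worker and reboots them, after which `/sys/block/*/queue/rotational` reads 0 for the matched devices.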
Just during the office hours, we're going to try to apply a udev rule that will relabel Chris's disks as SSDs, and then they should be able to be used here.

Yes. Cool. Ta-da, we got it working. Ultimately it was, you know, a good experience for me, to learn about how storage works within Kubernetes, how storage works within OCS/ODF. And maybe it kind of opened Chris's eyes to the idea of: oh, we could do better.

Well, I always try to do better, but it's always different when you're confronted with actual real-world problems first-hand. Like, for example, the flash disks: it makes sense that we only support flash disks, but in some environments it's not realistic to expect that disks are always marked as flash, and you need some kind of way to circumvent that, to detect disks that might be marked as HDDs but are really SSDs, and mark them as flash. And now we've got a solution for that. I mean, that's not the only improvement we made here, but as you've hopefully seen, we've made the install experience a lot easier, especially for people that don't want to read the documentation.

What are you saying, that I don't read the docs? That's so true.

Well, in a perfect world, you shouldn't need to read the documentation.

In a perfect world, you're right. But no, there were a lot of docs I probably should have read to get a better understanding of how OCS did things before we did that show. You're absolutely right.

But it's not only you. The impression of a product always comes from your first couple of minutes with it; you get a feel for the product, and if you run into issues like we did back last year, it's possible you just file OCS/ODF away and say: well, it just didn't work.

Yeah. And, you know, to an extent I'm in that camp, because for me and my needs, ODF is going to be overkill, right? Like, I'm not going to need all that.
It's just my local cluster here in the house, right? It's a lot. I don't have multiple servers, I have one; it's all virtualized, etc. Right? Like, no VMware, it's all just straight-up KVM, you know. So it's a very unique setup for me specifically, but now, with things like the Local Storage Operator, it's way easier to do this compared to when we started.

Yeah. And all in all, we added a couple of features like encryption, and we also added a lot of the DR and backup capabilities. But most importantly, we also made the user experience of using the product easier. We listened to our customers, and we hope that the install experience is now at a level where everyone could do it, even without looking at the documentation.

Great. Yes, that would be perfect. There are so many different configurations for storage, though, right? Do you feel like all of those situations, or the 80%-rule of situations, are met here? I mean, have we advanced to that point in the past year? Have you been brought in on some oddball things?

It's always interesting. I'm just going to start the screen share now. So this is installed; you've seen that. It's just going to show you the PVCs in real time. You see how we're now using those disks: I actually gave each VM two 100-gig disks, it detected all of those, and we now have all of them here, available. And NooBaa is always the first user of our new storage, using the RBD class.

One new thing in 4.8 is, you used to have the dashboard for storage up here, but now it's over there; we see it down here under Storage, Overview.

So, man, it's a lot more in-depth, right? Like, it shows you more. There's so much more on the screen now.

Yeah. So they moved it down here. All right, NooBaa is still installing currently, but this is going to be fine eventually. This installation was okay,
so I can stop the screen share. Cool.

Yeah, so we try to focus on the regular experience of a user, and, that outside case, I'll freely admit that...

No, no, no. In our SA community we've actually had a lot of people that had a very similar setup to yours and wanted to run ODF on it, because that's the hardware they have around, and they need to demo it, and they wanted to install it there and play around with it. So, totally fine. But we have our internal mailing list where our SAs come back with customer questions, and what I've seen is that over the last year, the number of exotic questions, like "I found this thing here and we are trying to attach it, and we want to stretch it over multiple sites, and it's going to fail over; can we support that?", has increased, whereas regular requests, where people are asking easy questions, have gone down to pretty much zero.

That's nice. That has to be a fulfilling sensation, right? Like, now I'm only out here in the wild; the city is fine behind me. You've built the city, and now you're kind of like: hey, everybody out in the wild, how are you doing? You want to come in? Come on in.

Well, I don't think of it that way, because I think: well, why shouldn't we be able to support these wild ideas?
And sometimes it's nice to have these look-back moments, just like this show, where we look back at the last year and compare. But yeah, I don't lean back and say: well, we're done now, and these are crazy ideas. Because we can also see how our customers see a lot of features from our competitors and start demanding more things. And the maturity of our customers in the Kubernetes and OpenShift world also increases: a year ago, people were standing up their OpenShift clusters and starting to put production systems into them. But now they have their production systems, and they start asking these difficult questions. What about backups? What about resilience? What about highly available setups? And so far, Kubernetes just said: no problem, you get high availability by having multiple pods, and when one fails, it's going to spin back up. And that's nice if you can build your architecture like that. But some bigger customers still need multiple sites, and they need to be able to fail over between them. And even though a lot of people claim they can do it, it's actually a very hard problem.

Yeah. No, it really is.
Um It's a hard problem goodness at it better And um following up on the ux topic for the user experience um We have um htm, right the advanced cluster manager and this is our answer of managing multiple clusters yeah, just one um platform that it can manage multiple open shift clusters and can install them configure them and even deploy your workloads and We're going to put our dr story right in there so that dr Is just as simple as our odf install right now um that A chris short can do it without reading the documentation eventually or even knowing a lot about storage Or even that yes Because I mean let's face it right a lot of times our customers are in that position where it's like You're in charge of open shift and like their skill sets Don't necessarily include storage or somewhat limited when it comes to storage. So You know a lot of the the linux admins of of review olden days are now open shift admins and Making that experience with storage is Nice as you have I think is Very very beneficial to those folks Yeah, and we try not to be limited by this this label storage And that's also why we renamed the product where the open shift data foundation And on top of that foundation, we want to place data services that help you with your daily work and Chris has already met A lot of my colleagues that work with data services that Do very cool demos With smart city and with x-rays that automatically detect illnesses on x-rays And we we have a couple of more demos coming coming up, but That is our goal that we're not really needing to understand the storage We don't need to necessarily talk about the IOPS we can We don't need to totally feel yeah But we have this trusted foundation Where we know this is going to scale and We just released a new version of rhcs. 
Red Hat Ceph Storage, our standalone storage product that doesn't really have anything to do with OpenShift, and with that, we simplified installation there as well. But we want to continue to invest in that, because that brings us these scale capabilities. For example, we always do our fancy testing with the object storage in there: one of my colleagues does the billion-object tests, and I think the last one was 10 billion objects he put into that object storage. And then we do the performance tests, and we can see that without doing any modifications in the storage, we size it for the target number of objects, but without any further additions, the performance is linear. You add more objects and you never hit a spot where it just breaks down. You don't have any of those caching artifacts where you get the first thousand objects fine and then it drops; we don't get that. So you can build that to a very large scale, and that's the power of this data foundation: you can just build it and put whatever you want on it.

Which is totally awesome. Any questions from the audience? Now's your chance; if you haven't asked already, please do. Any storage questions for Chris, or any OpenShift questions for myself, feel free to ask. Or if you've been around for a year, yeah, if you've been watching us for a year, just say hello. That would be cool.

I remember this. Boy, Chris was mad.

I am okay with admitting sometimes that I get frustrated. I can get frustrated sometimes too easily, right? Like, I've been working on it, trust me, folks.

So far you've never given up, and that's the most important thing.

Right. Like, I will never throw my hands up and be like: well, this is a bridge too far. Because it's just technology; we can figure this out. I say it's just technology, but it was built by humans, and there's no reason why us humans can't figure it out.
That's basically kind of my whole modus operandi: if it's built by humans, then I as a human should be able to figure this out.

Well, if there are no questions, then one of the cool features that I might want to highlight is: we have storage classes, and we actually get a bunch of storage classes when you install ODF. So let me show that again. These are our storage classes. This one we get from LSO, there's our logo. This is the one we get from VMware due to the IPI install, and all of these here are being created by ODF, and these are for different use cases. These down here are to create object buckets, this one is for block devices or file systems with ReadWriteOnce, and this one is for ReadWriteMany.

Now, one feature that we just added: you can use the storage class and you can write annotations to the storage class, and then use different pools inside of your Ceph cluster inside of ODF. So that's a very advanced topic, but for the folks that know that you can have different pools: with the pools you can add different storage policies. You can say, for example, only replicate twice, or only use storage in a certain location, or separate out data more or less. So you could say, I have two data centers, each with three racks. One pool could be using just that one location, and the other pool could be trying to place replicas in both data centers, so that when one data center fails, your data survives. And you can store that in storage class annotations now. That way you can get pretty fancy with these advanced topics. Right now this is, as I said, very advanced, I can't say that enough, but it's supported. And we're trying to bring a lot of these topics back into the regular UI so that regular folks can use that as well.
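To give a rough idea of what that pool-per-storage-class setup looks like on the resource level, here is a minimal sketch of a custom Ceph block pool plus a storage class that targets it. The pool name, replica count, and failure domain here are placeholders, and the exact CSI parameters vary by ODF/Rook version, so treat this as illustrative rather than copy-paste ready:

```yaml
# Hypothetical example: a CephBlockPool with only two replicas, spread
# across racks, and a StorageClass that provisions volumes from it.
# All names below are placeholders for illustration.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replica2-pool
  namespace: openshift-storage
spec:
  failureDomain: rack      # place replicas in different racks
  replicated:
    size: 2                # "only replicate twice"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd-replica2       # placeholder name
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  clusterID: openshift-storage
  pool: replica2-pool      # point at the custom pool above
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Any PVC that names `rbd-replica2` as its storage class would then land in the two-replica pool instead of the default one, which is the mechanism behind the per-location and per-policy pools described above.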
All right, got some questions here in chat. What happens when you delete a storage class that still has PVs in it? So, can I actually delete a storage class? Yeah, I was about to say, why don't you just do it, see what happens. Well, right now we do have one claim here on RBD, the db-noobaa claim, and that is using the RBD storage class. And I can delete it, but I think it's going to be recreated immediately. So it's not like the storage class itself is really what holds the information here; you see it being recreated automatically by the operator, because it says you need it. But you can delete it. There's no direct mapping from the storage class to the PVs. The storage class is only used when you create a PVC, to figure out who to talk to to dynamically create a matching PV. If that makes sense. And I think there's another question.

Yeah, so, next question, and hello José, welcome, please feel free to ask your question. A follow-up question: will it be possible in the future to have ODF spread over three clusters, two active and one arbiter? Is that a roadmap item, or do you even have it out there yet? I'm not sure what you mean with three clusters. Like, we have already now, with our DR capabilities, the option of three sites. You have nodes in site A and site B and site C, and site C can be the one where you have the arbiter, very small, and then you have site A and site B where you have your actual data. But if you have three separate OpenShift clusters, then that's going to be tricky. Yeah, and we don't do that. And there's reasons why, right? Like, I think the idea is to build a cluster robust enough to fill that use case, as opposed to building multiple clusters to fill that use case. But, PenguinWhisperer,
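The point about the storage class only mattering at claim time can be illustrated with a minimal PVC: the class is consulted once, when the claim is submitted, to pick the provisioner and parameters for the dynamically created PV. The claim name below is a placeholder; `ocs-storagecluster-ceph-rbd` is used as an example of a typical ODF RBD class name:

```yaml
# Hypothetical claim for illustration. The storage class named here is
# looked up only when this PVC is created; after the PV is bound,
# deleting the StorageClass object does not affect the existing volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim                               # placeholder name
spec:
  storageClassName: ocs-storagecluster-ceph-rbd  # read at creation time only
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

This is why deleting a storage class that still has PVs "in it" is relatively harmless: the bound PVs keep working, and only new claims referencing the missing class would fail to provision.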
I am curious what your use case is. Well, what you can do, when I think about it, if you mean three separate clusters... Yes, he really means three separate clusters, not one stretch cluster. Yeah. So if you have that, there's one possibility, but it depends a little bit on what kind of storage you need and how your replication has to look. All those nodes are connected to each other too, right? I mean, there's a network layer there somewhere. Yeah, but then we talk about synchronous and asynchronous replication. What he can look at is deploying ODF in external mode and just using a dedicated RHCS cluster. And then that RHCS cluster can either also be stretched, or it can replicate between the three sites. You could do that. A good point.

Okay, so some more follow-up: the idea is, if the full cluster goes down, that's not easy to recover from. True, I mean, essentially your whole operation ends. But with three separate clusters you mitigate some blast radius. Yeah, but this is exactly what we talk about with our DR topics, where you have asynchronous replication, for example, and you have these clusters, and you want to have a backup cluster, so that when your active cluster goes down, you can switch over to that. So that's definitely in scope for our DR topics, and we're working on that, and we'll have really good UX for that, but it's too much for today.

Fair enough, and we are coming up on time. So, last statement here, let's see: if it's deployed in external mode, the replication has to be set up manually, right? That I don't know. Right now, yes, but it's even trickier than that.
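For reference, pointing ODF at an existing external RHCS cluster goes through the StorageCluster resource rather than a separate install. A rough sketch, assuming the OCS/ODF operator is installed; field names can vary by version, and the connection details for the external Ceph cluster are normally provided via a secret generated by a Red Hat-supplied helper script, which is not shown here:

```yaml
# Hypothetical sketch of an external-mode StorageCluster.
# With external mode enabled, ODF consumes an existing RHCS/Ceph
# cluster instead of deploying its own Ceph components.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-external-storagecluster
  namespace: openshift-storage
spec:
  externalStorage:
    enable: true   # use the pre-existing RHCS cluster for all storage
```

In the multi-cluster scenario from chat, each OpenShift cluster would run a StorageCluster like this against the same (stretched or replicating) RHCS backend, which is the option discussed above.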
So the storage replication with RHCS is not that tricky, but what's tricky is getting the metadata of the OpenShift PVs and the application over as well. And that is being handled via our DR topics, so I have to ask you to just wait a little bit longer, and what you're describing is probably handled by our regional DR. Yeah, the reason he's asking is that they actually did have a cluster that got completely trashed, and that would be disastrous in production. Yes, it would be, 100%, and we are aware of that. Yeah, and you're working on it, so that's good. We're working on it. We just want to make it extra easy to set up, so that it actually works when you need it. Awesome.

So, yeah, thank you for hanging in, that was a great series of questions there. So, without further ado, I have a meeting I've got to get to. Thank you, Chris. Thank you, audience, for participating, and thank you everybody else for watching. Stay safe out there, and we'll see you in two weeks with some data science stuff. Bye bye.