Hi, my name's Paul. I work at Red Hat in Storage Engineering, and in this short demo I'm going to show you the disk replacement process in Red Hat Ceph Storage 5.

What we're looking at right now is the Ceph dashboard. In the top right we can see our OSD count, so you can see we've got a small cluster of just six OSDs. Let's have a deeper look into the cluster and look at the physical disks. Within Red Hat Ceph Storage 5, devices are managed by an OSD service specification, which holds a filter, and that filter governs which devices will be used and consumed by Ceph as OSDs. Right now I have an OSD specification that will pick up devices as they arrive, as long as they are of a certain size that matches my criteria.

So let's have a look at the OSDs, and we're going to simulate a failure on one of them. We're going to pick on this one: osd.3 on the host rhcs5-3. If we click back, we can see that it sits on device sdc, and that's the one we're going to mess with. To simulate a drive failure, what I'm doing is killing the daemon. Typically, if a device has problems, the OSD daemon will encounter those media errors, fail and crash, and then over a period of time it will be marked out from the cluster, which signifies that it's no longer available to Ceph. So now we can see that I've created that kind of condition, where we have a down-and-out OSD.

Now we can go through the replacement process, so let's delete it. In this case I'm going to keep the OSD ID and insert a new drive. So let's go through the delete process. The delete process has started, as we can see on the right-hand side. Once that's complete and we're in a destroyed state, which it is now, I'm going to attach a new drive to rhcs5-3.

The orchestration layer within Red Hat Ceph Storage 5 periodically reaches out to all hosts to look for new devices, and if any devices match an existing service specification, it picks them up and automagically deploys them as OSDs. We can see that's what's happening right now, without us making any changes at all. The device has changed its status from destroyed to down, and we can check to see what's going on in our notifications. Nothing yet. Now it's gone into an up-and-in state, and we can see that rebalancing has kicked in automatically. And that's how you replace a failed drive in Red Hat Ceph Storage 5.

If we look at the physical disks again, to make sure I'm not lying, we can see that osd.3 is now on sdd, where before it was on sdc. So we really have replaced it; it's gone to a new drive within the cluster. I hope you found this useful and that you get a chance to try out Red Hat Ceph Storage 5. Thanks.
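
If you want to see what an OSD service specification like the one in this demo looks like, here is a minimal sketch. The service_id, the host pattern, and the 10 GB size threshold are illustrative assumptions, not the exact spec used in the demo:

    # Write a drive-group style OSD spec that consumes any available device
    # of at least 10 GB on every host (illustrative filter values).
    cat > osd_spec.yml <<'EOF'
    service_type: osd
    service_id: size_filtered_osds
    placement:
      host_pattern: '*'
    data_devices:
      size: '10G:'
    EOF

    # Hand the spec to the orchestrator; matching devices become OSDs.
    ceph orch apply -i osd_spec.yml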
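
The dashboard actions in the demo map onto orchestrator commands, so here is a minimal sketch of the same failure-and-replace flow from the command line, assuming a cephadm-managed cluster and the osd.3 from the demo; check the exact flags against your release:

    # Simulate the failure by stopping the OSD daemon (a real media failure
    # would crash it on its own), then mark it out of the cluster.
    ceph orch daemon stop osd.3
    ceph osd out 3

    # Remove the OSD but keep its ID: --replace leaves the OSD in the
    # "destroyed" state instead of deleting it from the CRUSH map.
    ceph orch osd rm 3 --replace

    # Watch the removal progress.
    ceph orch osd rm status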
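
Once the replacement disk is attached, the orchestrator's periodic device scan picks it up against the existing spec. These commands show one way to watch that happen and to confirm, as in the demo, that osd.3 has moved from sdc to sdd; the grep pattern is just illustrative:

    # Ask cephadm to rescan the hosts and list the devices it can see.
    ceph orch device ls --refresh

    # Watch osd.3 come back up and in, and the cluster rebalance.
    ceph osd tree
    ceph -s

    # Confirm which physical device now backs osd.3 (sdd instead of sdc).
    ceph device ls-by-daemon osd.3
    ceph osd metadata 3 | grep -E 'devices|dev_node'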