Tom here from Lawrence Systems, and RAID is not a backup, but it does provide resiliency, which is why we bought a Stornado from 45 Drives. I've not reviewed one of these before or actually had one, so this is a fun opportunity. Now, this is not sponsored by 45 Drives. We are a partner of 45 Drives, but we did buy this for a project, specifically a project for my friends over at CNWR. This is going to be a project to replace an aging server that they have. It's reaching end of life, and why not go with something a little bit flashy and fast? So we got the Stornado involved.

Now, this is a 2U, 32-bay, 2.5-inch front-loaded SSD storage server. That's a bit of a departure from 45 Drives' other servers, which are top loaded and 4U. I really like those, they're just good, high-density storage. This one gives you 32 front-loaded bays, but still two boot drives in the rear, so technically it holds 34 drives in total. The ejection rods are labeled for easy drive location, and I really like the way they designed this. It's some good engineering to make it fully tool-less to access not only the drives in the front, but the two drives in the rear. It's really easy to slide the drives in and out, and the ejection rods, as they refer to them, are labeled, so I know exactly which drive I'm pressing the button for. It's also silk-screened on the lid and on the faceplate that comes down, so it's really easy to see. Now, this is completely tool-less, with an exception. I say tool-less, but only for the drives. Someone may split hairs here, and fair enough: if you want to get inside it, you do have to take some screws out to replace a fan, for example, but you don't have to take any screws out to replace the dual hot-swappable power supplies.

Now, we bought this system with 120 gigs of RAM. People often think of ZFS as RAM hungry, but really it's just RAM efficient: it doesn't leave RAM sitting idle doing nothing. The more memory you put in it, the more it will use for the adaptive replacement cache, the ARC. I have a whole video where I dive deeper into that as a topic. That's one of the real efficiencies of ZFS: if you have a lot of memory and lots of data that's being accessed frequently, it can pull that data into the ARC and serve it up from memory instead of from the drives. So it's a great read cache and it gives you a great speed boost, but you're not required to have a ton of memory just to have a ZFS system; it just adds to the speed of that system. There's a quick sketch of checking how the ARC is using memory below.

But we put in only six drives, and we're going to expand later. I know there's always some debate about ZFS expandability, and I have a video about how you expand ZFS. The good news is, with six drives now and the VDev six drives wide, we can add six more drives later, because you expand by adding VDevs of the same width. There are a lot more details in that other video, but that's the simplest answer I'll give for now. That means we can keep expanding six drives at a time until we've done this five times, run out of slots, and buy another Stornado. That still leaves us a couple of bays if we want to put some hot spares in there. It's a simple way to think about it: buy these drives now, then stop by the data center later, when maybe drive prices have come down a little and your storage needs have gone up a bit, and just pop some more drives in.
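To make that slot math concrete, here's a quick back-of-the-napkin sketch in Python. The bay and VDev counts are the ones mentioned above; it's just arithmetic, not a capacity calculation for any particular RAID level.

```python
# Back-of-the-napkin slot math for symmetric VDev expansion in the Stornado.
# Numbers match the plan described above; no RAID level or usable capacity assumed.

FRONT_BAYS = 32        # 2.5" hot-swap bays up front (the two rear boot drives excluded)
VDEV_WIDTH = 6         # the pool starts with one 6-drive VDev

# Each expansion adds another VDev of the same width.
max_vdevs = FRONT_BAYS // VDEV_WIDTH          # 5 VDevs of 6 drives
drives_used = max_vdevs * VDEV_WIDTH          # 30 drives
spare_bays = FRONT_BAYS - drives_used         # 2 bays left over for hot spares

print(f"{max_vdevs} VDevs x {VDEV_WIDTH} drives = {drives_used} drives, "
      f"{spare_bays} bays free for hot spares")
```

Run it and you get the same answer as the prose: five rounds of six drives fills 30 of the 32 bays, leaving two for hot spares.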
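And on the ARC point from earlier: if you want to see how much of your memory ZFS is actually putting to work, here's a minimal sketch that reads the OpenZFS kstats. It assumes a Linux box where OpenZFS exposes /proc/spl/kstat/zfs/arcstats; on TrueNAS CORE (FreeBSD) the same counters live under `sysctl kstat.zfs.misc.arcstats` instead, so treat this as illustrative rather than the exact tooling I use.

```python
# Minimal sketch: report ARC size and hit ratio from OpenZFS kstats.
# Assumes Linux with OpenZFS; FreeBSD/TrueNAS CORE exposes the same counters
# via `sysctl kstat.zfs.misc.arcstats` rather than this proc file.

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

def read_arcstats(path=ARCSTATS):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:        # skip the two kstat header lines
            name, _kind, value = line.split()  # each data line is: name type value
            stats[name] = int(value)
    return stats

if __name__ == "__main__":
    s = read_arcstats()
    gib = 1024 ** 3
    hits, misses = s["hits"], s["misses"]
    ratio = hits / (hits + misses) if (hits + misses) else 0.0
    print(f"ARC size:  {s['size'] / gib:.1f} GiB (max {s['c_max'] / gib:.1f} GiB)")
    print(f"Hit ratio: {ratio:.1%}")
```

A busy pool with plenty of RAM will show the ARC growing toward its max and a high hit ratio, which is exactly the "serve it from memory instead of the drives" behavior described above.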
It's pretty easy to do, and it's a hot-swap drive bay, so no downtime is needed to do this. But I wanted to talk about the unit, talk about the use case, and see if there are any questions people have or anything they want to know more about. Now, it's just going to run TrueNAS, and I'll throw some numbers up here so you can see some of the IOPS scores I'm getting, because this is going to be a virtualization target. I'm testing it with XCP-ng, but in production, where this is going, it's going to be running as a storage target for ESXi. It works fine either way; TrueNAS is very comfortable serving both worlds as a storage target.

Now, why TrueNAS and not something else? It comes down to the manageability of this particular project. It's what the old server was. It's what the people who manage this are familiar with. So we're staying with TrueNAS, specifically TrueNAS CORE, because it's only a NAS and there's no other functionality desired on it right now, or planned to be on it. It's a very dedicated appliance, and I just think TrueNAS CORE is solid for the enterprise-level performance you can expect; it's predictable. Some people may think that's boring, but boring is good when you just want high-performance drives, and that's the gap this is going to fill.

Nonetheless, I love hearing from you. Leave your thoughts and comments down below, and if there's more testing you'd like or questions you have, maybe we'll do a follow-up video on using it with ESXi, and I'll bring Jason from CNWR on, because he is a far better expert at ESXi than I am. I generally lean towards XCP-ng, and I just don't know enough to declare myself an expert on the other side. But I always love hearing from you. Leave your thoughts and comments down below, or head over to my forums for a more in-depth discussion. Thanks.
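For anyone who wants to reproduce the kind of IOPS numbers mentioned above, here's a rough sketch of a 4K random-read test driven from Python via fio. The target path, size, runtime, queue depth, and I/O engine are example values I've picked for illustration, not the exact job used for the numbers in the video, and it assumes fio is installed on the box doing the testing.

```python
# Rough sketch of a 4K random-read IOPS test using fio (must be installed).
# Target file, size, runtime, queue depth, and engine are example values only.
import json
import subprocess

def run_randread(target="/mnt/tank/fio-test.bin"):   # hypothetical dataset path
    cmd = [
        "fio",
        "--name=randread",
        f"--filename={target}",
        "--rw=randread",          # random reads, the pattern VM workloads tend to generate
        "--bs=4k",
        "--size=4G",
        "--ioengine=libaio",      # Linux async engine; use posixaio on FreeBSD
        "--iodepth=32",
        "--numjobs=4",
        "--runtime=60",
        "--time_based",
        "--direct=1",             # bypass the client-side page cache
        "--group_reporting",
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(result.stdout)["jobs"][0]
    print(f"Read IOPS: {job['read']['iops']:.0f}")

if __name__ == "__main__":
    run_randread()
```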