So this is a follow-up video to a question someone asked, and I think it's a really good question: have you built FreeNAS systems using consumer hardware before? And I'm like, yes. And have you followed up on those? It'd be interesting to see what happens. Well, I said, why not? We built ours a few years ago, so it's pretty relevant, pretty valid, and we use it a lot, so we know exactly how much it's used.

This machine was built with a Gigabyte motherboard, and I won't be sliding it out of the server rack for this. The only thing "server" about it, I guess, is that it sits in the server rack; it's an off-the-shelf motherboard with an AMD A6 processor. For the beginning of this array, we used HGST Ultrastar 7K3000 drives. These are the two-terabyte, 64-meg-cache models, and they've worked really well: no issues, they've held up. They've dropped some in price, to like $64 now, and I know I paid under $100; at the time we bought these a couple of years ago, that was about the going rate for them. Anyways, the drives have held up fine. No failures. Now, we did have one arrive bad when they shipped them to us, but once we installed them in the system: no failures, no issues, it's been working.

The system has 12 gigs of RAM in it. Over time it's run different things, but right now the only thing it runs is our CloneDeploy server. We used to have some other things in jails that we're just not using anymore; we'll move those to their own servers.

So this is the array we were talking about; this is the old one. We added a second mirrored set just to duplicate, well, pretty much my videos. That's what the mirror does: all the video I create, I wanted it not saved only on my computer, and there's a lot of it, just about a terabyte of video on there. So as I create videos, it automatically backs them up. That mirror is a lot newer. Right now, and because we constantly are purging it, there's about 1.8 terabytes on our creatively named "four drives raid" pool, because, well, it's a RAID-Z2 with four drives. Four drives raid.
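For anyone who wants to picture that layout, here's a minimal command-line sketch of a four-drive RAID-Z2 pool plus a separate mirrored pair. The device names (ada1 through ada6) and the mirror's pool name are just examples I made up; on FreeNAS you'd build pools through the web GUI anyway, which references disks by gptid rather than raw device names:

    # Four-drive RAID-Z2: survives any two drive failures; usable
    # space is roughly two drives' worth (~4 TB from four 2 TB disks).
    zpool create fourdrivesraid raidz2 ada1 ada2 ada3 ada4

    # Separate two-drive mirror, like the newer set holding the videos.
    zpool create videomirror mirror ada5 ada6

    # Health check; "errors: No known data errors" is what you want.
    zpool status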
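And the automatic video backup doesn't need to be anything fancier than an rsync job on a schedule. A rough sketch, with the source folder, user, hostname, and dataset path all hypothetical:

    # Mirror the local video folder to the NAS over SSH; -a preserves
    # times and permissions, --partial resumes interrupted transfers.
    rsync -a --partial /home/me/videos/ backup@freenas:/mnt/videomirror/videos/

    # A crontab entry to run the same thing nightly at 2am:
    # 0 2 * * * rsync -a --partial /home/me/videos/ backup@freenas:/mnt/videomirror/videos/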
Anyways, the follow-up question is: how have these hard drives held up? How has the system held up? One, the system has never crashed. Never had a lockup; I don't recall a single problem. We do have a UPS, but through a series of happenstance and whatnot, we have had power failures, and once, when we were trying to do something, I unplugged it, so it just got powered off randomly. I do not have the server set up properly for the UPS; it's one of those things on my to-do list that I haven't done. The UPS is a very beefy one that lasts, I think, about 20 or 30 minutes before there's a problem, which is also part of the reason: unless there's a really long power failure, the power generally comes back on before then. So I haven't set it up to power down, but it also hasn't been an issue. None of the times the power went off turned into some catastrophic failure or a recovery job for any of the drives, because, well, ZFS is rock solid. I really have a lot of faith in it.

Now, how do we use this? Customers bring in computers that need hard drives replaced, and nobody backs up anything; that's always our working assumption. So this system has tons of data being dumped onto it, backups of randomness, whatever people have, and then we replace their hard drives and copy it all back.

When Windows crashes or whatever, it's just common practice: before we reload it, we dump the data. We boot up off Ubuntu and copy everything in the user folder over (there's a sketch of that below), unless they have a special request for other files; nobody knows where they keep their files, so that's always a guessing game with the retail market. What this does is really tax the system with lots of little files.

So you see we're on our drive here, a Hitachi Ultrastar 7K3000. We'll clear that, filter down to what we want to see, and look for errors. And as you can see: no errors. You're talking about just gobs of data. We have filled the drive up to capacity, and we have a script we run to purge stuff out after it sits for so long; after we copy the data back to the client, we only keep it for so many days after they pick up the computer, and then we purge it out of the system (that script can be as simple as the find sketch below). So there are tons of reads and writes going on, terabytes moving back and forth at any given time, due to the volume. We're a fairly busy retail store: we replace at least a few hundred hard drives a year, if not more, plus all the reloads we have to do when Windows gets completely broken and we back up all their data over here. So lots of little files, terabytes of data; we've had to back up movie collections for people, do data recoveries, all that stuff. This drive really does get used, and it's running our CloneDeploy system, so that gets run on here all the time too.

So you can see no errors on the drive, and let's look at, I think it's the hours we want to see: 20,039 hours. Let's put that in today's terms: 20,039 hours divided by 24 is about 834 days, and 834 days divided by 365 is about 2.28 years of run time on this drive. Which sounds about right; we built it a couple of years ago and have not had any issues.

So, I mean, it's kind of a short video I'm doing here, just a follow-up, but all the drives: no issues, no problems. The only thing I did notice is the temperature on them. They run a little hotter; the Western Digital Reds that I covered in another video never get above about 35 or 36 under full load. These get a little bit warmer: you can see the max it's hit, I believe, is 46, and they generally seem to run a little hotter. We have extra fans in our FreeNAS box, but it really hasn't been a major concern, because, like I said, that's two and a half years of run time. And we can do a cycle count here: they've only been power-cycled 36 times. Yeah, we don't really turn this thing off very often, other than when we added some things, and that's really about it. There have been power failures, which happens over a couple of years of storms and things like that; we had one in the last two days where there was just no electricity after a big storm.
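If you'd rather pull these same numbers from a shell than from the FreeNAS GUI, smartctl from smartmontools will do it. A minimal sketch, assuming the drive shows up as /dev/ada0 (the device name is just an example):

    # Print the drive's full SMART attribute table.
    smartctl -A /dev/ada0

    # Just the values discussed here: errors, hours, cycles, temperature.
    smartctl -A /dev/ada0 | egrep "Reallocated_Sector|Power_On_Hours|Power_Cycle_Count|Temperature_Celsius"

    # Convert power-on hours to years: prints 2.28.
    echo "scale=2; 20039 / 24 / 365" | bc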
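The purge script doesn't need to be fancy either. A minimal sketch, where the dataset path and the 30-day retention window are both made-up examples (our real script uses its own path and cutoff):

    #!/bin/sh
    # Remove customer backups that haven't been touched in 30 days,
    # then clean out any directories left empty.
    BACKUPS=/mnt/fourdrivesraid/customer-backups
    find "$BACKUPS" -type f -mtime +30 -delete
    find "$BACKUPS" -mindepth 1 -type d -empty -delete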
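And the user-folder dump we do from the Ubuntu live session is just a copy to a share mounted from the NAS. A rough sketch, with every path hypothetical:

    # Windows disk as mounted by the Ubuntu live session, copied to
    # a share from the FreeNAS box; -r recurses, -t keeps timestamps.
    rsync -rt --progress /media/ubuntu/WINDOWS/Users/ /mnt/nas/ticket-1234/Users/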
So, like I said, this is a quick follow-up, not an in-depth video, but it shows that this system, and we've got other ones out there, but this one's readily available in my server rack, hasn't given us any problems over time. I mean, we used to run ownCloud on this; we've run a lot of things. We're still running, I haven't switched over, I mean, this is still the FreeNAS 9.1 series. After FreeNAS 10 and kind of the debacle there, I was a little hesitant, but I'm gonna do some videos on the new FreeNAS, because it does look pretty slick. And I like the way they've integrated bhyve into it, and I'll see how that really plays out and how it works.

But in terms of building it on consumer hardware, and a look at it two and a half, or really about 2.3, years later: I'm not having any issues with it. It's working fine. I know this is just one machine; we do have more of them out there that we haven't had problems with, so I can't base my statistics on just one sample, or even a small sample set. But I just wanted to show that, yes, this is a follow-up on a machine running on consumer hardware that has never crashed, never had an error, and never been taken back apart. It still has the same 12 gigs of RAM from the day we put it in, kind of what we had around, the same motherboard, and the AMD A6.

So thanks for watching. If you have other questions about this, or want another follow-up video on something similar or related to this, like and subscribe for more content here, and I'll do some more videos. Thanks.