So let's dig in a little bit on FreeNAS, RAM, and performance, and what it all means. Now, if you've never run FreeNAS before, you may be confused, because you'll see all of your RAM has been acquired by the ZFS ARC, which is the cache. That sometimes confuses people; you see new people in the forums going, oh my gosh, there must be a memory leak, because it keeps using up all the RAM. Well, that's exactly how it's designed to work. So let's talk first about how much RAM is needed, and why that's a little bit of a fuzzy subject. FreeNAS requires 8GB of RAM for the base configuration. If you're using plugins or jails, 12GB is a better starting point. There's a lot of advice about how RAM-hungry ZFS is and how it requires massive amounts of RAM, and an often-quoted number is 1GB of RAM per terabyte of storage. The reality is it's complicated, and this is from the people at FreeNAS: the amount of RAM needed to be stable does grow with the size of the storage. 8GB of RAM will get you through 24 terabytes. Beyond that, 16GB is a safer minimum. Once you go past 100 terabytes of storage, 32GB is recommended. However, that is just for the stability side of things. ZFS performance lives and dies by its caching. There are no good guidelines for how much cache a given storage size with a given number of simultaneous users will need. You could have a 2 terabyte array with 3 users that needs 1GB of cache, and a 500 terabyte array with 50 users that needs 8GB of cache. Neither of those scenarios is likely, but they are possible. The optimal cache size for an array tends to increase with the size of the array, but outside of that guidance, the only thing you can do is measure and observe as you go. I'm going to stop reading here, but I'll leave a link to this. It's from their blog post; it's an older one, but the basics haven't really changed that much.
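Those sizing rules boil down to a simple lookup. Here's a minimal sketch of them as code; the thresholds come straight from the FreeNAS guidance quoted above, while the function name and structure are my own illustration:

```python
def recommended_min_ram_gb(storage_tb, uses_plugins=False):
    """Rough minimum RAM for a *stable* FreeNAS box, per the sizing
    guidance above. This is the stability floor only; cache (ARC)
    performance may want considerably more."""
    if storage_tb > 100:
        base = 32        # past 100 TB, 32GB is recommended
    elif storage_tb > 24:
        base = 16        # beyond 24 TB, 16GB is a safer minimum
    else:
        base = 8         # base configuration
    if uses_plugins:
        base = max(base, 12)  # plugins/jails: 12GB is a better start
    return base
```

Remember this only certifies stability; how much ARC you actually want depends on your working set and user count, which you can only measure and observe.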
This is, I think, from 2015. Yeah, published in 2015, and the hardware requirements are pretty much the same. And of course, once you start getting into running VMs and jails, you have to really allocate for that. My hardware is not yours, but I'll show you the configuration we have, what we're running, and how the performance looks on it. I don't have enough RAM; I've ordered more, and it's been on my to-do list. But it works perfectly fine, and it's one of those "if it isn't broke and isn't having problems, don't fix it" situations. This is an older Core i5-4570 CPU. It's not a new high-performance processor; it's just a fourth-gen i5, released in 2013. So this processor is about five years old, and nothing really high-performance. That's the first thing. The reason I'm showing you my hardware is that I'm going to show you the benchmarks and what the RAM usage is. Now, I only have 16 gigs of RAM in here, and I only run a single jail. I don't really run any VMs on my FreeNAS box, partly because I don't have a lot of memory in it, and partly because I don't really have a need to; I use Xen for all of my other VMs and everything else. The one thing my FreeNAS does run is Syncthing. Now, I've talked about Syncthing before on the channel. I run it, and it synchronizes the different pools of data that we have. For example, my videos, as I record them, get synced over. It syncs server backups across sites, so I have another one running at home that also runs Syncthing, and everything works really well. And for our shared files, when we're doing documents related to the business, they're shared between our Linux machines via Syncthing. So that's running and using a little bit of memory, but it actually doesn't pull that much RAM; it's pretty small. That's kind of the advantage of jails versus running things in a VM. We also use this box for storage, recording customer backups and things like that for the retail store.
So when we're backing up computers and reloading them (we still have the retail store), this is where all that data gets dumped back and forth, because nobody backs up anything. We copy the data there, then copy it back. So that use case is there, and that's why we have so many different shares. UniFi Video is being recorded to this right now, so our NVR is writing to it. There's the Syncthing data, and there are all the backups for the virtual machines and everything else; they get synchronized to here. I even still run VirtualBox for some testing, and that gets backed up here too. So all of this, between Syncthing and just plain Samba sharing, is getting dumped over here. And then on this side, each one of these storage pools is another set of drives. Here are the VMs that run across Xen, with storage via iSCSI, once again pointing at this machine. This is why I say I probably don't have quite enough memory to consider it high performance, but let's talk about the performance I'm getting out of this machine. Now, I only have gigabit at my desk, so transferring from my computer to the machine is not a great storage test, because the speed limit isn't the hard drives and isn't the FreeNAS; it's the limit of my single drive going across a gigabit link. I can saturate it, and that's it; I can't really see the performance. So the performance testing we're going to do will be with the virtual machines. I have this right here, labeled W9 base, with a move of its disk going here, to the storage repository on FreeNAS. And to satisfy the Windows people, here's another disk going to FreeNAS; this is a Windows Server 2016, which I have running right here. We're going to do some hardcore read-write tests on it. Now, the first thing to cover: let's look at what the memory looks like now. Here is the physical memory utilization of the 16 gigs of RAM. You can see it's all used up, just because ZFS uses as much as possible for caching. So that's where most of it is going.
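To put numbers on why a gigabit link hides the storage performance: the link itself caps out well below what the array can do. A quick back-of-the-envelope sketch; the efficiency factor here is an assumed fudge for protocol overhead, not a measured value:

```python
def link_ceiling_mb_per_s(link_gbit, efficiency=0.94):
    """Rough payload ceiling of a network link in megabytes/second.
    efficiency is an assumed factor for TCP/IP and SMB overhead."""
    return link_gbit * 1000 / 8 * efficiency

# A gigabit link tops out somewhere around ~117 MB/s of payload, which a
# single modern hard drive can already saturate, so the NAS itself is
# never the bottleneck in that kind of test.
```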
And this view is pretty basic; it's been built into FreeNAS for a long time and isn't that great a way to visualize it. But good news: they've integrated Netdata in here. So here's Netdata, which gives us a much better overview. We're going to dig in here and actually look at the ZFS system and the efficiency of the ARC hits. Now, you can see that in FreeNAS when you go to Reporting, ZFS, where you can see the hit ratio, but it's just way prettier to look at it here. Now, my box that runs Xen Orchestra is connected via iSCSI with a 10 gigabit link, so that gives us pretty reasonable disk performance. The disks we're using are HGST 4 terabyte drives, the 7200 RPM ones. I'll leave a link below; I've talked about these drives before, and they've been kind of a favorite of mine. So they're reasonably fast disks. They're set up, and I'll show you right here under Storage, in a RAID-Z2 configuration. So here are those drives in RAID-Z2, and here's the iSCSI storage. So you've kind of got an overview of the system: it's a 10 gigabit link and a separate box. Let's run the benchmarks. I've always liked the phrase: there are lies, damn lies, and statistics. And then there are benchmarks. When you start running synthetic benchmarks, especially because of the caching that ZFS does, you can get some wild numbers and think your hard drives are way faster than they are. But we're going to go ahead and run this real quick; we'll hit All and see what happens. I'm going to move this over a little bit so you can still see it, and move this over a little bit, and we're going to start seeing the hits and misses here as it loads up the cache and runs the speed test. So it says about 832 megs read, pretty fast, but that's partly down to the way the benchmarks work. And as it runs, the red represents the misses and the green the hits, these greens right here.
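The hit-ratio graph in Netdata is just hits versus misses plotted as a percentage. A minimal sketch of that calculation; the function is my own illustration, not a FreeNAS utility, and the real ARC kstat counter names differ:

```python
def arc_hit_ratio(hits, misses):
    """ARC hit ratio as a percentage: the green (hits) share of all
    ARC lookups, with red (misses) making up the rest."""
    total = hits + misses
    return 100.0 * hits / total if total else 0.0
```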
That green, in other words, is all the times the cache is being effective: the data cached in memory is being pulled back out and handed over. So caching makes a huge difference in speed, and this is where more memory helps if your workload does a lot of read caching. Now, read caching does well with some of the VMs, because they frequently request the same data, especially when you're running a benchmark. But it doesn't do as well when, for example, we're just dumping files back and forth to the machine and the same people aren't frequently accessing the same files. Your cache performance matters a lot in the first case; it matters a little when you're just dumping data in and pulling data back. So unless you're frequently reading the same files, you don't get the most efficient cache usage, which is why it's mostly red, not green: the benchmark is saturating and exceeding the system's ability to cache it. So we got a read of 832 and a write of 778, and once it's done, it'll complete and show what it's actually testing. By the way, I'm using CrystalDiskMark 6. It's free, you can download it, and I'll leave a link in the description below so you can do some testing yourself. You can see it's pretty reasonable performance. Now, the 4 kilobyte rows, does it give me the statistics for what these are? Sequential reads, so sequential reading and writing, and then a bunch of small reads and writes at various queue depths and thread counts. I'm not a benchmarker by trade, so I don't know every little detail about what these mean. So what do we have here? Okay: random 4 kilobyte, queue depth 1 with 1 thread, then queue depth 32. So once we get into some of the heavier small-block writes, we slow down, as expected. And when you're doing just the basic sequential read and write, it's really, really fast.
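For reading those CrystalDiskMark rows: the sequential numbers are straight throughput, while the random-4K rows are really measuring IOPS, with the queue depth (Q) and thread count (T) controlling how many requests are in flight at once. The relationship between IOPS and throughput is just block size times operations per second; a quick sketch:

```python
def iops_to_mib_per_s(iops, block_kib=4):
    """Throughput implied by an IOPS figure at a given block size.
    CrystalDiskMark's random tests use 4 KiB blocks by default."""
    return iops * block_kib / 1024  # KiB/s -> MiB/s
```

This is why the random-4K numbers look so much smaller than sequential: even tens of thousands of 4 KiB operations per second only add up to around a hundred MiB/s.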
So let's change this a little bit and make the test file larger, then run it again and see how the cache hits go. We saw it was mostly red when we ran it before. Once again, we're going to get different performance statistics from a slight change. So your performance is really gauged a lot by your use cases. If your use case is heavy reads and writes on one gig files versus 50 meg files, you get a different performance profile. These are just some of the things to consider when you're dealing with ZFS performance. Now let's go ahead and tax the machine a whole lot more. I have the Phoronix Test Suite loaded on another machine that's running Debian, and we're just going to go ahead and benchmark this: run all test options, basically hammer this thing, and don't save the results. So now we're running here and here at the same time. Let's look at another statistic. Go back up: here's my CPU load. Here's the test we ran before, here's the next test we ran, and here's the system now. So the CPU is finally getting a little more loaded up, which is kind of the plan; we want to really tax the hell out of this system. Which, by the way, we still haven't really done from a processor standpoint. And this is why the processor is important: all these drives are encrypted, so it has to decrypt everything while this is running. We'll run this simultaneously again while the other benchmark's running. Like I said, the goal is to really load up the machine, but you're not seeing a massive system load here; it's not absolutely killing the system. So let's go back over to our prettier graphs. Running both simultaneously definitely puts it under more load, but we're not maxing out my basic five-year-old i5 doing this.
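The reason a larger test file changes the profile so much can be modeled crudely: once the working set outgrows the ARC, the best steady-state hit ratio you can hope for falls off. This is a deliberately naive model of my own, just to illustrate the point; real ARC behavior depends on access pattern and eviction policy:

```python
def best_case_hit_ratio(working_set_gb, arc_gb):
    """Naive upper bound on ARC hit ratio for uniformly re-read data:
    only the portion of the working set that fits in the cache can
    ever hit. Real workloads with skewed access do better."""
    if working_set_gb <= 0:
        return 100.0
    return 100.0 * min(arc_gb, working_set_gb) / working_set_gb
```

So a benchmark file that fits entirely in a 16GB ARC can run almost all green, while one several times the ARC size is doomed to run mostly red no matter how fast the disks are.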
Now, the RAM usage is going up and down because of the cache hits and everything else going on. And like I said, this is all just the ZFS side of things, so we're beating it up with ZFS. We've actually seen this go from 700 down to 550, because now we're running simultaneously. And something interesting, if you want to see it: if we go over here to Reporting, Network, because we're connected over this 10 gigabit link, we can see it peaks out at like four and five gig transfers going through right here. Really intense while running these benchmarks; like I said, we're just loading up the system completely. Now we've hit a higher load average on it, but it's still working. So let's go ahead and open up a Samba share. While this machine's running, we have this Samba share right here; let's dump some files to it. So let's go to my video folder, grab a couple of random files, copy, paste. Now we've loaded the machine up further, and that copy ran absolutely fast. Actually, too fast; let's find something bigger to copy. What do I have that's big? Here, 1.8 gigs, a little bit bigger. So we'll copy this bigger file, and it's still copying really fast; we're getting about 73 megs a second while we're running benchmarks over here, running this here, and this is still running. The machine's under some heavy load. Well, let's look at our ZFS caching and see how that's doing. Still getting some good hits on the cache, so it's still doing its thing. Let's take a look at memory performance. We can see the system going through memory utilization and CPU utilization. Like I said, this is extreme load: synthetic benchmarks running, maxing out the CPU, dumping files across the Samba share, an iSCSI connection at 10 gig, and a five-year-old i5 with 16 gigs of RAM. I'm going to say these performance tests are not too bad. Like I said, this is now testing eight gig files across here.
Oh, and because this is running, how usable is this machine? Well, let's open up Explorer, and, I don't know, let's open up Internet Explorer; there's something that'll just chew on the hard drive while this benchmark is running in the background. Now, this is where the cache performance kind of makes sense, because it's caching this VM's data. Yeah, don't show that message again. We're caching the VM data, while the benchmarks are trying to ruin the cache by filling it up. And you can still see, though, that this is a completely usable Windows server, running fine over the iSCSI. So we go over here to Xen Orchestra and look at our storage pools, the FreeNAS pool stats, and we can see the read throughput. Once you get behind the benchmarks, this is what the actual read throughput looks like as seen here. And we've got write speeds of about 500 megs per second here, 550, 550. Those are roughly SSD speeds, and the reads are even faster, which again goes back to the caching; our actual read speed is 781. And these are four spinning 7200 RPM disks. I have not configured this with some of the other options. There are secondary caching layers you can configure with FreeNAS: you can put in an SSD as what's called an L2ARC cache, and there's also the SLOG, which helps with writes and things like that. I'm going to do some more advanced videos later on those, but they have a really good article you can look up on FreeNAS. So this gives you an idea of what the performance looks like running on a five-year-old i5 with 16 gigs of RAM: near-SSD performance over a 10 gigabit card, and a completely usable machine while two virtual machines simultaneously run synthetic benchmarks, with the server still responsive enough to surf the web while those benchmarks, which are still running down here by the way, keep going.
So let's open up Chrome. Establishing secure connection, and Chrome is opening up. I mean, granted, it's going to be a little bit slow, but not unusable at all; we're actually doing stuff. Now, while we're doing stuff, what does this say? Let's expand this out. Oh, "the following tests failed to perform properly." Yeah, it apparently had an error in here somewhere. So did it give me any stats? I should have turned up the logging so I had better stats. Apparently it had some type of error running this; I'm not really sure why. But you can see that even loading all of this up, this old i5 handles it, and we're going to see the processor drop down to nothing again. And the used RAM, because of all the stuff going on, went down a little, and it'll kind of creep its way back up to compensate for the caches. So hopefully this gives you an idea of the performance I'm getting out of the machine, and also why I feel it's good enough. It's been a while since I put memory in it; I know it could use some more, and I might squeeze a little more performance out of it, but I'm going to tell you, this is not bad performance, being able to do all these things at once. Everything running, seeing these read and write throughputs that are near SSD speed while other things are running. It's not like I'm dedicating the box to this; I'm running two simultaneous benchmarks and it's still doing all the other stuff it does, like running Syncthing, running the Samba shares, letting me copy files over, and the UniFi NVR has been writing to it this whole time as well, doing the motion recording and everything. So with all these things, I didn't shut anything down for this demo. It's just to show you that FreeNAS does not take an absolute killer processor with, you know, 64 or 128 gigs of RAM.
Now, this changes a little when you're saying, hey, I want to run an enterprise storage solution, or I'm going to run a database on this to serve a million users, or some other high-volume, way larger scale than what you may run in your office production system. Then you have to think about it a little bit differently, and that's when you start getting into other performance considerations. But when you're running something that big, you're generally going to be buying something like an enterprise TrueNAS server with their support, with 128 gigs of RAM and an all-flash array and things like that. So there are performance considerations, but for home users, and even the small offices and small businesses we set these up for, four spinning drives were able to achieve near-SSD speeds over a 10 gigabit link with a VM over iSCSI, while running on a five-year-old processor. So hopefully this gives you some idea of what kind of performance you can expect. Obviously, the better the hardware, the faster the performance, and it is going to get faster as you buy some awesome stuff. But if you don't have awesome things, you want to get started, and you're worried about it, one side note: please make sure you have a chip that supports AES-NI. I've talked about this before on the performance side. If you're using encrypted drives, they have very, very low overhead as long as the chip has the AES coprocessor in it. This i5 does support it, so that is something to consider. All right, hopefully this was as enlightening as the performance you can get out of this box. And I didn't do any real tweaking and tuning; I'm sure there is some fine-tuning you could do to make this even more incredible performance-wise, but this is just kind of the stock setup. Thanks for watching. If you like this video, go ahead and click the thumbs up.
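If you want to check for AES-NI before buying or deploying, the CPU advertises it as a feature flag: "aes" in the flags line of /proc/cpuinfo on Linux, or AESNI in the Features lines of dmesg on FreeBSD. A small sketch of checking a flags string; this parsing is my own illustration, not a FreeNAS utility:

```python
def has_aes_ni(cpu_flags_line):
    """Return True if an 'aes' feature flag appears in a space-separated
    CPU flags string, e.g. the 'flags' line from /proc/cpuinfo."""
    return "aes" in cpu_flags_line.lower().split()

# Hypothetical usage on a Linux box (path assumed, not guaranteed):
# with open("/proc/cpuinfo") as f:
#     print(any(has_aes_ni(line) for line in f if line.startswith("flags")))
```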
Leave us some feedback below to let us know the details of what you liked and didn't like, because we love hearing your feedback. Or if you just want to say thanks, leave a comment. If you want to be notified of new videos as they come out, go ahead and hit subscribe and the bell icon; that lets YouTube know you're interested in notifications. Hopefully they send them, as we've learned with YouTube. Anyways, if you want to contact us for consulting services, go ahead and hit lawrencesystems.com, and you can reach out to us for all the projects we can help you with. We work with a lot of small businesses, IT companies, even some large companies, and you can farm different work out to us or just hire us as a consultant to help design your network. Also, if you want to help the channel in other ways, we have a Patreon and we have affiliate links; you'll find them in the description, and you'll also find recommendations and other affiliate links you can sign up for on lawrencesystems.com. Once again, thanks for watching, and I'll see you in the next video.