They say there are lies, damned lies, and then there are benchmarks. And it's not that benchmarks are inherently inaccurate; it's often that they test for the wrong things. You really have to tune the benchmark to match your workload. So what I want to talk about today is storage tuning. Now, there are other tools out there for Windows, but I spend more of my time in the Linux and virtualization world setting things up, and we want to make sure our storage is optimized for that type of workload, or even the workload I'm using right now, which is a video editing workload. How do you make sure your system is tuned for it, and how can you validate that tuning?

Now, there are extensive tools like Veronica's benchmarks that give you really cool graphs, but I wanted something simple. I've talked about FIO before. FIO is a really simple utility on Linux that is also available on TrueNAS, and I have a little bash script I put together to make the testing easier; you'll find the links down below. I wanted something simple and repeatable that I can also share with the audience, so I can say, "Hey, put these parameters in, let me know what you get," and then when we're discussing things in a forum we can come to a common agreement on what we're testing: not just a general speed mark, but the parameters that went into generating that speed. So let's get started.

Now, we're going to start by talking about the system I'll be doing the testing on, because that question will probably come up in the comments. This is my TrueNAS Mini R, which has an Intel Atom C3758 CPU at 2.2 GHz. No, this is not a particularly high-performance CPU, but that makes the test more interesting, because we're going to use Netdata, an app we have here in TrueNAS SCALE, to show some of the performance in a more visual way while we show how the script runs.
One last thing I want to comment on: when we look at the storage pools here, we have the flashy and rusty pools, and each one has different shares on it. One of those shares is my video archive, which is on the rusty pool; that's where I store all my videos when I'm done with them. Then we have flashy, the SSD pool, which is where all the active videos I'm working on go. I'm using these folders as demos because I've mounted them from my system, and we're going to test the speed of each of them. This is a good way to test the script and compare the difference between the spinning rust and the SSD drives. And yes, I know it's not actually spinning rust anymore; that's a throwback to the way they used to make them.

Taking a quick look at the storage dashboard, we see that the SSDs are a RAIDZ1, four wide, and the data VDEVs on the HDDs are a RAIDZ2, eight wide. So the question might be: what is the performance difference between these two when I'm attached to them? Well, that's what we'll test real quick here with this FIO tool.

Now, the prerequisites for this script are pretty simple: you need to have fio and bc installed, which on a Debian- or Ubuntu-based distro is just a package install of fio and bc. The test directory is a parameter we pass on the command line, along with the block size. I'm going to set the block size to one meg, the file size to one gig, and the number of files to five. I have some little explainers here, but you can certainly spend a lot more time with some Google searching to dive deep into all the different FIO parameters. This is a pretty good base for something similar to my video editing load.
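The parameter setup described above can be sketched in bash like this. To be clear, this is a hypothetical reconstruction, not the actual script from the video (grab that from the links below); the variable names are my own, but the flags are standard fio options:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the tunables -- the real script is linked in the
# description; the variable names here are my own invention.
# Prerequisites on Debian/Ubuntu: sudo apt install fio bc

TEST_DIR="${1:-/tmp/fio-test}"  # test directory, passed on the command line
BLOCK_SIZE="1M"   # 1M suits large video files; try 64K for VM-style workloads
FILE_SIZE="1G"    # size of each test file
NUM_FILES=5       # number of files / parallel jobs

# Build (but don't run) the fio command line for a given workload,
# so you can see exactly which parameters go into each test.
build_fio_cmd() {
  local workload="$1"   # e.g. randwrite, randread, write, read, rw
  echo fio --name="$workload" --directory="$TEST_DIR" \
       --rw="$workload" --bs="$BLOCK_SIZE" --size="$FILE_SIZE" \
       --numjobs="$NUM_FILES" --group_reporting --direct=1
}

build_fio_cmd randwrite   # print the random-write invocation
```

Printing the command before running it is a handy habit here: it makes the "share your parameters, not just your speed number" idea from the video concrete.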
But if you wanted to change this, we could set it to 64K, for example. That would be more indicative of smaller writes, like 64K files, or a workload more like virtualization, where smaller block writes are being committed from the virtualization system. It comes down to what you want to optimize for. We're going to switch this back to one meg, and the reason I'm leaving the file size at one gig is because, generally speaking, the video files I produce are much larger.

Now, down here at the bottom are all the tests it's going to run: a random write, a random read, a sequential write, a sequential read, and then a simultaneous read/write test. You can simply comment out any of these you don't want to run. So if you only want to do sequential, or only random, just comment out the ones you don't want. I'm going to go ahead and have it run all the tests.

Here are the different mounts I have on my system: LTS video is mounted to the flashy pool, and LTS video archive is mounted to the rusty pool; that's why there's so much more storage available on it. We'll run the first test on LTS video, so we're just going to kick off the tool. I have a directory under that mount called SSD test, and all the script does is write some files in there that it cleans up later; the last part of the script actually deletes the files. Let's go ahead and kick this off and see what the performance looks like.

Going through the results, you can see that it gives us each test that was run and the parameters it ran with: our random write, our random read, our sequential writes and reads, and the simultaneous sequential reads and writes down at the bottom. We can see all the different results, but let's visualize them a little better with Netdata.
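The "comment out the tests you don't want" idea can be sketched as a simple bash loop. Again, this is my own hypothetical reconstruction, not the script from the video; the workload names are fio's standard `--rw` values, and the fio call itself is commented out here so nothing heavy runs by accident:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the test list -- comment out any entry to skip it.
# These are fio's standard --rw workload names.
TESTS=(
  randwrite   # random write
  randread    # random read
  write       # sequential write
  read        # sequential read
  rw          # simultaneous sequential read/write
)

TEST_DIR="${1:-/tmp/fio-test}"
mkdir -p "$TEST_DIR"

for workload in "${TESTS[@]}"; do
  echo "Running fio workload: $workload"
  # fio --name="$workload" --directory="$TEST_DIR" --rw="$workload" \
  #     --bs=1M --size=1G --numjobs=5 --group_reporting --direct=1
done

# The real script's last step deletes the test files it wrote, e.g.:
# rm -f "$TEST_DIR"/randwrite.* "$TEST_DIR"/randread.*
```

Running the workloads in a fixed order like this is what makes the results repeatable run to run, which is the whole point of the exercise.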
Now, looking here at TrueNAS, we can see the inbound traffic. The first block was the random writes, which were CPU intensive, but we were able to saturate the inbound network at 9.3 gigabits here. Then we can see the reads, also at 8.2, so reads were doing quite well off these. Then we have the sequential writes, with a slightly different CPU profile, but they were also able to saturate the bandwidth, followed by the sequential reads, and then the simultaneous reads and writes, which are the peaks and valleys you see. This was all done on the SSD test, so we can see how much CPU load each of these tests put on the system and say, OK, it did quite well.

Now we're running the same test again, but this time we're pointing at the video archive, which is on the rusty pool built from HDDs. We'll run the same test and see what the results are. Something that may surprise people is that eight HDDs actually perform quite close to the level I got out of the SSDs here. Let's go back to Netdata and take a closer look. We can see slightly different CPU performance, and that comes down to there being more drives to write to, so the activity looks a little different as it wrote to them. Nonetheless, we still saw a total network inbound of 8.6 gigabits, and outbound of 8.2 gigabits, so it was certainly able to push most of the bandwidth needed to make this a pretty performant system.

My goal with this video is just to leave you with this simple script with some tunable parameters, so you can have repeatable tests when you're doing tuning.
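As a quick sanity check on those Netdata numbers, it helps to convert a network line rate in gigabits per second into megabytes per second, since fio reports throughput in bytes. A back-of-the-envelope sketch (using awk here, though the bc dependency the script already requires works just as well):

```shell
# Rough conversion: gigabits per second -> megabytes per second (divide by 8).
# 9.3 Gbit/s is close to the practical ceiling of a 10 GbE link, which is
# why the video describes the network as "saturated".
gbits=9.3
mbytes_per_sec=$(awk -v g="$gbits" 'BEGIN { printf "%d", g * 1000 / 8 }')
echo "${gbits} Gbit/s is roughly ${mbytes_per_sec} MB/s"
```

If fio reports well under that figure, the pool is the bottleneck; if it sits right at it, you're network-bound and the disks may have headroom to spare.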
These are the same tools, Netdata and FIO, that led me to the discovery of how single-threaded performance works on TrueNAS SCALE with the Atom processor, and why I had to switch my stuff over to unencrypted datasets to get the performance I was looking for out of this particular system. I just wanted to share how I look at things, and how you can look at them too, and then send you down the rabbit hole of changing all the parameters and seeing how they impact CPU or performance, or maybe keeping the same parameters and changing things on TrueNAS, like sync versus no sync, or the block size of a dataset, and seeing how that affects performance. Let me know in the comments below. Let me know if you love or hate the script; I'm not the best writer of scripts. It's all done in bash, so now feel free to bash on me in the comments. If you don't like it, I'll take that criticism as well. Like and subscribe to see more content from this channel, head over to my forums to see this script and engage with me on this topic or any topic you're seeing on the channel, and visit lawrencesystems.com if you'd like to connect with me on whatever socials you find there. All right, and thanks.