Tom here from Lawrence Systems. We're going to talk about FreeNAS, ZFS pools, RAID-Z, Z2, Z3, capacity, integrity, performance, and of course the subject of much debate and many sleepless nights for people designing storage servers. If you want to learn more about me and my company, head over to LawrenceSystems.com. If you want to hire us for a project, there's a Hire Us button at the top there. If you'd like to help this channel out in other ways, there are affiliate links below that do help us out and get you offers and deals on products and services we talk about on this channel. And of course, there are the forums, where I'm reading this from; that's a great way to participate and communicate with me, for video suggestions or just for discussion on a topic. Most of my videos are cross-posted to these forums.

This particular post is something I've been putting together, not because I have the answer for you, and I want to talk about that. There is no one-size-fits-all when it comes to laying these out. A lot of people start with the same simple question when they first get into storage: how should I lay out these eight disks, ten disks, four disks, whatever that number is, when building out a storage pool? This is where all the balances have to be struck. Unless your budget is unlimited, you may not get everything you want, and even an unlimited budget runs into the laws of physics and the current state of the technology. When you're setting up a ZFS pool, it's basically pick two, but you can't pick three: capacity, performance, or integrity. Where you want to land is going to vary with the setup and design of these ZFS pools. I'm going to talk about them on FreeNAS specifically, but this applies to ZFS pools in general.

ZFS I'm a huge fan of. It is an incredible system, and it is highly fault tolerant provided it's set up properly. It has performance and fault tolerance in its design, and it is very much used at scale commercially. A few people have asked, is this even something that's used in the enterprise? Absolutely. Enterprise companies don't always reveal what they're using, but I will at least preface this with: yes, we have consulted with very large companies that do not want us to drop their names, although it'd be kind of cool to say if I could. This is used in very commercial markets.

All right. Now, a little bit of background: how is a ZFS pool formed? In the basic sense, if you're setting up a simple system, you're probably going to have one VDev. But once you get into larger, more enterprise systems, you're going to have multiple VDevs, and that's going to affect your performance. The basics of ZFS performance: drives are logically grouped together into one or more VDevs, and each VDev can combine physical drives in a number of RAID-Z configurations. If you have multiple VDevs, the pool is then striped across the VDevs. So you want to make sure there's fault tolerance within each VDev, because if we lose one VDev and it didn't have proper fault tolerance, the whole pool falls apart. The VDevs are the storage parts of a ZFS pool. They are separate pieces, and I'll leave a link so you can read about this; we're not going to dive into all of it here, but there's more reading, of course.
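To make that concrete, here's a minimal sketch of what a single-VDev versus a multi-VDev layout looks like at the command line. The pool name (tank) and the device names (da0 through da7) are hypothetical placeholders; on a real FreeNAS box you'd normally build this through the UI, but this is what it amounts to underneath:

```sh
# Single-VDev pool: one 4-disk RAID-Z2 group (any 2 disks can fail).
zpool create tank raidz2 da0 da1 da2 da3

# Multi-VDev pool: two 4-disk RAID-Z2 groups. ZFS stripes data across
# both VDevs, roughly doubling throughput, but losing 3 disks in the
# SAME group still destroys the entire pool.
zpool create tank raidz2 da0 da1 da2 da3 raidz2 da4 da5 da6 da7

# Show how the pool is laid out and the health of each VDev.
zpool status tank
```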
And that's the whole point of this post: it gives you all the tools and links to dive into this and learn more on the topic. The "ZFS: ZIL and SLOG Demystified" article over at iXsystems is a great one, and a few of these links are from iXsystems. They spend a lot of time developing and working with ZFS, not just to build FreeNAS; they contribute to the entirety of the open source project that is ZFS. And because they do the enterprise-level support for it, they know a lot about it, especially when it comes to designing these storage servers.

Back on topic. There are other pieces you attach to pools, such as cache drives and the ZFS Intent Log (ZIL) drive, that can also enhance and boost performance. But the storage, the data, lives within the VDevs. The cache and ZIL are something separate: ways to buffer the writes being committed, or to cache frequently used files and get them to you quickly. ZFS goes a step further and also uses system memory to create a large cache, and I'll actually show that real quick here. This machine is not doing much, yet the ZFS cache is holding 28 gigs of caching right now, which helps serve up files faster. This particular machine we're going to talk about is my video server, so we'll get to that in a second.

"Six Metrics for Measuring ZFS Pool Performance" is one of the first articles I linked, and it really is a balance, like it shows at the very beginning: read IOPS, write IOPS, streaming read speed, and streaming write speed. Now, IOPS is tricky. When do you want a lot of IOPS? A database application is the classic example, so let's go more real world and talk about a WordPress website. Say we need a WordPress site that handles a lot of reads because people are pulling all these articles from it. We want those articles served up quickly, and that's a lot of database reads. We need high read IOPS because all these little files, the graphics and everything else, have to be served fast, and the queries against the database need to be fast because the WordPress backend is entirely database driven; maybe someone's searching through old archived articles. Write IOPS? Well, do you have articles being posted very often? No. So maybe you sacrifice a little write IOPS because the integrity of the site means more to you; the uptime, and not losing the drives that serve this site, are really important. You give up some write IOPS because you added integrity, and more capacity in terms of how many drives you can lose. So there's a trade-off in there. What about streaming read? Same thing: we want to stream really fast because maybe we've embedded some videos in that website. So now we have a clear, or at least somewhat concise, use case.

Now swap that over to a forum, like the Discourse forums I run. Now we need both read IOPS and write IOPS, because we don't want it to pause when people post; nobody wants to wait for a post to publish, or have it time out while publishing, because read IOPS are saturating the pool. Streaming speed? That's less important here, as long as it streams fast enough, because people are mostly reading static, regenerated content without videos embedded.
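As a minimal sketch of how those extra pieces attach to an existing pool, and how you'd actually watch the IOPS balance, here are the underlying commands. The pool name (tank) and device names (nvd0, nvd1) are hypothetical; on FreeNAS you'd do this through the UI:

```sh
# Attach a fast SSD as a separate log device (SLOG) to absorb
# synchronous writes, and another as an L2ARC read cache.
zpool add tank log nvd0
zpool add tank cache nvd1

# Watch per-VDev read/write operations and bandwidth every 5 seconds;
# an easy way to see the read IOPS vs. write IOPS balance live.
zpool iostat -v tank 5

# On FreeBSD/FreeNAS, report the current ARC (RAM cache) size in bytes.
sysctl kstat.zfs.misc.arcstats.size
```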
It's just whatever screenshot someone like Tom posted in his forums. So these are things you have to consider, and of course it goes further out. If you start talking about running virtual machines on here: does that virtual machine read a lot? Write a lot? What should the cache be? Maybe cache drives would help; maybe they won't, because with a VM you're not serving up the same files all the time. It's very dynamic, because of the way the data is saved in the iSCSI format. And now you can see why I'm going to stop talking and leave you links to lots of these articles, because these decisions are why I can't just answer the question "how do I set up my drives?" Really, you have to think about how you're going to use it.

They break down a lot of great detail with all the graphics: how the data lays out across a mirror, a six-by-two-way mirror, et cetera, et cetera. Then they have a part two that dives a bit more into performance and some of the topics on this. And then we have this site here, which gives some excellent examples. You can extrapolate it out, because they started with one drive and give you the performance of one drive. This is obviously a little bit older of a drive; they break it all down with this four-terabyte drive. I know four terabytes is getting to be small here in January 2020; you're probably building with bigger drives, or maybe you have a bunch of four-terabyte drives. But you can see the performance differences just by changing things around: from a stripe, to a mirror, to striped mirrors, to breaking the same disks into different VDev layouts, and seeing the performance differences you get, and of course the capacity sacrifice. So this is a real way to visualize it: as you gain fault tolerance, you lose capacity. You get certain read performance benefits, but your write performance drops. And here is the write performance on a RAID-Z3 versus a RAID-Z2 with the same drives; the only thing changed was the RAID-Z level, and you can see the differences that created in read and write performance and, of course, the capacity loss. There's a lot to think about in there.

And of course, this white paper right from iXsystems is a whole white paper on storage and servers driven by open source; it goes back to all the pieces and gives examples of real-world workloads. Another great read. All of this is linked in my forums so you can dive deep into it. And this is a kind of interesting article written by Matthew: "The popular OpenZFS has spawned a great community of users, sysadmins, architects and developers, contributing a wealth of advice, tips, tricks, and rules of thumb on how to configure ZFS," talking about setting up wide RAID-Z VDevs. There is some simplicity to having a wide RAID-Z VDev, but of course that stresses the pool differently depending on how wide it is.

A lot of the reason for me posting this video: one, I get a lot of questions from people asking about it; two, I'm working on a review of TrueNAS. So let's look at my system here and how my disks are laid out. This is the server that mostly stores my videos, and it is backed up completely to another system. So I just built a really simple RAID-Z2 pool with these drives right here. RAID-Z2 offers enough fault tolerance for me and my videos because I know it gets backed up every night.
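As a rough back-of-the-napkin sketch of that capacity trade-off, assuming twelve 4 TB drives and ignoring ZFS metadata overhead (so these numbers are approximate, not exact):

```sh
# Rough usable capacity for 12x 4 TB drives, by layout:
#   stripe (no redundancy):  12 x 4 TB = 48 TB; lose any disk, lose the pool
#   6x 2-way mirrors:         6 x 4 TB = 24 TB; best IOPS, 1 failure per mirror
#   1x 12-wide RAID-Z2:      10 x 4 TB = 40 TB; any 2 disks can fail
#   2x 6-wide RAID-Z2:        8 x 4 TB = 32 TB; 2 failures per VDev, ~2x IOPS

# Compare raw vs. usable space on a live pool:
zpool list
zfs list
```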
So, in theory, if I lost a drive, I could resilver it. And okay, while that resilver process does tax the system, RAID-Z2 gives me enough fault tolerance that I could survive another drive being lost while I'm doing it, and hopefully I wouldn't lose a whole day of videos that I stuck on here. So I chose RAID-Z2, and this layout works really well. The other thing stored on here is a handful of VMs for my lab server. Once again, they're not anything using high IOPS. If you watch any of my lab videos, I'm doing a bunch of FreeNAS talks and a bunch of talks on pfSense, and those types of videos are what's stored here, along with those VMs. Once I boot them up for a demo, with a couple of Linux servers I have on here, they're not doing anything that requires a ton of read/write intensity. So RAID-Z2 with nothing else added to the pool works perfectly fine.

When you talk about enterprise usage, you're talking about real intense systems that are going to be pushed to the limit. We talk about TrueNAS being an awesome commercial option; that's why I'm going to be doing a video on it, and I've already done the basics going over the hardware. So how would you lay out a large TrueNAS server? Well, this is a series of RAID-Z2 VDevs, five drives in each one. And like we said at the very beginning, the pool stripes the data across each VDev that makes it up, so we get amazing write performance, and the fault tolerance is Z2 within each grouping of five drives. Then outside of that, and this is that real-world example, here's the log, the cache drives, and a spare ready to be thrown into the mix if any one of these drives fails. Under this circumstance, this is more of a high-performance layout: we have the fault tolerance of RAID-Z2 grouped down to five drives, so any one of these VDevs can lose a drive and rebuild that particular drive. Then we have the ZFS Intent Log here, and the cache for serving up frequently accessed files. The separate ZIL (SLOG) matters because you may want synchronous writes for really good integrity when you're serving up VMs over NFS or iSCSI. I'll dive into a demo of this server soon.

But I wanted to bring this up because the question comes up so often, and it's why the first comment you see when someone asks in a forum post "how should I lay out my drives?" is "it depends." Well, it does depend, and this is all the things it depends on. These posts, I've read through all of them, maybe even more than once, and sometimes again when I'm laying something out, because I want to be right when I do it. They're great reads, a great way to get a better grasp and more depth on this particular topic, and hopefully they get you a better understanding of why people start with "it depends" and why you can't just lay drives out without thinking about your use case. I'll leave a link to this forum post, which goes to the video, and the video links back to the forum post, and that cycle continues. All these links are right there in the forum post so you can really dive deep into this. Feel free to contribute to the forum post if you have other sites that maybe I missed that you think give a great explanation of this, or that you just found really, really helpful.
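Sketching that enterprise-style layout as a single command, so you can see all the pieces in one place (pool and device names are hypothetical, and TrueNAS would build this through the UI, but this is what it amounts to):

```sh
# Four 5-disk RAID-Z2 VDevs striped together, plus a SLOG, an L2ARC
# cache drive, and a hot spare ZFS can resilver onto automatically.
zpool create bigpool \
  raidz2 da0  da1  da2  da3  da4 \
  raidz2 da5  da6  da7  da8  da9 \
  raidz2 da10 da11 da12 da13 da14 \
  raidz2 da15 da16 da17 da18 da19 \
  log nvd0 \
  cache nvd1 \
  spare da20

# For a dataset backing NFS/iSCSI VM storage, force synchronous writes
# so every write hits the (fast) SLOG before being acknowledged.
zfs create bigpool/vms
zfs set sync=always bigpool/vms
```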
Or if you have questions and discussion and you want to keep this topic going, this is a great place to post it. There are different methodologies for doing this, and I'm not saying my layout is exactly right. This is something I think is important for technicians to think about, including myself, and I'm always open to other ideas. Like I said, I went really simple with this one. The other one seems more complicated but can provide better performance, and I'll get into that when I do that video. But like I said, this is still a great discussion about how all this needs to be done; there's a lot that goes into it, and that's what I wanted to make clear. Thanks, and thank you for making it to the end of the video. If you liked this video, please give it a thumbs up. If you'd like to see more content from the channel, hit the subscribe button, and hit the bell icon if you'd like YouTube to notify you when new videos come out. If you'd like to hire us, head over to LawrenceSystems.com, fill out our contact page, and let us know what we can help you with and what projects you'd like us to work on together. If you want to carry on the discussion, head over to forums.lawrencesystems.com, where we can carry on the discussion about this video, other videos, or tech topics in general. Even suggestions for new videos are accepted right there on our forums, which are free. Also, if you'd like to help the channel in other ways, head over to our affiliate page; we have a lot of great tech offers for you. And once again, thanks for watching, and see you next time.