Alright, this video is about setting up NFS on FreeNAS 11.2. This is the RC version, but in a couple of weeks the full release will come out. Now, I'm going to be using XenServer as the hypervisor I'm attaching this to for the VMs, but the NFS setup still applies whether you're using XenServer or any other hypervisor, such as VMware, with NFS as your backend storage. So let's get started; this is a pretty straightforward setup. This is an older machine, nothing high performance; it's my lab machine that I do some testing with, because it's no fun breaking production machines. If we look at the storage here, go over to Storage, Pools, and then Status, I've set up three hard drives. They're Western Digital Blacks, for those who care, nothing great; they're actually older used drives we had laying around. They're in a RAIDZ1 configuration with three drives. There are performance gains to be had, and I talked about that in my previous video on NFS versus iSCSI performance. I don't happen to have one, but you can use something like an Intel Optane as a SLOG device for the ZIL and get some more performance. This is mostly just a demo to show you how to get it all set up. So the first thing we need to do, because this is a freshly created pool, is add a dataset to store the VMs on, and we'll call it NFS_for_XenServer. We're using it for XenServer, but really you can call it whatever you want; it's just your naming scheme. I used underscores because you don't want any spaces there. Go ahead and hit Save; nothing special we have to do here. Then go over here and edit permissions. There's nothing on here yet, but out of habit I still apply them recursively. Then we go to Sharing, Unix (NFS) Shares, and hit the plus. Click down here for the path, NFS_for_XenServer, check All Directories, go into Advanced Mode, and leave Security at sys.
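For reference, the same dataset creation can be done from the FreeNAS shell instead of the GUI. This is just a sketch, assuming the pool is named tank and the dataset NFS_for_XenServer as in the video:

```shell
# Create the dataset that will store the VMs (equivalent to the
# Storage -> Pools -> Add Dataset step in the GUI).
zfs create tank/NFS_for_XenServer

# Confirm it exists and is mounted under /mnt, where FreeNAS puts pools.
zfs list -o name,used,avail,mountpoint tank/NFS_for_XenServer
```

Datasets created at the command line still show up in the GUI, but doing it through the GUI keeps the FreeNAS config database in sync, so the shell route is mainly useful for scripting.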
We're not going to get into all the security options. There are some security options for NFS, but a good way to set this up, if you're in a more enterprise environment, is to put your storage LAN on its own separate, locked-down network. Because of the security issues that can come up with NFS, it shouldn't be on the same network as everything else. It may be fine in your lab if you're not worried about anyone getting into the file systems, but you can also add a list of authorized networks and authorized IP addresses and lock this down. Now, real quick on the networking. Under Network, Interfaces, I happen to have a 10 gig connection on its own separate network: there's a direct cable between these two devices, between the XenServer hypervisor box and the box that runs FreeNAS. It's a 10 gig direct connect to give me more speed. You can pick these cards up for a pretty reasonable price and just directly connect them without a switch in between, and that's how this is set up, for performance reasons. It's just easier to test things; it's a lot faster to move virtual servers around over 10 gig than over gigabit. So that was it for the setup on the FreeNAS side; it's actually pretty simple. Then we're going to go over here and double-check that the NFS service is running, under Services, NFS, Configure. If there's a performance issue you can raise the Number of Servers setting, but don't exceed the number of cores, because that can cause its own problems. We're just going to leave it at 8; this is a 12-core system. I also have Enable NFSv4 checked. Whether you want that may depend on what you're connecting to, whether it supports NFSv4 and whether it supports Kerberos. It's not enabled by default, but just checking the box enables it.
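Under the hood, FreeNAS turns these share settings into FreeBSD's /etc/exports. As a rough sketch of what a locked-down entry might look like, assuming a dedicated 192.168.10.0/24 storage network (the subnet here is my assumption, not stated in the video):

```shell
# Hypothetical /etc/exports entry along the lines of what FreeNAS
# generates; always change this via the GUI, not by editing the file.
/mnt/tank/NFS_for_XenServer -alldirs -network 192.168.10.0 -mask 255.255.255.0
```

The -network/-mask pair is the exports-file equivalent of the "authorized networks" box in the GUI; with it in place, hosts outside the storage subnet can't mount the share at all.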
And the rest of these can stay as they are. Now, one last thing we do have to do, and that's back at the command line; you can do it from the Shell right in here. Let's see if I can make this bigger. If we ls /mnt/tank, we can see the NFS_for_XenServer dataset we created. You don't have to be in this directory; I'm just showing what's on there. But the important part with NFS is that by default, each dataset has sync writes enabled. You can set this for the whole pool, but we're going to do it at the per-dataset level: you want to turn sync off, or you'll have very, very poor performance. I covered this in the NFS versus iSCSI performance video: if you don't disable it, you get really poor write performance, because the ZFS ZIL trying to commit every single write as it happens causes a big bottleneck in IO. All you have to do is zfs set sync=disabled tank/NFS_for_XenServer. And I spelled something wrong; I didn't capitalize the S. It is case sensitive. Here we go. Now sync is disabled for that particular dataset. As for what sync=disabled does, the short answer is that ZFS no longer tries to commit every write to the intent log as it happens. This does come at the risk of data loss, though not necessarily data corruption. If there were a sudden power outage with data in flight, there may be uncommitted data that was never synced to disk, and that is a potential loss. Because there's roughly a five-second interval between commits, these VMs can end up five seconds older than the hypervisor thinks they are if you suddenly lose power while it's doing writes. So you could have five seconds of data missing, or more, depending on some other parameters and the write.
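In case the shell text is hard to read in the video, the exact commands are:

```shell
# Turn off synchronous writes for just this dataset; dataset names
# are case sensitive, which is what tripped me up in the video.
zfs set sync=disabled tank/NFS_for_XenServer

# Verify the property took effect; it should now report
# "sync  disabled  local" for this dataset.
zfs get sync tank/NFS_for_XenServer
```

Setting it on the dataset rather than on tank itself means the rest of the pool keeps the default sync behavior; only the VM storage trades safety for speed.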
You can read more into this and do some searching on what sync=disabled does in more detail, but it's not so much a risk of data corruption, just a potential loss of data. If you get into the enterprise storage market, there are ways to mitigate that by using really fast dedicated SLOG devices for the ZIL; they make special cache drives for solving exactly this issue, but they also get really expensive. So in short, this is the way you would probably set it up for a home lab, and even for a small business office this is fine. And as always, you can't just rely on hoping the system never goes down as your backup plan; please back everything up. All right, let's go into how to set it up in XenServer now, because this is all you have to do on the NFS side; you could actually stop here if you're using VMware and start mounting the share. Go over here to Xen Orchestra, find my demo host, then Storage, Add Storage. For the storage name we'll call it NFS on FreeNAS, choose NFS, and enter 192.168.10.10. Now we hit the little magnifying glass, because that sends a query back to the server and finds all the exported paths available; you can have more than one NFS mount, but we only have one, so it only shows one. Then we hit Create, and now we've created it. That's it, really simple. Now let's move a VM over to it. I have a Debian demo on local storage, and we're just going to go ahead and make a copy: Debian demo NFS copy. Choose the SR to use, and use compression, which is supposed to optimize the transfer. Hit OK, and we'll see the task kicking off here. Now, the local storage is a single spinning hard drive, not very fast, so we'll fast-forward through this part; it takes a few minutes for the data to propagate over.
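If you'd rather do this from the XenServer/XCP-ng command line instead of Xen Orchestra, a rough equivalent would look like this; the FreeNAS IP and export path are taken from this demo's setup:

```shell
# Hypothetical CLI equivalent of the Xen Orchestra "Add storage" step,
# run on the XenServer/XCP-ng host itself:
xe sr-create name-label="NFS on FreeNAS" type=nfs content-type=user shared=true \
  device-config:server=192.168.10.10 \
  device-config:serverpath=/mnt/tank/NFS_for_XenServer
```

Either way you end up with the same NFS storage repository; the GUI just saves you from remembering the device-config keys.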
So even though we have a 10 gig link that's really fast, the spinning hard drive can't keep up with the full speed of it. All right, now we have a VM copied over, Debian NFS, and let's talk about what the backend looks like. Let's log in to FreeNAS over SSH and look at /mnt/tank. Here's the data store that was created, and here's the VHD file; it's about 11 gigs now that it's copied over. This is where the thin provisioning comes in. Here's Debian demo on NFS. Over in Storage, Pools, we look at the NFS storage: the disk is 16 gigs, but it's using 11 gigs because there's some data in it for this demo. Go back over here and let's create some clones. Because of thin provisioning, these clone really fast. Now this is what's interesting: we have the base, which is the 11 gig file right here, but all the other ones are 37 kilobytes each, because no changes have been made to them. It's kind of like creating pointers: we have the base file, and then all the others cloned from it. This is where thin provisioning can really help you, because we haven't taken up much space; when you clone these, it doesn't need much more than a reference to where it came from, and it only has to track the differences. So we go back to VMs, ignore the local storage one, and go ahead and fire these up. Yep, start all five VMs. So far nothing's happened; they haven't had any changes made to them, so even though they're started, until some changes are written we still see just this small amount of data. Now the VMs are all booted up, so we see some changes, but they're still only taking 43 megs each. Even though they're all running VMs, they're not taking up a ton of space, because of the thin provisioning that XenServer provides over NFS. It's kind of neat to see.
And the last thing we're going to do is write some changes and show how that works, because one of the questions, of course, is how does it do when all these VMs are generating a lot of disk activity at once? That's a good question. The first thing we'll do is log into one of the VMs and do a quick write test. We'll grab the .103 machine. A quick speed test shows me I'm getting about 399 megs a second, so roughly 400 megs a second out of these drives. I know it's not the most accurate test; all I did, so you know what it was, is a time sh -c wrapper around dd, creating about a two gig file and then deleting it after syncing the system. It's not the most in-depth speed test, but that's not really what this is about. What I wanted to show you is what happens next. I'm going to go back over here: here are all these clones at 44, 43, 43 megs, the 11 gig base, and then one at 45. Because we wrote a 1.9 gig file to it and deleted it, that clone expanded, and it doesn't necessarily contract back; it's still referencing off of that first one, and now it's gotten bigger. Just wanted to give you some idea of what's going on behind the scenes. One more question you may want answered is what happens if we power off the machine. We'll show that in a second, but first we're going to generate a bunch of writes, so let's build a quick Ansible setup for this. Here are the IP addresses of each of these virtual machines: 103, 102, 100, 101 and 110. I created a quick inventory file; I was using it for something else, so I'm leaving that part in here, but here are my NFS machines: 103, 102, 100, 101 and 110. Then I just run ansible with that inventory against the nfs group, with -a, and we'll do something like uptime to make sure they're all responding properly. Cool, I see all of them are up and running.
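The quick write test was just timing dd. Something like this sketch is what ran on the VM; the exact flags and file path are my reconstruction, not shown on screen:

```shell
#!/bin/sh
# Rough write test: write a ~2 GB file of zeroes, sync so the timing
# includes flushing to disk, then delete the file so the space is freed.
TESTFILE=${TESTFILE:-/tmp/ddtest.img}
time sh -c "dd if=/dev/zero of=$TESTFILE bs=1M count=2048 && sync"
rm -f "$TESTFILE"
```

bs=1M with count=2048 gives about a 2 GB file, which lines up with the roughly 1.9 gig growth we see on the clone afterward; the trailing sync matters, because without it dd mostly measures how fast the page cache absorbs writes.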
So let's go ahead and run that test again, but this time we're going to run /root/speed through Ansible. What I'm telling it to do is simultaneously, using Ansible, tell all those machines to run that same dd write test at once. So they're all doing it, and what we see here, if this is working properly: hey, they're all getting bigger, because they're all writing at the same time. You can kind of get the idea. Now, my other reason for doing this test is that when you spread the load like this, instead of one machine getting all the speed, NFS will automatically balance it between all of them, so they all get roughly the same throughput because they all started at roughly the same time. This lets us see how NFS handles it. Let's also run this speed test repetitively; I have a script that just kicks these off over and over again, and we can see what happens. We can see it's hammering the CPU right here, doing all these writes. This was the test we just ran once, and now that little script is running it in a loop over and over. Also, I have an Ansible video for those of you who want to learn more about Ansible. I'm not an expert on it, but I did bring in an expert, and we made a tutorial and demo of how to get Ansible set up and going. But this gives you the idea: it has no problem handling multiple workloads, multiple VMs at full write, and it goes ahead and shuffles between them. It's up to the hypervisor to balance the load, because it's shared over NFS; that's how it's done right now. And the last thing I wanted to show you is what happens when, while all this writing is going on, you decide to unplug the system. Because obviously the other question is: will this corrupt all the VMs? Will this break everything? So the system's up and running, doing heavy IO, heavy writes.
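Roughly, the Ansible side looks like this. The group name nfs, the 192.168.10.x subnet, and the /root/speed script path are my assumptions based on what's on screen:

```shell
# Hypothetical inventory matching the VMs in the video
# (last octets from the video; the subnet is assumed).
cat > inventory <<'EOF'
[nfs]
192.168.10.100
192.168.10.101
192.168.10.102
192.168.10.103
192.168.10.110
EOF

# Guarded so the sketch still runs where ansible isn't installed.
if command -v ansible >/dev/null 2>&1; then
    # Make sure they all respond before the real test.
    ansible -i inventory nfs -a "uptime"

    # Kick off the dd write test on all of them at once.
    ansible -i inventory nfs -a "/root/speed"

    # The repeat script is just this in an endless loop;
    # bounded here so the sketch terminates.
    for i in 1 2 3; do
        ansible -i inventory nfs -a "/root/speed"
    done
fi
```

Ad-hoc mode with -a forks against all the hosts in the group in parallel, which is why all five clones start growing at the same time.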
So we can see it's using a decent amount of CPU here, quite a bit actually, and let's see what happens when we unplug it. Here's the little test rig, with the system running. Just so you know, there's a fiber cable connecting these two; that's the 10 gig connection. And we're just going to kill the power, and then we'll power it back on. That shut off everything; the server running FreeNAS just had a catastrophic failure. Now everything's starting to boot back up, and I'm going to pause here, because these things take about seven or eight minutes to get fully booted. All right, so we unplugged the system while it was under full load, so we have to log back into everything. One of the points I was making is that sync was disabled, which means we could have lost data, and we were writing to all the VMs on the NFS share simultaneously, at their maximum speed shared between all of them. We'll go ahead and refresh the page. All right, all of these are shut down. We'll go over here to the host's storage, and I see the NFS is disconnected. That's actually because the Dell server running XCP-ng boots up faster than the older FreeNAS server. So let's go ahead and click Connect. Now that's running again, and we'll go over to the VMs. We'll take the first one, for example, look at the console, and see what happens when we boot it, because, well, how much data did we lose? What's it going to do? Honestly, really nothing. It may want to run, and probably should run, a file check on itself, so it'll do a check disk, or fsck I should say; Linux habits. Yep, and I saw it go by really quickly; it boots fast, but it pretty quickly ran through there and said that's what it was doing, and it's back up and running. Back to VMs, and we'll go ahead and start the other ones.
Fire them all back up. Okay, these are all started now. Let's go back into the directory on FreeNAS and list it: they're all 4.2 gigs because of all the reading and writing in there. But all the data's fine, even with a catastrophic failure. Now, I'm not going to say that every time you rip the power cord out it will absolutely come back flawlessly. I will say that we have tested this many times, and we've obviously seen it happen for clients who don't have proper backups, or who do have battery backups that decided to fail catastrophically. That has occurred, and it still recovered. And this is with both VMware and XenServer; we've seen excellent results overall. Like I said, results may vary; always have good, solid systems and really good backups, as frequently as possible. But losing power is not the end of the world, generally speaking. NFS, or I'm sorry, ZFS with NFS, even with sync disabled, is still a very resilient, fault-tolerant system. I've done many demos on ZFS, and this is why it's very popular in the enterprise storage market; a lot of companies besides FreeNAS use it and work on the project, because it's such a robust file system. But you do have to disable sync if you want any performance, or get a really fast RAM-based caching device for it. So that's it for the video: just setting it up and showing you a bit of torture testing around worries about file corruption and things like that. Hopefully this was helpful, and thanks. Oh, and if I did something terribly wrong, or if you have some ideas or optimized settings, let me know and we can always make a part two of this video, because I certainly don't know everything there is to know, and there's always someone smarter than me out there who may have a suggestion for optimizing this and making it that much better. Thanks.
Go ahead and head to LawrenceSystems.com, where you can reach out to us about projects we can help you with. We work with a lot of small businesses, IT companies, and even some large companies; you can farm work out to us or just hire us as a consultant to help design your network. Also, if you want to help the channel in other ways, we have a Patreon and we have affiliate links; you'll find them in the description, along with recommendations for other things you can sign up for on LawrenceSystems.com. Once again, thanks for watching, and I'll see you in the next video.