All right, so we're going to cover a topic here that seems to get a lot of debate, and it's related to storage planning. We have a FreeNAS server here, 11.2-RC1, running an older AMD Opteron 6172 12-core processor, and we're going to do iSCSI versus NFS. I know there are better machines out there, there are enterprise machines. "Tom, did you test on this machine?" I wish I had time; maybe if someone throws enough money at me, or throws machines and money at me so I have all of it together, I'm more than happy to do it. The important part about this test is consistency: we're testing both on exactly this system, the NFS on this system and the iSCSI on this same system, so they're at least based on the same hardware, and if you have better hardware the performance results should scale upwards for both. These are just two different formats, NFS or iSCSI, for how you present the storage to a hypervisor, and you should be able to extrapolate some results.

A couple of things to cover, all the details: this has 20 gigs of RAM in it, and it's running the release candidate, which I've found very stable. I know the new build comes out in only a few weeks, but I decided to run with this one. I've also tested with the 11.1 that we have, and my results are generally the same overall: iSCSI is faster. But we'll get into the details shortly.

Let's talk first about how the disks are set up, because that's an important aspect of this; NFS benefits greatly from having a ZIL drive. So we go over here, and we have the iSCSI and the NFS: the iSCSI is a zvol, block storage, and the NFS is a standard file-based dataset. If we look at the status, it's just a RAIDZ with three Western Digital Black 1 TB drives. That's what I had handy; we actually had four, but one was bad. These are used drives, but they all tested good except the one that's no longer in here. Then we have our ZIL, which is an SSD. No, it's not an Intel Optane or anything amazing, just a standard SanDisk SSD I had laying around, so that's on there for the ZIL. If you're not familiar with how the ZIL works, there's a great write-up on the FreeNAS page, which I'll link below, that explains how the ZIL works with ZFS. It's really interesting: it's not exactly a write cache, like people tend to call it, it's the intent log, and it does help a lot with ZFS. We're going to show some of that in the testing.

Now that you have an idea there, let's look at the services we're running. I have NFS set up with 12 servers; there are 12 cores on this machine, so it's allowed to spawn 12 of them. We're using NFSv4 as the mount here; I didn't really see any speed differences in testing, though apparently there are some minor deviations between NFSv4 and NFSv3. Back in services, the iSCSI is pretty much default all the way across. There are no passwords on it; it's set up pretty simply, with one extent. The only checkbox off default is that Xen initiator compatibility mode has been checked; other than that, everything is just the defaults.
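Just so you can picture the layout, here's a rough sketch of how a pool like this gets built from the shell. The device names (ada1 through ada4) and the zvol size are just placeholders, not literally what's on this box, so treat it as a sketch:

    # Three-disk RAIDZ pool, then an SSD added as the separate log (SLOG/ZIL) device
    zpool create tank raidz ada1 ada2 ada3
    zpool add tank log ada4

    # The iSCSI side is a zvol (block storage); the NFS side is a plain dataset (file storage)
    zfs create -V 100G tank/iscsi
    zfs create tank/nfs

That zvol-versus-dataset distinction is the whole difference in how the storage gets presented, and it'll matter again later when we look at compression and provisioning.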
Now let's talk about network connectivity and interfaces. There's a 10 gig fiber connection between these boxes. This is statically set to 192.168.10.x with no gateway, because it's a direct connection between the server running XCP-ng, the latest version, and the FreeNAS. So it's 10 gigabit in between, and we don't have I/O bottlenecks in terms of networking when we're presenting this.

Let's go over and look at the XCP-ng server. We have a Debian on iSCSI and a Debian on NFS. Looking at the host storage, I kept the names really simple: this one's called iSCSI, this one's called NFS; one's mounted via NFS, one's mounted via iSCSI. They're really straightforward, and like I said, they're mounted across 10 gigabit between them. This is the storage network, 192.168.10.5, just so you get an idea. We are using jumbo frames, which means setting the MTU to 9000. It's a direct connection with no switch in between, so there's nothing else in the path to cause problems. I've done everything I can to optimize the performance between the head-end unit running XCP-ng and the FreeNAS box, and to limit any kind of I/O bottleneck, so they're on an equal platform. Both the NFS mount and the iSCSI mount go across this 10 gig link. That's an important distinction, just so we're all on the same page.

And lastly, I'm using the Phoronix Test Suite, the open-source test suite, for benchmarking, so all the results are completely reproducible by you. It's a free download if you want to use it. Then we have Netdata, which has been built into FreeNAS since 11.1. It's a great, really pretty way to look at all the stats in more or less real time, so we'll be looking at some of the services running on this and what it looks like from a performance standpoint, and we'll also be looking at some of the ZFS back end.

All right, let's get started on the testing; that's the fun part. I have this one called iSCSI Debian and this one NFS Debian; I made the naming as clear as possible, so of course the tests on this one run against the iSCSI storage, and on the other one against the NFS storage. Other thing to note, because someone always asks, "Hey Tom, how'd you get your shell to look like that?": there will be a link below to my GitHub where you can get the shell to look just like this, with the parameters I've put in. They're free.

Okay, let's go ahead and run the test. So it's phoronix-test-suite benchmark, and let me spell "benchmark" properly. There we go. We'll choose test all options, so we're going to test 4 kilobyte, 64 kilobyte, and 1 meg record sizes, with a 512 megabyte test file. If I had time I'd do all of them; if there's interest, maybe I'll run more extensive tests later. Then we choose 3, test all options. We'll save the results, because I'll leave links so you can view these results and stare at them in detail if that pleases you. We'll give it a name here; as you can tell, I've already been doing some runs, since I do test runs before the videos to make sure we get an idea of how it's going. So this one is just iSCSI, all the defaults.
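Quick aside for anyone reproducing this: the prompts in this run, record size, file size, and read/write selection, line up with the IOzone disk profile in the Phoronix Test Suite, so the invocation should be roughly:

    phoronix-test-suite benchmark iozone

It then walks you through the menus interactively: 4 to test all record sizes (4 KB, 64 KB, 1 MB), 1 for the 512 MB file size, and 3 to test both read and write performance, then asks whether to save and upload the results.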
So I guess I'll put "defaults" in the name, since there's no optimization in here, leave the rest the same, and go ahead and let this run. While it's running, a couple of things we'll pull up. We'll look at how the writes are actually being committed to the drives. For those who didn't catch it, that's zpool iostat -v 1, with the 1 meaning refresh every second. What this shows you, real quick, is the data being written, if there is any, to the log device. Right now there's none, because it's doing the read test; if I had to guess... yep, still running the read test. There's not going to be a lot written to the ZIL because it's not a write test; we'll see this flip around when it gets to writing, and you'll see a lot more in there.

Now, while this is running over here, let's move it over here and let it do its magic. We can see right away, here's the CPU usage: you can see it spikes while we're doing this. Now we've jumped to the write performance testing, and it's the same thing, the CPU kind of peaks between each one of them while it writes out the files. This is also where we go over here: there's still not too much going on in the ZIL. I don't have the best working knowledge of exactly how the protocol is written, but the way iSCSI synchronizes its writes just doesn't seem to be ZIL-intensive. NFS, on the other hand, is ZIL-intensive on a ZFS file system, so we'll do testing with and without the ZIL there, and it does make a performance difference. We're not going to bother testing iSCSI with and without the ZIL, because it really doesn't seem to make much of a difference, at least from the testing I've done. I'm also always open to doing these tests over again if someone tells me I did something wrong, or has a twist on it that I completely missed, or some optimization setting. So please let me know in the comments below; I do read all the comments, so I can learn more and we can all learn more together, because I like doing follow-up videos on this.

Okay, it's done. Let's go ahead and save the testing results. Yes, yes, upload it all; you can have all the logs and everything, and it's at this link. You don't have to try to copy it, I'll paste it below as well, and that's where the results page will end up. So from a read standpoint, until the file sizes start exhausting the cache, you get some crazy-high reads that are really, really fast with the small file sizes, and then the writes are a little more sane. On the write performance testing with the 512 MB file we're at about 409 here. Okay, that's the base test for this one.

Let's move over to the NFS world. We ran this right here, so that's the test that was run, copy, paste. We're going to do 4 just like we did last time, test all options; 1 for 512; and then 3, test all options. Yes, and we give the results file the name "nfs default options". All right, it's going to take a little longer to run, and you'll notice that right away; that's because it just doesn't have the read/write performance. Let's look at what it's doing, though. Watching over here while these tests run, we see our CPU peaking up in the 26 to 29 percent range, 13 percent, so it's just not hitting the CPU as hard. We'll go over here to the, oops, the ZFS stats, where those tests ran.
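By the way, since we keep bouncing between Netdata and that shell window, here's roughly the layout zpool iostat -v 1 gives you, with the numbers left out since they'll be whatever your pool is doing at that second; the point is the per-vdev breakdown, including the log device at the bottom (ada4 again standing in for the SSD):

    zpool iostat -v 1
                  capacity     operations    bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----  -----  -----  -----
    tank          ...    ...    ...    ...    ...    ...
      raidz1      ...    ...    ...    ...    ...    ...
    logs            -      -      -      -      -      -
      ada4        ...    ...    ...    ...    ...    ...

When NFS is doing sync writes, that bottom log row is the one that lights up.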
Comparing the ZFS reads, they're just not peaking as much over here; you're not getting the throughput you were getting there. I try to say that word right; someone said I pronounced it wrong, and I don't want to say things wrong, but that does happen. Throughput. As you can tell, if you didn't know, I read more than I talk, although I do talk a lot in videos. ZFS IOPS: we actually seem to have more IOPS here.

All right, let's look at the results. So far we have really good read results; we're up here at 5,950, and obviously we'll compare more, but we're looking at 6,047 and 3,622, so one's still a lot higher, though not insurmountably so. When we get to the write test, that's where you're really going to see the differences, because the way NFS synchronizes writes is a lot different. We're going to show the optimizations next and how to change that.

So let's go back up to the top. Here's where the read tests were; now we're into the write testing, and we're using even less CPU. What's happening here, too? Look over here now: please note the log device, which was barely in use before, is really being used quite a bit. We constantly have all this data flushing in and out of the log, the ZIL, because NFS is very dependent on it. That's why we have it in here, and trust me, when we remove the ZIL (that's with sync enabled, which we'll get to), the performance just drops to nothing, especially on write testing; it just goes away. So the ZIL is an important aspect of this.

Let's see how the tests are going; almost done. Oh, I thought it was almost done, it's got a few more to go. It does take that much longer compared to the other one, so you can probably already surmise where we're going with this on I/O performance. I'm going to fast-forward for a second here, but you can tell we're just not seeing the same I/O performance.

All right, our testing is complete, and it's sad: 37 megs. So yes, we're going to save the results. Let's look at them real quick here, copy link, "nfs default options". Yeah, they're bad. We're sitting here looking at 37, versus, let's scroll down to the bottom of the other one, 400 megs. 37 megs versus 400. So obviously these are really bad, and let's talk about why, and what can be done about it.

NFS has a syncing issue on writes: ZFS tries to sync every single write. So how do you get around that? Well, the solution is not amazing, but it is a workaround, so let's talk about what it is. We're going to go here, close this, and we're going to set syncing to off. Now, corruption and missing data are two different things, so you may see people say "don't set sync to off because you have the potential to lose data." But ZFS is a copy-on-write file system: it never replaces a piece of data until the new copy is made, so you can't end up with corrupted data, but you can lose data in flight. Data loss is definitely a potential with this: by not committing the syncs in the same way, if you had a power loss you could lose some of that data in flight, more so than if syncing were enabled. So right now sync is enabled, and we're going to set it to off. It's pretty easy to do: zfs set sync=disabled tank/nf... oops, had it right the first time: tank/nfs. That turns off the syncing.
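For reference, checking and reverting this is just as quick; this assumes your dataset is named like mine:

    zfs get sync tank/nfs            # standard by default
    zfs set sync=disabled tank/nfs   # what we just did
    zfs set sync=standard tank/nfs   # put it back if the data-loss tradeoff isn't for you

It takes effect immediately and only on that dataset; the iSCSI zvol keeps its own sync setting.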
You don't need to redo anything; matter of fact, the VMs are still running. You can do this while the VMs are running. And we're going to run the same exact test again with syncing turned off. The first run was on defaults; now we run the same test again with sync off, so it should go way faster. So: 4, 1, 3, yes. "NFS with ZFS sync off"... "disabled" is actually the correct term, so we'll call it disabled. Everything else stays the same, and we'll go ahead and run the test again.

Now go here and look at the iostat, and we don't see it. Well, we've got to get to the write test, then we'll see whether it hammers the ZIL as much. That went fast. We're still in the read performance, because the syncing is a write problem, not a read problem. All right, so we're not committing all these on the same interval, which means we're not seeing the high usage of the log we saw before. Let the benchmark keep running; it's already going way faster, and we're seeing higher CPU usage because of the higher throughput. So we do see some results right away. Did it finish? Oh, it's still running a couple more. Still running some more tests, but they're going so much faster, and you can already start seeing these high results. We're on to the last test now.

All right, testing's done. We'll go ahead and save the results. Yes, yes, yes, copy link. And here are the results: "nfs default options" versus "nfs with zfs sync disabled". When it comes to read performance, not a big difference. Write performance? It's like two completely different systems. We're getting up here about 360, 375 on the final write performance. Dramatic, and all we did was set sync to disabled on the NFS dataset. Now, comparing it over here, we're still not seeing the same performance: we're at 400 here, 400 on the iSCSI writes, 414, versus the best we got here, 384; yeah, 375, 361. The write performance just isn't the same between them, but it's really close; we're approaching close.

Now the next test, still on the NFS side. Like I said, we've left iSCSI at all the defaults and haven't seen big variation. We're going to go over to our FreeNAS box, over to the storage pool status, and here's one of the magical things with ZFS: let's make the ZIL go away, and by the way, you can do this without rebooting the VM, so I'm leaving it running, without rebooting the servers or anything. If you want to add or remove a log or a cache device, you can just go here and remove it, or replace it if one's going bad. That's part of the beauty of these systems: you don't have to take them down, because taking systems down is not fun.

All right, we no longer have a log. If we go over here to zpool iostat, there's just the tank raidz1; there's no more log on there. I'll leave everything running; the VM is still running. So we'll run this test again; we still have sync disabled, but now there's also no ZIL drive. So: 4, 1, 3, yes, and this one is "nfs sync disabled and no zil". All right, we'll let this test run and see what the differences are without the SSD ZIL drive in the mix.

Okay, test completed. Yes, yes, yes. And not having a ZIL: not much of a difference that I see in the read performance, but the write performance is puzzling me, because everything I read says to definitely have a ZIL, and it actually went up a little bit.
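By the way, if you'd rather pull the log device from the shell instead of the GUI, it's roughly this, with ada4 once more standing in for whatever your SLOG device actually is:

    zpool status tank        # note the log device's name
    zpool remove tank ada4   # detach the SLOG; the pool stays online
    zpool add tank log ada4  # add it back later if you change your mind

Either way it's a live operation: no reboot, and the VMs keep running.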
So I found that actually kind of puzzling. NFS with no ZIL, and we've seen a little bit of a performance bump, so we're now just about on par with our iSCSI implementation; it's really close. Maybe this is a problem with the ZIL drive I'm using; this is all on the same array. So now we're going to run the iSCSI one again. We're not changing the sync option; the only thing different is that we removed the ZIL drive. This is the final test we're going to do here. So we'll go ahead and do test all options: 4, 1, 3, yes. No ZIL drive, just to see if maybe there's something up with my system, and whether it makes any difference to iSCSI that the ZIL is missing.

All right, the results are done; let's save them and see the comparison. Reads the same, writes the same: iSCSI seems not to care whether there's a ZIL. Actually, the slightest drop, but I'm going to call that within deviation, because when I rerun these tests we see a couple percent of deviation, and it's noted in the benchmark that it sometimes does that. But we do see an interesting difference with the ZIL there or not there when it comes to the NFS testing, and I find that kind of interesting: we do seem to get a little better performance without it, which goes against my understanding of having a ZIL on there. But this is with ZFS sync disabled, because NFS and ZFS fight with each other when it comes to syncing. Now, like I said, there are some data risks that come with this, in that you can lose data in flight that's uncommitted, because it's asynchronous. That's a deciding factor that's up to you; it's not the same as data corruption, but it is something you'll have to contend with if there's a problem. And I don't have an enterprise ZIL to test whether that would make a difference; you know, Intel Optane, the ZeusRAM, and devices from a few other companies are designed exclusively for this at the really high end. But these have been my results for NFS versus iSCSI, and I'm still leaning towards iSCSI; this is why we provision iSCSI: it performs consistently better.

I do know the thin provisioning thing is a big deal to some people, so I'll very briefly touch on that as the final little thing, because it's the other deciding factor: with iSCSI here, you can't thin provision. It's interesting, and we'll show it. Let me pull up the ZFS information. If we go here, and just so you know the command, it's zfs get all for tank/nfs, and we look through here: used, available, referenced, compression ratio 4.5x. I'm showing this because these are the same VMs copied onto each. We can see ZFS is able to compress this very well, because, well, there's a lot of just dead space inside here. So you have thin provisioning handled by the hypervisor first, and then you also have the compression on ZFS, so you get some storage savings, which of course can translate into performance as well as storage savings. Usedbydataset is only 2.32 gigs.

Now, XCP-ng does not support thin provisioning over iSCSI. So we do zfs get all for tank/iscsi, and we see something different. We see the same high compression ratio we had on the other one, so we're definitely compressing a lot of it, and the same thing, it's not using as much data when you look at the compression and the usedbydataset numbers.
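Rather than scrolling through zfs get all, you can also ask for just the properties we care about, on both datasets at once; a sketch, assuming my dataset names:

    zfs get compressratio,used,usedbydataset,logicalused tank/nfs tank/iscsi

That gives you a side-by-side table of how much each side actually consumes on disk versus what it holds logically.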
As a matter of fact, let's just do this. We're seeing that the iSCSI zvol uses just a little bit more versus here, which is about how it's provisioned inside the dataset. Remember the way ZFS works: it's providing the iSCSI target as a zvol, block storage, versus NFS, which is file-level storage. So it's going to have a different understanding, because on the NFS side you're looking at files. If we do an ls of /mnt/tank/nfs, whoops, we can see the VHD file here, and what it thinks is provisioned is 11 gigs, even though we've seen it's only using 2.32, because of the compression and what's actually in there. So it gets kind of interesting.

Now, the last thing we'll cover on this directly: let's duplicate a few things. We'll stop each of these VMs, stop, stop; we're stopping them so we can fast-clone them. So we'll go ahead and fast-clone: let's make one, two. Now we should have a couple of them. Let's go ahead and fire these up. Whoops, so they've actually done something; yep, start these three VMs. And we're going to go over here and clone a couple of these as well. All right, now we're spinning all of these up; they're all booting and doing something.

When we look at the storage, here are those iSCSI drives, not thin provisioned, so no savings show up there. Then we look at the NFS ones, and because it's thin provisioning, there's a base copy and then the clones forked off it, as shown here. It works a bit differently, but when you look at the disks, here are all the different disks, all forked off each other. Let's talk about what happens behind the scenes inside ZFS while these are all running. Here's all of them, one, two, three here. Well, you know, we need to clone one more, so let's go ahead and stop, clone, and start. There we go; now we have an equal number all the way around, booted up and running. Look at the storage: okay, everything's up and running. Look at the stats: yep, we pinned the CPU spinning up all those VMs. It seems to clone about the same either way, maybe slightly faster on NFS, I'm not sure.

But let's take a look at what's going on behind the scenes now. Here are all those VHDs, and it's only tracking the differentials between them. Whenever you do either a clone or a snapshot, you have your base, which is this one here, and then all these individual ones, each of which is only the differential from the base; hence the thin provisioning. Even though each of these virtual drives is potentially 16 gigs, they're not really taking up much space on the actual drive.

So, zfs get usedbydataset: we were at 2.32 gigs, and we're barely using any more; we got what, 50, 100, 200 more megs from spinning those up. But it's also about the same here: usedbydataset, 3.09 gigs. So I spun them all up on the side that's not thin provisioned, and it's still only using that much? Well, that comes back to, oops, the compression ratios. We keep getting more and more compression because it's really just seeing a bunch of duplication. You're still getting a lot of efficiency, because ZFS looks at it and goes, "this is all a bunch of duplicates," but XCP-ng, because it doesn't support thin provisioning, isn't seeing it that way.
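The fast clone here is happening at the VHD level in the hypervisor, not in ZFS, but if you want to see the same copy-on-write idea in ZFS itself, it's a snapshot plus clones that only store the blocks that diverge from the base; a minimal sketch with made-up names:

    zfs snapshot tank/nfs@base        # freeze a read-only base image
    zfs clone tank/nfs@base tank/vm1  # writable clone; costs almost nothing until it diverges
    zfs clone tank/nfs@base tank/vm2

Each clone starts out consuming essentially no extra space, and only changed blocks get written, which is the same differential behavior we're seeing in those VHD chains.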
So it's going to report differently. Just take from this knowledge what you will; I wanted to show how that works. We are going to run out of space quickly here, because we have, you know, the base copies plus each one not thin provisioned all the way out; but with ZFS handling it on the back end, we get these really high compression ratios, so it still works, and you still have a lot of efficiency. Yeah, it's kind of novel; I just wanted to point that out for those of you who say, "But I need thin provisioning." Maybe you do, and maybe that's a reason you absolutely want to do this with NFS.

So here's our compression ratio; we actually have worse compression now, I think. Let me look at the right numbers here. Okay, let's just get the compression ratio and compare them: that one's where it was, and the iSCSI one is at 5.2x; interestingly, it's compressing the iSCSI side more. I don't know why; like I said, these are the same VMs, we duplicated the same number of VMs, and for some reason we're seeing that.

So those are some of the underlying things. I don't have a ton of answers, but I'm sure someone a lot smarter than me maybe has a link to a great article that can tell me some of the things I may have missed, or things I did completely wrong. I will leave a link to the test results so they're all laid out for you, along with how the system is set up; like I said, I covered all of that. If there's something I missed, that will become a part two to this video. If there's something blatant that I just set up wrong, and you find a reason NFS should perform substantially better than iSCSI because I didn't do a setting, let me know and I will make a follow-up video. Thanks, and hopefully this was insightful; I know I did some learning today.

Thanks for watching. If you liked this video, go ahead and click the thumbs up, and leave us some feedback below to let us know what you liked and didn't like, because we love hearing feedback; or if you just want to say thanks, leave a comment. If you want to be notified of new videos as they come out, hit subscribe and the bell icon; that lets YouTube know you're interested in notifications, and hopefully they send them, as we've learned with YouTube. Anyways, if you want to contact us for consulting services, head over to lawrencesystems.com and you can reach out to us for all the projects we can do and help you with. We work with a lot of small businesses, IT companies, even some large companies, and you can farm different work out to us or just hire us as a consultant to help design your network. Also, if you want to help the channel in other ways, we have a Patreon and we have affiliate links; you'll find them in the description, along with recommendations for other affiliate links and things you can sign up for, on lawrencesystems.com. Once again, thanks for watching, and I'll see you in the next video.