Tom here from Lawrence Systems, and we're going to talk about NFS versus iSCSI. I just did a video on benchmarking because I like to show the methodologies and tools I use to come to these conclusions, but benchmarking is only part of the story. It's not just about the speed test; it's about things like thin or thick provisioning, and that is part of the equation when you're deciding whether you should choose iSCSI or NFS.

The other part of the equation I found interesting involved the Synology. I ran these tests on a Synology, and I also ran the benchmarks on TrueNAS. This is not a comparison of Synology versus TrueNAS; the machines are not equally specced, so I can't say which of the two would win a head-to-head showdown. The point is to take the same machine and run the tests on both protocols: run iSCSI and NFS on the same Synology, then run iSCSI and NFS on the TrueNAS. What was interesting to me (all the results are linked down below if you don't feel like watching the whole video) is that the results are a little bit weird: NFS was always slower on the Synology, but not always slower on TrueNAS. That's one of those factors that comes down to the design of the storage server and who manufactured and put it all together, because the product does have some effect. It's not necessarily a protocol speed issue; it can also be the system that is handling the protocol. So I wanted to touch on some of those topics, but also talk about the results and how we came to these conclusions.

Now, before we dive into the details: if you'd like to learn more about me and my company, head over to LawrenceSystems.com. If you'd like to hire us for a project, there's a Hire Us button right at the top, and that includes a lot of storage consulting. If you want to support this channel in other ways, there are affiliate links down below to get you deals and discounts on the products and services we talk about on this channel. Everything is time-indexed down below along with the links, so if you just want to jump ahead, feel free. But first I want to set up the context and scope of this project.

One: I only had time to test this with XCP-ng. The results may be different with VMware; maybe in a future video I'll redo these tests there, because I'm curious whether VMware handles iSCSI and NFS differently than XCP-ng does. I just didn't have time to set up VMware for a comparison, but nonetheless, here we are.

iSCSI and NFS are shared network storage tools. What they allow you to do, for example, is build an XCP-ng pool: we have three hypervisors in the pool, they connect to a switch, and that switch connects to your NAS or SAN, depending on what you want to call it. Essentially you have shared storage where the VM disks live, and that way you can easily pass a running VM between the different hypervisors, because the storage has a common place to live. Whether you choose a Synology iSCSI or NFS setup or a TrueNAS iSCSI or NFS setup, this is a common layout.

More specifically, though, what is the difference between iSCSI and NFS? With iSCSI, the NAS server cannot see the files; it is just hosting blocks of data, and all file-level functions are handled by the hypervisor. That's because iSCSI presents itself over the network as a block device. To oversimplify it a bit, you could picture attaching a network cable to a hard drive. The NAS hosts the iSCSI protocol and transports the data, but it's a block device, and the NAS basically has no insight into what file system is being used or how it's formatted; it's just handling the bits and blocks on the back end. This means all the VMs are stored in whatever format the hypervisor has chosen.
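To make the distinction concrete, here's roughly what creating the two storage repository (SR) types looks like from the XCP-ng `xe` CLI. This is only a sketch: the IP address, export path, IQN, and SCSI ID are placeholder values for illustration, not from my setup.

```shell
# NFS SR: XCP-ng mounts the share and manages thin VHD files on it (file level)
xe sr-create name-label="NAS-NFS" shared=true type=nfs \
  device-config:server=192.0.2.10 device-config:serverpath=/mnt/tank/vms

# iSCSI SR: XCP-ng attaches the LUN as a raw block device and layers LVM on it
xe sr-create name-label="NAS-iSCSI" shared=true type=lvmoiscsi \
  device-config:target=192.0.2.10 \
  device-config:targetIQN=iqn.2005-10.org.example:vms \
  device-config:SCSIid=<scsi-id-from-probe>
```

Note the types: `nfs` gives you file-backed VHDs the NAS can see, while `lvmoiscsi` carves volumes out of a LUN the NAS only understands as blocks.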
So with iSCSI, the NAS doesn't really have visibility. There are probably some tools you can load on some NASes to mount the LUN, provided you mount it in whatever format your hypervisor of choice formatted it in, but as I stated, it's a block device. That has a downside: it ends up thick provisioned, because the NAS doesn't understand the files well enough to do thin provisioning. I won't go too far into it, but thin provisioning is basically an over-provisioning method which allows you to say: I'll create a 60 GB drive for a particular VM, but maybe it's only using 10 GB, so on the NAS itself it only consumes 10 GB and can expand later. That's the advantage you get when you thin provision. But with iSCSI, because the NAS doesn't understand what's going on inside, it doesn't really have that opportunity.

Back over to NFS: the NAS server handles the files, and each VM and its snapshots are in VHD format, so they can be viewed as files on the NAS. We're going to demonstrate that really quick here. An NFS share is just a standard file share, very similar to the way a Windows file share works. (By the way, Windows file sharing, SMB, is not the best way to connect VM storage, for those of you wondering; the two popular options are NFS and iSCSI, which is what I'm talking about.) With NFS, the NAS uses whatever file system it uses: in the case of TrueNAS that's ZFS; on Synology it can be either ext4 or Btrfs, along with however Synology handles the RAID on the back end. Either way, these systems can actually see the files.

All right, let's do a quick example of thin provisioning and how the snapshots work. Here we have the "Win 11 Xen speed test" VM, and it's running on my TrueNAS Mini lab with thin provisioning, because it's done over NFS. I think there's only one disk in here; if we look, there's the one 60 GB disk. But what does that actually look like on the file system? Logged into the TrueNAS, it's only 36 GB, because it's thin provisioned.

So if we go back over here and click on this, we're going to take a snapshot of this virtual machine. One snapshot is probably fine. If we look again, it actually created a couple of them, because it has to track the differential between them, but they're only consuming 128 K, because nothing has changed; the VM isn't doing anything right now. So let's make the VM do something. I have some random benchmarks we can run; this is going to read and write a bunch of little files. OK, it's busy doing stuff now. Back over here, we see 2.1 GB worth of changes, not another 36 GB, as the file changes while it does its reads and writes and keeps the differentials. It's thin provisioned, therefore it's using very little space.

Right, so let's do the same thing with this Windows lab VM that's running on iSCSI. We go over here and create a snapshot for this one. With the snapshot created on the iSCSI storage, we click refresh, and we see quite a bit more data used: even though the only disks on here are this one disk and one snapshot, it's now using 115 GB, as opposed to the few kilobytes it should be, and we aren't even actively doing anything with that VM. That is thick versus thin provisioning.

So this is a big consideration beyond just speed, because the next problem you're going to have is waiting for a VDI to coalesce. I've talked a little bit about this in the past, but it can be a real challenge when you're thinking, "oh, I only snapshot temporarily for a backup or some other reason, and then I delete them later." This is where people have gotten themselves in trouble building out these systems thinking they have enough space. We'll destroy this VDI. Actually, I probably should have done it with this snapshot here; we'll destroy this one, since the VDI is already destroyed. Now we'll do another snapshot, and maybe we can even break this system: do another snapshot, then go ahead and delete these snapshots. What's going to happen? If we go back over here, we have VDIs to coalesce. They need enough space to get rid of the differentials: even though everything is happening in real time on the back end, the way a virtualization system works, you have to give it time to coalesce all the data and clean these up. This happens with NFS too, but with NFS being thin provisioned, it's not as big of a deal, because it will coalesce over time and each snapshot was only a few kilobytes, so it's easy to get rid of. Because these iSCSI ones are thick provisioned, it can take more time, and in the meantime you can run out of space waiting for things to coalesce, because each one had to be fully allocated.

So those are a couple of factors you really need to consider before you decide whether you want something set up with iSCSI or NFS. It's not just about speed; it's about this factor right here: are these snapshots going to coalesce, and will it happen before the next backup jobs run? That matters especially when you have a lot of backup jobs.

All right, now on to the test results themselves. First up is the Synology: this is a Debian VM on XCP-ng backed by the Synology RS3621xs+. I have a review coming up on this device, and part of reviewing it has been beating it up with a lot of tests, over and over again. There were a couple of times when the benchmark suite didn't want to run for whatever reason; when I put the "+" at the end of the name, it kept getting stuck. So that's why there's one extra column on here, but it's the same system.
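Quick aside: the thin-provisioning behavior in the demo above comes down to sparse files. The VHD on the NFS share claims 60 GB but only occupies the blocks actually written, and you can reproduce the effect with any sparse file on a Linux box: `ls` reports the apparent size while `du` reports the blocks really allocated.

```shell
# Create a "60G" file that allocates no data blocks (like a fresh thin VHD)
truncate -s 60G thin-disk.img

ls -lh thin-disk.img   # apparent size: 60G
du -h  thin-disk.img   # actual usage: 0 (no blocks allocated yet)

# Write 10M of real data into it, the way a running VM would
dd if=/dev/urandom of=thin-disk.img bs=1M count=10 conv=notrunc

du -h thin-disk.img    # actual usage grows to roughly 10M, not 60G
```

That gap between apparent and allocated size is exactly what the NAS can manage for NFS-backed VHDs and cannot see inside an iSCSI LUN.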
It's still the "+" model; the run was just being silly. When you start looking at the results, a couple of them didn't run, for reasons I'm also puzzled by, so you'll see a few rows without a comparison. But jumping right to the comparisons: in every category here, represented by the green and blue at the top, it was faster to run things on iSCSI, even the SQL tests. But how much faster? Well, this is 10 seconds versus 9 seconds, 38 versus 33 seconds, 80 versus 65, and on the SQLite 128 benchmark we start seeing iSCSI take a lot less time for some of these. When it comes to small writes, lots of small writes, we notice an advantage: the concurrent small writes are a little bit faster.

Now, one of the other things that happened here, and I made especially sure I did this on TrueNAS, is an important part of thick versus thin provisioning: running the tests while giving time for the VM to expand. There is a slight performance hit you take while something thin provisioned has to expand. So once you thin provision, you should run the test at least a couple of times so the VM can grow to its larger size. If you copy the VM over and it has a 60 GB allocation but only 32 GB used in a thin provision, you at least want to run the benchmark once so it expands to whatever it's going to expand to. This is not an issue when you thick provision something; you can just run the test once. That's also what causes some of these anomalies, and it did the same thing again here: it showed some of the tests running twice.

There's not much variation on the TrueNAS system, but let's talk about those numbers, because they're a little bit different. NFS versus iSCSI on TrueNAS: TrueNAS gives you a lot more options than Synology for fine tuning, but I left everything at default, the standard default dataset record size of 128 K. This is where you could actually dive in and do some fine tuning, on either iSCSI or NFS; I talked about this in the benchmarking video, where you fine-tune in order to optimize for your workload. This is a generic baseline, set in the middle at a 128 K block size. Raising or lowering the block size can cause variations in speed, but like I said, everything was left at default. Also, asynchronous writes were turned on for NFS on both the TrueNAS and the Synology. That's an important tuning factor you may want; mileage may vary, I just want to make sure it's documented here.

So, NFS on TrueNAS versus iSCSI on TrueNAS: on the SQLite tests, NFS was barely faster. Once we got up to the larger tests, much like we saw on the Synology, there were some speed differences where NFS fell behind a little. But this is where things get a little strange: NFS was able to perform better in a few categories. Let's scroll down and look at where NFS won; I believe it was all in the streaming categories, the larger block size tests. The Flexible IO Tester (fio) random read favored NFS; fio random read at 256 K and 1 MB all favored NFS; same with some of the 16 K tests. Sequential writing was actually faster, which I thought was interesting, and we got about a 21 percent speed difference on fio for sequential writes. Some of these, as I said, were repeated.
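To illustrate the kind of per-workload tuning mentioned above: rather than one big share at the default 128 K record size, you can give each workload its own ZFS dataset with a record size matched to its I/O pattern. A rough sketch, with made-up pool and dataset names:

```shell
# Default: 128 K recordsize, the middle-of-the-road baseline
zfs create tank/vms-general

# Database-style workload: lots of small random writes, smaller records
zfs create -o recordsize=16K tank/vms-database

# Streaming/sequential workload: large records favor big reads and writes
zfs create -o recordsize=1M tank/vms-media

# Each dataset can then be exported as its own NFS share
zfs set sharenfs=on tank/vms-database
```

The trade-off is the one described above: a record size tuned for one pattern usually gives something back on the opposite pattern.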
So those rows are just duplicates, the 256 K and 1 MB again. There are some factors where NFS was a little bit faster. But like I said, there's a lot more to it than just the speed. Speed is obviously an issue if you're optimizing, but the second part, the thin versus thick provisioning, may be more of a deciding factor than a five or six percent speed boost. Obviously we see some results as high as 80 percent, but that's also because the workload may just be more optimized for the default iSCSI settings than the NFS settings. And if I were to tune for the settings that gave iSCSI the speed advantage, it would probably lose its advantage on the larger file sizes. So ultimately it comes down to which one works for you based on the workload you plan to run. If you plan to run a very mixed workload, no problem: just leave it in the middle and get the best you can. Yeah, the middle never really helps much, but it's one of those things when you have a mixed workload. This is where the opportunity comes in if you set up different iSCSI extents for each workload and different NFS shares for each workload: you create a dataset for each optimized NFS workload instead of just creating one large NFS pool, and that's a big advantage.

The other factor is snapshotting. In TrueNAS, the ZFS snapshots are wonderful, but they create a bit of a problem for iSCSI, because of the way iSCSI stores all of the VMs in a single iSCSI LUN. That may be a problem for you: what if there are eight VMs in there and you want to restore one of them by rolling back to a snapshot? You can't; you have to restore all of them. With NFS, the snapshots are back to being just a file system: you can snapshot it, fork the snapshot, go grab that one VHD file, copy it back over to the pool, and you're done. With iSCSI, because the NAS itself is unaware of the file structures inside (unless you're using some other tool to mount it), creating a snapshot, forking it, and attaching to that particular iSCSI LUN is a lot more work, and you can't manipulate the files natively on the NAS, so now you have to figure out how to extract that VM out of there. It's not an impossible task; it's just a way more labor-intensive task to do and to set up.

Finally, I will do an upcoming video on storage design, because there is another factor when you're doing this, and the answer might be: why not both? You may want to set up your NAS with NFS for the VMs but iSCSI for things like Windows. This is a common storage design: the VM itself may run on NFS, and that's where the Windows VM lives, but do you store everything within that Windows VM? Not really; it's not a great idea. You may want to present iSCSI to Windows, so you have a block device presented to Windows with the full set of features that come with Windows and NTFS, delivered over iSCSI as the storage device. That way your VM backups can be separate. I'm going to do a future video talking about storage design, because a lot of people just want to stuff everything into the VM, and that's not the best place for it. You either want to use a separate NAS for your file shares, or, when you have to mount some type of data store because you're running a database and it's more practical to do so, you may want to have the VM boot and then mount the NAS in a different way. Like I said, with Windows, iSCSI is a popular way to mount block devices; but if you have Linux servers, they can mount iSCSI or NFS at startup too, and you get the best of both worlds: the virtual machine itself running from NFS is fine, but for performance reasons the actual data store may be on an iSCSI extent mounted by the VM, as opposed to trying to stuff everything into the VM.

This also gives you the advantage of being able to back up a VM easily, because the VM itself stays small. Graylog would be a great example of that; I've talked about it on this channel before, and I have a video on Graylog. Graylog needs a large data store, and stuffing all of that into a VM is not a great idea, because if you wanted to back up your Graylog VM, you'd end up backing up all that data. Creating a small VM and then mounting the storage separately is probably a better idea. I'll probably do some videos on that in the future.

Leave some comments below so I can get an idea of where the gaps are. A lot of my videos are driven by audience feedback and the knowledge gaps people mention: "hey, it would be great if you could explain how this is set up." That's also a great discussion to have in the forums, where all of this will be linked. Links to everything are down below, along with some of the previous videos, of course, and if you want to interact directly with me and dive deeper into this topic, the forums are a great place to engage with me, or of course hit me up on Twitter.

All right, thanks, and thank you for making it to the end of this video. If you enjoyed this content, please give it a thumbs up. If you'd like to see more content from this channel, hit the subscribe button and the bell icon. To hire us for a project, head over to LawrenceSystems.com and click on the Hire Us button right at the top. To help this channel out in other ways, there's a Join button here for YouTube and a Patreon page where your support is greatly appreciated. For deals, discounts, and offers, check out our affiliate links in the descriptions of all of our videos, including a link to our shirt store, where we have a wide variety of shirts and new designs come out, well, randomly, so check back frequently. And finally, our forums at forums.lawrencesystems.com are where you can have a more in-depth discussion about this video and the other tech topics covered on this channel. Thank you again; we look forward to hearing from you. In the meantime, check out some of our other videos.