Tom here from Lawrence Systems. We're going to talk about creating a TrueNAS system nested inside of an XCP-ng system. Not something I recommend for production, but there are use cases and reasons you may want to do this. If you have only a single server and you want to build your lab out with it, that's pretty much the biggest use case, because I do test out TrueNAS Core in my lab from time to time, not on hardware but virtualized, to quickly test and spin something up. So that's about the only time I'd say this is a useful thing to do, not something for production, but I'm going to show you how to do it. I'm going to show you the performance results of accessing the hard drives directly versus accessing them with a nested version of TrueNAS, some of the pros and cons, and some of the challenges you might run into when you're testing this, and hopefully save you a little bit of time and headache if you think this is a good thing to run in production. Before we jump into that: if you'd like to learn more about me or my company, head over to lawrencesystems.com. If you'd like to hire us for a project, there's a Hire Us button right at the top. If you'd like to help keep this channel sponsor-free (and thank you to everyone who already has), there is a Join button here for YouTube and a Patreon page; your support is greatly appreciated. If you're looking for deals or discounts on products and services we offer on this channel, check out the affiliate links down below in the description of all of our videos, including a link to our shirt store. We have a wide variety of shirts that we sell, and new designs come out, well, randomly, so check back frequently. And finally, our forums at forums.lawrencesystems.com are where you can have a more in-depth discussion about this video and other tech topics you've seen on this channel. Now back to our content.
I'm going to first mention PCI passthrough, since this is what a lot of people talk about, and I have found this to be a very hit-and-miss process with TrueNAS specifically. PCI passthrough itself works fine; it's a very functional part of XCP-ng, and I'll leave a link right here to the documentation on exactly how to do it. One of the things you can do is pass through a controller: for example, if you have a system that boots off of one set of drives but has a controller, such as an LSI controller, handling another set of drives, you could use this process to pass that LSI controller through. The reason I say this is hit-and-miss is that I have had people tell me it works great, and I have also tested it myself and seen it not work great, causing weird corruption errors and things like that. Apparently, and this is not just XCP-ng, there are sometimes inherent problems with passing certain PCI devices through; it can be a little challenging. Sometimes the driver may not work properly, or the device may not pass through properly, and it's really not a problem that shows up at all until you've got the system under heavy I/O load. We were loading about 15 VMs on a system when the problem started showing up in the form of CRC errors and a lot of ZFS errors that were hard to troubleshoot, but it went away once we put everything on direct hardware, so we knew the hardware was all fine. The way we solved it, and the way we're going to demonstrate here (I've done a video on this topic already and I'll leave a link to it, so I'm not going to cover that part again), is Xen whole-disk hard drive passthrough. The downside, besides not being able to get SMART status inside of the TrueNAS system:
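For reference, XCP-ng's documented approach to PCI passthrough boils down to hiding the device from dom0 and then assigning it to the VM. A rough sketch, where the PCI address 0000:04:00.0 and the VM UUID are placeholders for your own controller and VM:

```shell
# Find your controller's PCI address first with: lspci
# Hide it from dom0 so a guest can claim it (placeholder address shown)
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:04:00.0)"
reboot

# After the reboot, assign the hidden device to the TrueNAS VM
xe vm-param-set uuid=<truenas-vm-uuid> other-config:pci=0/0000:04:00.0
```

These commands need to run on the XCP-ng host itself, and as mentioned, whether the passed-through controller behaves under heavy I/O depends on the specific card.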
I did find this to be a very stable way to do it. Xavier and I worked together to build his system out, we found this method to be very reliable, and we're going to be doing some upcoming videos on that topic. By taking Xen and individually passing through the hard drives with this methodology, we had no problems with it, and this is the way I demo it in my lab. Now, the server we're going to be doing this on (I'll leave a link to it) is this Supermicro SuperStorage server, and that particular server with the same specs is what this system is running on, so I'm not going to dive into every little nook and cranny of the specs; you can just look through that video, where I have all of that outlined, and I'll link to the forum post as well. All right, now what I have done is loaded XCP-ng on that server, and it works fine; from a review standpoint, just using XCP-ng on that SuperStorage server, no problems there. But then there's the question of how you configure all the storage that's available in there, and what some of the methodologies are. Well, you could let Xen Orchestra, along with the XCP-ng system, load ZFS on there and manage it that way, and so I did. I also have TrueNAS 12 that we loaded on here and set up. Now, the first question about setting up TrueNAS Core on here is what to use as a template, because I don't see a TrueNAS Core template in there, or any BSD templates. I just grabbed the Debian template; no big deal there. So we call it "TrueNAS Core 12", give it, oh, I don't know, 16 CPUs, and maybe 8 gigs... no, let's say 32 gigs of RAM. ZFS generally uses quite a bit of memory for caching, so the OS isn't really what takes all the space; it's going to be the caching. So size that as you will. This is also how I demoed things in the TrueNAS SCALE video I did recently, but once again, I use this in a lab. So there's the TrueNAS 12 beta ISO; we'll just choose that and pick the network.
We do need a local storage disk to put this on; we'll just make it 64 gigs and give it the name "TrueNAS Core drive". This is the first part where I'm basically using the standard local storage that's on here to load the OS, so we're just going to load it up on this local storage and hit Create. I didn't want it to boot yet, because I want to go over here to Advanced first. A couple of quick things need to be set. One is nested virtualization: if you don't turn this on, the jails will probably not work at all, so do turn that on. It's basically what it sounds like: you're nesting one server inside another, so if you want that nested virtualization to work, you're going to need to turn that on. I also don't need network boot, so that's the other thing I'm going to turn off. Other than that, save, go over here, and start it up. I'll run through the install process really quick. There's only one drive right now, so I'm going to choose it; we have not added all the other drives, this is just the local drive I created for this demo, so we set it up on that one and proceed with the installation. The password? Someone's listening and saying "that's probably 123456", and you'd be correct. Installation is done; shut down the system. All right, and right here I'm in the directory under /srv that I made for the NVMe drives (my, you know, less-than-clever naming scheme), and you can see the symbolic links I have created. If you haven't watched it or don't understand what I'm doing here, that's what this video right here is for; it walks you through the whole process of creating the symbolic links so you can pass individual hard drives through to XCP-ng. So that's all we did there: pass through those individual drives. When you go over here to Storage, they show up, and XCP-ng sees the disks. I named them NVMe (I believe the default name is "unknown"), so I called them NVMe 1, 2, 3... and actually, somehow, I added 1 twice. That's a mistake I have made before; we'll rename this one to 4. Just a typo in there.
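The whole-disk passthrough from that earlier video essentially comes down to symlinking the raw devices into a directory and creating a udev-type SR over it. A rough sketch, with example device names and paths rather than what's on my server:

```shell
# Example only: symlink each raw NVMe device into a directory that will back the SR
mkdir -p /srv/passthrough
ln -s /dev/nvme0n1 /srv/passthrough/nvme1
ln -s /dev/nvme1n1 /srv/passthrough/nvme2

# Create a udev-type SR so XCP-ng exposes each symlinked device as a raw disk
xe sr-create name-label="Passthrough drives" type=udev content-type=disk \
  device-config:location=/srv/passthrough
```

Once the SR exists, each symlinked device shows up as its own disk in Xen Orchestra, which is where the renaming above happens.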
There we go. If you don't rename them, it doesn't matter; these names are just for your reference, to keep your own sanity. So NVMe 1, 2, 3, 4; there we go. Then we go back over here, find our TrueNAS system, go to its disks, and attach them. This is why you want to make sure you name them properly: we choose a drive, attach, then attach another one, number 2. You can see that if these all had the same name it would be confusing, because you might forget which ones you've already done. And 3, there we go, attach. Now, the standard Xen storage repository right here can have more than one drive assigned to it. But go back over to here: these drives cannot be assigned anywhere else. These drives are dedicated to TrueNAS, and TrueNAS is getting raw access to them, so XCP-ng doesn't know what's on them. When you're in Xen Orchestra under Storage, although you can click on this, it doesn't understand what's on there; it always shows the total size of the drives but not their contents, so it'll always just look like this no matter what you add to them. And yes, you can actually attach them to another VM; it will be unpredictable behavior.
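I'm doing the attaching in the Xen Orchestra UI, but the same thing can be done from the XCP-ng command line with the xe CLI; a sketch, with placeholder UUIDs:

```shell
# List the VDIs the passthrough SR exposes (one per symlinked drive)
xe vdi-list sr-uuid=<passthrough-sr-uuid> params=uuid,name-label

# Attach one of them to the TrueNAS VM in the next free device slot
xe vbd-create vm-uuid=<truenas-vm-uuid> vdi-uuid=<vdi-uuid> \
  device=1 mode=RW type=Disk

# If the VM is already running, hot-plug the new virtual block device
xe vbd-plug uuid=<new-vbd-uuid>
```

Either way, the end result is the same: TrueNAS sees each drive as its own block device with raw access.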
So that might be a fun experiment, but here we're attaching them to this particular TrueNAS VM. What you could do, as long as they're not running at the same time, is connect them to two different instances of TrueNAS that access those drives at different times; that might be an interesting experiment. But ideally, when you're setting this up for TrueNAS, you want to put them all inside this one VM. So now we can fire up our TrueNAS system; go ahead and start it up. It's booting off the boot disk, which is the one I can do all the normal snapshots on. By the way, when you do a snapshot, it's going to give an error on these passed-through drives, because you can't snapshot them. So if we take a new snapshot of the system, it produces an error: "operation not permitted", because snapshotting is not permitted on those other drives and it's trying to snapshot them as well. Just be aware that this is another challenge you'll run into. There are some advanced ways to do it where you exclude these drives from the snapshot, but either way you have to be careful, because these are the kinds of challenges you add when you pass through either the hardware or the drives. Now, if you passed through the hardware, this wouldn't be the case, because XCP-ng wouldn't even see these drives; the whole controller would be passed through. But be warned: if you try it that way and you run into weird problems under heavy load, the passthrough might be the reason why, and I don't have an exhaustive list of which cards have problems and which work well, so you'll just have to roll with it. By doing it this way, though (and it's running through its first-time setup now), I've found it to consistently work, and work relatively well, except for some things we'll get into with benchmarking in a second once it's up and running. So FreeNAS... TrueNAS (I keep getting those mixed up) is booted at 192.168.3.142, so we'll log
in here with our fancy 123456 password. Get started: Storage, Pools, Add. Create a pool for the NVMe drives. Let's encrypt them ("I understand I'm going to lose data if I don't back up the encryption key"), and we'll do them as RAIDZ, so a standard RAIDZ1 across four drives. No problem; Create, Confirm. Oh: "name must begin with a letter." That's a ZFS thing; don't put a number in front. Fix the name, Create, Confirm. Download the encryption key and don't lose it; if you do, you won't get the data back. If you choose to encrypt drives, I recommend encryption and I recommend backing up the key. And that's pretty much it from here; it's just a normal TrueNAS setup. So we create a dataset, "xentest". We do want sync disabled, because we're going to do this over NFS; Submit. Quick and dirty, we're just going to open up the read/write permissions to everything; Save. Sharing: we create an NFS share, Submit, enable the service, no problem. Go back over to Xen Orchestra: we want a new storage, so this is our "nested TrueNAS Core storage" (description: copy-paste), NFS, 192.168.3.142. Hit the little search button, find the path, hey, there it is, Create. Voilà, we have it; it's working. We now have nested TrueNAS Core storage, which means we have to first boot the system, in this case XCP-ng, then boot TrueNAS, and only then can we boot any VMs that are stored on this storage nested inside of here. Now this is where the good and bad comes in. FreeBSD has not the best support for Xen, and what I mean by that comes down to how fast it can talk to the host. This is all physically in one server, which means the local networking is very, very local. So we're going to pull up iperf. This is on the actual XCP-ng server itself: iperf3 -c 192.168.3.142. Actually, I've got to turn SSH on in TrueNAS first: Services, edit, allow root login with password, Save, turn SSH on. All right, SSH is ready: ssh in as root, password 123456, and we'll run iperf3 -s for server there, then over here -c for client. And how fast does it talk?
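The iperf3 run between dom0 and the TrueNAS guest looks roughly like this (192.168.3.142 is the TrueNAS VM's address in this demo; substitute your own):

```shell
# In the SSH session on the TrueNAS VM: start an iperf3 listener
iperf3 -s

# From the XCP-ng host's console: push traffic at the guest
# (default run is 10 seconds; reports throughput in Gbits/sec)
iperf3 -c 192.168.3.142
```

Because both endpoints live in the same physical box, this measures the virtual network path through the host rather than any physical NIC.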
Not bad. The good news is FreeBSD is actually getting some reasonable speed inside of here: we're able to get this system to talk at 15 gigabits per second. The reason, and how this actually works internally, is that the FreeBSD VM is connected to the same bus interface but doesn't really have an attachment speed; speed ratings apply when you're dealing with physical network interfaces, and when you have virtual ones, your limitation is basically the internal bus architecture of the XCP-ng server. In this particular instance, with this particular server, we're able to get 15 gigabits per second, which is a reasonably impressive number. And it looks like it bounced up to 18; that's cool. 17. All right, quite a bit of speed internally. So now we can dive into some of the testing, and for that we have a VM running the Phoronix Test Suite. We'll go back over here to begin with Phoronix; currently I have it sitting on the local NVMe storage, so we're first going to run a test against the local NVMe storage and see how fast it is. We'll fire it up here, and then we'll migrate it over. I'm going to fast-forward and jump through to get some of these tests run, so you don't have to wait for them all to complete, but I will of course leave the Phoronix link in the description. Phoronix, if you're not familiar, is a free benchmarking suite you can download for Linux. We skipped ahead a lot because there was a lot of testing to run, and I got some interesting results. The single-NVMe-storage result is the boot drive in this one, a standard NVMe: pretty fast local storage, which is to be expected when you're writing to a single device. And I have another local NVMe test, because the four NVMe drives are in the front of this particular server I was testing. We got similar results when running a single drive, which just kind of makes sense.
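If you want to reproduce this style of testing, the Phoronix Test Suite is driven from the command line. A sketch; pts/fio is an assumption for the specific disk profile, since the video doesn't name which one I ran:

```shell
# Fetch a disk benchmark profile and its dependencies, then run it
phoronix-test-suite install pts/fio
phoronix-test-suite benchmark pts/fio
```

Run the same profile once per storage configuration (local NVMe, XCP-ng-managed ZFS, nested TrueNAS NFS) and the suite's result viewer will line the runs up side by side, which is what the charts below come from.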
You're going to get, you know, raw I/O performance relatively fast on NVMe; there's really no processor power needed for calculating the software RAID for ZFS or anything like that. Then we did XCP-ng with local ZFS, sync disabled, and XCP-ng-managed local ZFS. Both of these are just defaults, as in I just threw it on there and turned it on. There's all kinds of tuning you can do inside of XCP-ng to better enhance ZFS performance, including, because it runs in dom0, adding more memory to what's referred to as dom0 to further help the caching. I did none of that; I just turned it on and set up these drives so I could have a baseline for what they perform at. Then we had the TrueNAS tests: TrueNAS Core test and test 2. The only thing different on the very first one, or the bottom one I should say, was that I was trying out turning the hardware offloading off on the network interface, to see if it would make things better or worse; it did seem to make them worse. And this is where things get a little bit confusing: these tests were run back to back, and these are the kinds of quirks I've seen when you virtualize TrueNAS, although it seems to be quite stable in this configuration. It doesn't crash, it doesn't give me any CRC errors even after all these tests. We'll go back over to the dashboard: the drives are fine, the pool shows perfectly fine, online, no errors. So no issues with the pool; look back over here, just look at the status: the pool is perfectly fine. But there's plenty of deviation in performance.
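For reference, the dom0 memory bump mentioned above is done with XCP-ng's xen-cmdline helper; a sketch, where the 8 GiB figure is just an arbitrary example:

```shell
# Give dom0 more RAM (helps ZFS ARC caching when ZFS runs in dom0);
# takes effect after a host reboot
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=8192M,max:8192M
reboot
```

Again, I did none of this tuning for these tests; the defaults were left in place to get a baseline.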
So the first test, or actually I should say the third test, I think, was this one. I ran them a couple of times to see if it would do something different, and it was running up here. This bracket represents the wide deviation in performance; as a matter of fact, the test took very long to run, because the way the Phoronix Test Suite works, when there's too much deviation it keeps re-running a test a few times to try to get a better average. It didn't have to do that at all in the other scenario, when it was running on local storage. And then once it decided to go slower (I don't know what the cause was), it decided to stop right here and stay there. So once it hit the 139, you could run it again and it would come out roughly the same. I thought that was kind of interesting. Then we go down here: this is the random read test, the IOPS. Once again, really high IOPS performance from the single drives individually, reasonably good and of course consistent from here, and there's our bracket of wide performance, with potential issues there, and back over here the more consistent numbers that fell down. This one is with the hardware offloading set to normal; this one is with the hardware offloading on the network turned off. Then we go down to the random write test, with kind of similar results. The writing was a little bit slow on all of them once it got into ZFS in this configuration. Like I said, this is something that can be tuned.
Because this was NFS, you could also add a SLOG drive to ZFS, which would have enhanced the writes, and that applies to TrueNAS or any of these setups. But I just wanted to go over how this worked and show you that it does work in the scenario where you may want to use it, like I said, for setting up something in your lab. Do expect that there will be some performance issues, which is one of the reasons I don't recommend doing this in a, you know, production environment; it's less than ideal for that. But if you wanted to, or your budget only allows for a single server, and you want to know if you can run TrueNAS inside of it: it does seem to be quite stable as long as you're passing the drives through. It works; it's functional. It's a little bit inconsistent sometimes, which is the one challenge I found, and that's weird, because the random write test came out relatively stable across all of them. So, once again, this is the TrueNAS Core with the hardware network offload set to normal, not turned off, and then turned off here; I should have labeled them as such. I was in a bit of a hurry, so to speak, as I was getting aggravated with the inconsistent results and trying to find out why they were inconsistent, and, well, it's just because it's running virtualized. Still, the best and most ideal situation is going to be running TrueNAS dedicated, with raw access to the hardware; that's where you'll get the best performance and have the most options for performance tuning, because you're not dealing with any extra virtualization layers. Or you can run XCP-ng if you just want to take a group of drives and manage them via ZFS; granted, with XCP-ng, when you do this, there are work instructions they provide for setting up ZFS on XCP-ng, but you don't get a UI to manage the functional parts of ZFS; you manage that from the command line. That's all still out there in the notes, and I'll leave links to it, and of course the documentation if you want to try the PCI passthrough, because maybe
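Adding a SLOG to an existing pool is a one-liner on the ZFS side; a sketch, with hypothetical pool and device names:

```shell
# Attach a fast, power-loss-protected device as a separate ZFS intent log (SLOG);
# "tank" and the device path are placeholders for your own pool and drive
zpool add tank log /dev/nvme4n1

# Verify it appears under the "logs" section of the pool layout
zpool status tank
```

A SLOG mainly helps synchronous writes, which is exactly what NFS generates when sync is left enabled, so it's the counterpart to the sync-disabled shortcut used in this demo.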
your card works well with this, and that would be awesome, because then you could pass the controller through and maybe get a little bit better performance. But still, once you virtualize TrueNAS at all, you're going to have some quirkiness with it. All right, thanks, and thank you for making it to the end of the video. If you liked this video, please give it a thumbs up. If you'd like to see more content from the channel, hit the subscribe button, and hit the bell icon if you'd like YouTube to notify you when new videos come out. If you'd like to hire us, head over to lawrencesystems.com, fill out our contact page, and let us know what we can help you with and what projects you'd like us to work on together. If you want to carry on the discussion, head over to forums.lawrencesystems.com, where we can carry on the discussion about this video, other videos, or other tech topics in general; even suggestions for new videos are accepted right there on our forums, which are free. Also, if you'd like to help the channel in other ways, head over to our affiliate page; we have a lot of great tech offers for you. And once again, thanks for watching, and see you next time.