People messaged me about following up on my lab, and I still see people asking. I'm happy that my video on Citrix XenServer is doing well, but as I've mentioned, you can just replace all of that. The video is still relevant; just swap everything over to XCP-ng as the replacement for Citrix XenServer. I replaced all of my servers with it. I know someone's screaming, "But Tom, test Proxmox!" I tested it. Maybe I'll go at it again; I know they released a new version. I just didn't like it as much as I like the whole XenServer ecosystem. Part of it is that I started getting into Citrix because I had some clients running it, and I have a friend working at a very large company with tons of VMs, and they're very happy with the way it works. But I didn't like the way Citrix changed the licensing around. I'm a huge fan of open-source software. Xen itself is open source, and Citrix puts their own spin on it. The folks over at xcp-ng.org really did a nice job: they kicked off a Kickstarter, got funded, and rebuilt XenServer as XCP-ng. They've added some new features, and all the features are enabled, unlike Citrix, where they took the open-source project, stripped out features, and made you recompile it yourself if you wanted them back. It was more than that, too; they were being Citrix. Watch my rant video on it; I'm not going to go on about it here. I didn't like the way they handled it, and neither did the community. It turns out the Kickstarter was well overfunded past its original goal, which is great, because it pushes further development of this product. And their business model is simple: they offer pro support. So it's 100% free, no licenses, no registration needed just to get a download, like Citrix requires. And there are a couple of different ways to manage it. So let me first start with the layout of all this, just to give you a brief overview.
I have two servers. One, called backup, is usually not turned on. It's only on when I need it, because it's a plan B if this explodes: either the FreeNAS explodes or this explodes. Those are unlikely scenarios, but that's what you plan for in IT, unlikely scenarios, things like something physically happening to the building. We do full VM backups, I've covered this before, and we take them off site. It gives me a level of comfort that I can restore not in days but in hours from a complete "my building is not here anymore" type of scenario. Hopefully that never occurs, but you always plan for the worst and hope for the best. That's a good plan to go with backups: have backups of your backups, and a plan for what you're going to do if the backups fail. Okay, I probably shouldn't dwell on that. My Xen backup server is the in-house backup server, and I have another server at my house that can be used as well to help run my company. That's also running Xen. So there's Xen backup, and then we have the affectionately named Xenifer. All of this is running XCP-ng. Then we have our FreeNAS machine, and the FreeNAS is the storage server for this one, but not for this one. The reason why is points of failure. This has a 10 gig connection to all my VMs here, because I play with a lot of VMs for my YouTube channel, for ideas I have, and my hobby is playing with virtual machines and setting up scenarios. Sometimes you have to set up scenarios for clients, special projects, and other off-the-wall things I work on. So we build out all kinds of virtualized infrastructure to test it, test theories I have, or demo new software. Not just for the YouTube channel, either: for example, when we had to come up with a custom config for a client shop, no problem, we just built it in the virtual lab. Does this work the way we think it works? Yes, it does. Cool, deploy. Same when we're testing backup software; people always ask me to test things.
I've done videos like the custom backup solution we used instead of the SolarWinds product. The client wanted to use something else, and that introduced me to CloudBerry, which I've talked about before. I really liked their product, and we did all the testing: you build your VMs, you randomly crash them, you see how CloudBerry handles the restores, and I have a video on that. We did it all in a virtual lab because that's really convenient to do. So I have this 10 gig connection here, and there's a series of drives inside the FreeNAS. One set of drives is for my production stuff that I don't really do anything else with. That way none of my lab stuff can cause any potential issues with my production. Plus I do LVM over iSCSI, that's how these are connected to each other, and I don't want to worry about any performance degradation. So all of that is sitting on a set of drives I refer to as the production drives on the FreeNAS. Then I have my FreeNAS lab set of drives; it's another ZFS array. So I have it all broken down into two separate arrays. That's how I keep things separate in terms of performance, and we'll get to that in a second. Then we have the networks, and I have them broken out here. This is the secure network that the production stuff attaches to, and there are multiple physical network cards in these devices to keep them physically separated. I know someone's screaming, "Just put it all on a series of VLANs!" Yeah, I know I could. I have an unmanaged switch, and there's just no need to worry about management: things physically plug in, and there are no worries about interference or congestion. Everything that runs the company is on these purple ones here. Then we have the UniFi switch, which does have all kinds of VLANs and trunking and everything else that happens here. Now, pfSense is the glue that holds this together.
So the internet comes in to the pfSense. It distributes to the purple network which, like I said, is the secure network: everything is statically assigned, there's not even a DHCP server on there. Everything's very restrictive, with only a couple of pinhole rules that allow only the necessities to pass through here to get to anything over here. This is where we host our internal software, and each of the Linux machines that runs inside of there is hardened as well. So if you were inside that network, lateral movement isn't possible, because they have their own firewalls and their own sets of rules that keep you from just logging into them. There are no open ports other than what's necessary. That's how we keep that level of separation, and it's also why they're on an unmanaged switch; they're really, really locked down. And here's something that applies even if you're using a managed switch, by the way, because all the virtual servers are actually running here on the Xenifer box: even if you physically unplug Xenifer from the unmanaged switch, the VMs can all still talk to each other. When you create virtual machines and put everything on the same network, there's always at least a layer of virtual switching going on (there's actually a tool called Open vSwitch underneath) because they all tie together. Let me show you how that works real quick. This may be a little confusing, but this is from when I built a pfSense test lab, from that video. You can build virtual switches, and you can pretend this is just a network cable plugging these two devices together. But if you were to create another device like this, I'm going to pull it in here because I just duplicated it, these devices, sorry, it looks kind of goofy, are on the same network, because the traffic never leaves to go outside of the network.
So everything that we create inside of our virtual lab automatically gets to be on the same network. As you create these other virtual boxes, even if there's nothing outside that represents what's inside, they all get attached. So even if you have a managed switch with routing rules saying two devices can't talk to each other, they still will, because they don't go out to the switch and back; it's all done internally. There are ways you can make it go out to the switch, but this default behavior is the way we have it set up. Before I digress too far into network switching and routing: the UniFi switch is VLAN enabled, and that's how we handle all the other networking. That way we can have VLAN tagging on either one of these and the FreeNAS, and create different virtual LANs. Now, the 10 gigabit link between here is a direct connection, no switch in between. I did a video on how to set up 10 gig networking; I did it with Citrix XenServer, but once again it applies exactly the same to XCP-ng. Like I said, it's a Twinax SFP+ direct cable plugged in. You just statically assign an IP to the FreeNAS for that network interface, you statically assign one over here to the network interface, and now these two devices talk over 10 gigabit. So when you're setting up your iSCSI target, you just use that 10 gigabit IP address. There's no routing needed; there's not even a gateway needed. Matter of fact, leave the gateway out, because if you put a gateway in, the system will think it's something it can route over, and that's not necessary to make this work. So you just put these in and it becomes your storage network, your SAN, getting the data talking back and forth. This provides a really fast link at 10 gigabit. It will work over one gigabit, too: you can do iSCSI over one gigabit and use FreeNAS for your storage, and it's actually quite fast.
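If you'd rather see that storage link as commands than as a diagram, here's roughly what it looks like from the XCP-ng side. The PIF UUID, addresses, and IQN below are made-up placeholders for illustration, not my actual values, so treat this as the shape of the setup rather than a copy-paste config:

```shell
# Sketch of the direct 10 gig storage link described above.
# The UUID, IP addresses, and IQN are placeholders.

# Give the 10 gig interface a static IP with NO gateway,
# so nothing ever tries to route over it:
xe pif-reconfigure-ip uuid=<10g-pif-uuid> mode=static \
  IP=10.99.99.15 netmask=255.255.255.0

# (On the FreeNAS side, the matching interface gets a static
#  address in the same subnet, e.g. 10.99.99.10.)

# Then create the LVM-over-iSCSI storage repository against that address:
xe sr-create name-label="FreeNAS 10G" type=lvmoiscsi \
  device-config:target=10.99.99.10 \
  device-config:targetIQN=iqn.2018-01.lab.example:vmstore \
  device-config:SCSIid=<scsi-id-from-probe>
```

Running `sr-create` without the SCSIid first will usually probe the target and list the available IDs for you.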
Maybe sometime I'll sit down and build it out and do a test scenario of 10 gig versus one gig performance and what you get from it. I will tell you, I believe the speed at 10 gig is faster than the drives can handle, even though they're a ZFS array. I'm getting close to, not completely, but close to SSD performance, in the 400 to 500 megabyte-per-second range of drive speeds. So it's really nice. I'll show you a demo real quick of how fast the drives are. We're going to close this, don't need to save my goof-ups there, and let's actually look at the software running. This is me playing with it; like I said, that's the wrong one, let's go a little further back. These are all the VMs I have in here. This is Xen Orchestra, and this is mostly how I manage it. Now, for those of you wondering, I'll cover it real quick: there is also XCP-ng Center. This is in beta. I haven't had any problems with it at all; I've tested with it, and it seems to work perfectly fine. I've got the two different servers connected to it, the XCP-ng backup box and Xenifer, and all the VMs that you can see here that we have set up. It works fine; I just don't use it that much. I don't have a big use case for it. The only thing, and I've talked to the developers and they're going to be adding it, shows up when you're looking at Xenifer here and I go into network: these are the NICs, and this LAN only exists on Xenifer. You can see from the description I typed in that it's not a VLAN and it's not bound to a NIC. The one thing you can't do right now that I know of, and like I said, I have a request on GitHub for them to fix this and they said they would, is create a host-only network. One of the reasons we use host-only networking is, like in my "what leaks out of a VPN" video, you can create a host-only network on here. When you don't tie it to a network card, there's absolutely no noise on it.
It exists only inside of this machine, and by doing that, it's a great way to create an isolated network where you can say, all right, here's this device, here's this device. For example, when we set up a virtual pfSense for testing, or for full network scanning, because we were doing that with Wireshark in that video, you want to watch exactly what goes across. You don't want to worry about anything else being on the line, so you can just create these. And if you're ever doing any testing with security software, or you want to do any malware testing or anything like that, once again, put it on its own separated network. It can never escape, because it doesn't have a physical network card to leave through; it only exists in there. It's easy to do when you go to add a network: you can do a single-server private network. Like I said, they're going to add that to XCP-ng Center. Now, a cross-server private network is kind of neat. This is where that vSwitch software I mentioned comes in. With a vSwitch controller, you can extend that host-only, not-tied-to-a-network-card network to other servers. You merge all the servers in, and maybe I'll do a demo on that at some point. It's a virtual switch, and you can take all your XCP-ng servers or Citrix servers and merge them so they all share one virtual switch. That's a little more complicated and goes outside the scope of what we're covering here. And here's how you put the VLAN IDs on there. I've had a couple of people tell me VLANs don't work. I don't understand that; right here, they work. They're on VLAN 10. When you add a network, we'll just run through it real quick: external network, new network, there's your VLAN ID. You figure out what physical NIC you want to tie it to and set the VLAN ID. I've done this many times, both here and in the XCP-ng Center software. For all your normal stuff, it works perfectly fine. Then there's that storage network I talked about, and you can see it down here.
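For reference, both of those network types can also be created from the XCP-ng command line with `xe`. The names and UUIDs here are placeholders I made up for the example:

```shell
# A host-internal ("host-only") network: just create a network and
# never attach it to a physical interface. Nothing on it can reach a
# real NIC, which is what makes it useful for malware labs and
# packet-capture work.
xe network-create name-label="isolated-lab" \
  name-description="internal only, no physical NIC attached"

# A VLAN network: create a network, then tie it to a physical
# interface with a VLAN tag.
xe network-create name-label="VLAN10"
xe vlan-create network-uuid=<vlan10-network-uuid> \
  pif-uuid=<physical-pif-uuid> vlan=10
```

After that you attach VM VIFs to the new network the same way as any other.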
These are your storage networks. I called this one storage 2 because I was running some storage across there for testing purposes. Storage 10G, here's that one here. Notice the lack of a gateway. This side is 10.15 and the FreeNAS is 10.10, and that's how they talk to each other over iSCSI for the storage. And here's how you can look at the storage. Like I said, if you've watched my video on Citrix XenServer, this applies completely. Now, a couple of people asked about things like thin provisioning. LVM over iSCSI on Xen does not support thin provisioning, as far as I know. If you're not familiar with thin provisioning: it's when you allocate a server, say, 100 gigs, but the storage doesn't actually reserve the full 100 gigs up front. With LVM over iSCSI the full amount does get allocated, but it sort of doesn't when you use FreeNAS and ZFS as your back end. First, when I'm designing this: if you have 10 terabytes of storage, put five or six terabytes, well less than the maximum, on there. That gives ZFS plenty of headroom to automatically compress and make everything work right. Maybe, if they have time, I'll get one of the FreeNAS engineers on here to explain it. This is also how you keep away from fragmentation, because as long as FreeNAS has a lot of room when you design it, you have all this slack in there. Now, why does that matter for thin provisioning? Really simple. I think I may have covered this in a video; if not, maybe I'll do another one, because I've talked to the FreeNAS engineers about this a couple of times. When I allocate, say, 100 gigs for a virtual machine over LVM iSCSI, it doesn't take up 100 gigs on the FreeNAS, because ZFS sees there's a lot of blank space, and ZFS compression says, I'm just going to use what's needed.
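If you want to poke at that behavior on the FreeNAS side yourself, the ZFS commands below show the idea. The pool and dataset names are made up for the example:

```shell
# On the FreeNAS box: what "ZFS only uses what's needed" looks like.
# Pool/dataset names are placeholders.

# LZ4 compression is on by default in FreeNAS; confirm it and see the ratio:
zfs get compression,compressratio tank/vmstore

# A zvol created sparse (-s) only consumes pool space as blocks are
# actually written, which is exactly where the over-provisioning risk
# comes from if you hand out more than the pool can back:
zfs create -s -V 100G tank/vmstore/vm-disk0

# Compare the logical size against what's actually referenced on disk:
zfs list -o name,volsize,used,refer tank/vmstore/vm-disk0
```

The gap between `volsize` and `refer` is the space ZFS is saving you, and also the space you'd owe the pool if every guest filled its disk.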
So you can potentially over-provision that way, because FreeNAS is saying, hey, these are all compressed. The downside is that if you over-provision and those VMs ever expand, you run into a really big problem. So I highly recommend never over-provisioning, but ZFS will do some compression and be more efficient with it, and that also cuts down on fragmentation. I'll get someone more intelligent than me on this to talk about it at some point. Also, look up Michael W. Lucas; he wrote two books on ZFS if you want a better understanding of it. I've met him and hung out with him several times; he happens to live in the Detroit area, and the guy's a genius on this stuff. He has a lot of good books, so if you want some learning, look up Michael Lucas and all of his Linux and ZFS books. All right, enough about this. XCP-ng Center works perfectly fine; I've used the beta and, like I said, haven't had any problems with it. I think it will still connect to Xen servers from Citrix as well, but the Citrix XenCenter won't connect to the newer version of XCP-ng; it gives an error, and that's what XCP-ng Center is for. I'll leave you a link where you can download it. I'm going to shut this down. Like I said, I don't use it very often, but I wanted to show you that it exists for those of you who are used to managing things that way. The only time I've used it is when I've had to create new locked-down networks like I've shown. So here's Xen Orchestra, where you have the full feature set. I've got the Community Edition pulled up. Now, Community Edition means, just let me scroll down a little bit, like it says here: no support. That's because this is rolled myself; I maintain it. This is for my virtual lab playing and stuff like that, and some of my production stuff runs in here too. They also have a free version, and there are a couple of differences in the free version.
For one, the free version lacks things like backup, which isn't enabled in there, and some of the statistics pages you can't view. So if we go here, to hosts, then stats: this is not enabled when you have the free version. The free version is obviously designed for home use. The open-source one is the full version, but you have no support from them. If you want paid support, you go to Xen Orchestra. And the Xen Orchestra people are also the people behind XCP-ng; there's a crossover of developers between the two, which is why they're so good at doing all this. It's a great tool for managing VMs, so if you have a lot of servers, it's worth buying their paid version, and of course it comes with full support. You can do all kinds of fun stuff with it. They add even more availability features in the paid version than you have here; I forget what else they roll in. They have some virtual SAN stuff, XOSAN I think it's called. Yeah, not available here. They have a lot of advanced features if you manage a lot of these at scale, so take a look at that product. For the purposes of this, I think I've done a video on where you can get this. There are a couple of auto-install scripts that will build it for you. It has broken before, FYI; when you're building this, sometimes you've got to goof around with it. So yeah, it's not supported; just jump in there and start working on it. So let's talk about the server. I'm going to filter for production. These are the servers that generally run our company. We're still running FreePBX in here, and OSSEC to babysit all my stuff. This is the Debian 9 XO box, the free version of XO, because when updates break the main one, you need another way to manage things sometimes. You can also manage a lot of this from the command line; that's something I do a lot too. You can import and export VMs and move things around from the command line.
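As a taste of that, here are a few of the `xe` commands I'm talking about. The VM names and paths are placeholders for the example:

```shell
# Everyday xe CLI operations; names, UUIDs, and paths are placeholders.

xe vm-list                          # list VMs and their power state
xe vm-start vm=web01                # start / stop by name-label
xe vm-shutdown vm=web01
xe vm-snapshot vm=web01 new-name-label=web01-pre-upgrade

# Mass operations are just shell loops over vm-list output,
# e.g. start everything that's currently halted:
for u in $(xe vm-list power-state=halted is-control-domain=false \
             --minimal | tr ',' ' '); do
  xe vm-start uuid=$u
done

# Export a VM to an .xva file, the building block of simple backup scripts:
xe vm-export vm=web01 filename=/mnt/backups/web01.xva
```

`xe vm-import filename=...` does the reverse, which is how you move exported VMs between hosts.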
Maybe one day I'll do a video on all the command-line interface stuff you can do; it's pretty slick. If you manage things at scale, you frequently go to the command line to do a mass startup, or to start, stop, and move things around. I've actually covered this in how the backups are done. I've played with the backups in here, and they seem to work pretty well too, but I like the backup script, because, well, I like scripts that automate things. Look for my video on backing up Xen servers; I have links to the script and how to set it up. It's not too extensive, but I walk through the details. Here are the machines that run the wiki server and UniFi Video, with the ability to go right here and see what their usage is. FreePBX. And this one we're sunsetting; it's our old POS system, so I didn't bother loading the tools on it. With the tools loaded, you get stats right here. FreePBX didn't like the tools loaded; well, it probably did, but it didn't recognize the Sangoma spin of CentOS, and I just didn't force it; I don't care that much. Without the tools loaded, though, you don't get the memory usage stats, and there's not as much detail in there. When we look at any one of these, you can see the disk, the network, the memory usage, over the last two hours or the last week, things like that. And once again, this is because it's the Community Edition; the stats are something they took out of the free one. So you can look at any of them. Now, if we clear the filter, you can see all the different VMs I have running, and we have a lot of stuff. Right now we're doing some testing with this Windows Server 2016, because I have another video coming up, and I'll tell you, it's related to FreeNAS and Server 2016. Once again, some testing scenarios and stuff we're playing with on there; that's upcoming. But it does work, by the way; we figured that much out. It's tying them together.
A lot of people have asked for a walkthrough on that, and it's coming: how to make this work in your network. Before someone even asks about running FreeNAS inside here: not a great idea. FreeNAS is best set up directly on hardware with a bunch of disks. I don't have a big scenario where I would want to run it in here, but you can do nested virtualization if you want to try; I don't know how well it works. I don't recommend running FreeNAS in here for real use, but we do run it in here because it's easy to spin up as a test server, build out a scenario, and then delete it when we're done or reset everything, rather than building a physical FreeNAS box. Way easier to build them here. Same with pfSense: I have my pfSense lab right here, and it works. There are a couple of things; I don't believe the traffic shaping works as well. There are always little quirks whenever you run virtualized instances of firewalls. You can make it work, it's just a little trickier. Me, I always run it on real hardware. And if you search for pfSense in any type of hypervisor, not just Xen, there's always a tricky part to making it work. I believe the people over at pfSense are working on a new in-the-cloud offering, as they call it, designed to run in things like AWS and Azure, so there's other stuff coming that's built more for that. Like I said, I run everything on physical hardware when it comes to my networking equipment. For all the machines here, being able to create snapshots, that's all fine and dandy; that works great. Let's actually show it here. Let me filter for some of the lab stuff. Here's my Debian 9 base on Xenifer, and here's the XCP-ng backup box. You can go here, and I can get to the console, get to the network. Oh, and this works, by the way: the console, if we go here, works fine for Windows and things like that, if you don't want to SSH in.
We actually use our ScreenConnect to connect to the Windows boxes and just SSH for the Debian ones. But here's my base VM, and here are the networks that you've seen. Here's that LAN off Xenifer, the .3 network; that's the ending of the addresses. You could, though I would not recommend it, connect it to my storage network. And here are a couple of those VLANs; I have VLAN 10 and VLAN 69. So even while it's running, if we want to swap it to another network, we drop it on VLAN 69. It takes a second and then these will refresh right here to tell me what the IP address is, or we can go to the console. And it's changed now to 172. It changes the IP address right away; it just takes a second or so before it refreshes in here. And you have to have the Xen tools loaded to do this. But that's not really why we're here. Actually, let's put it back on the .3 network and go back over to the console. And now it's changed; I know it's kind of small to read, but it's right back on the .3 network. Now, this is on the lab drives, not on the production drives. So let's go over here. All right, we're SSHed into it, so there's that address, 192.168.3.190, and I was actually just running some tests on it right here. So we'll do three runs. This is the Phoronix Test Suite, free to download. All I'm doing, as a demo here, and I know this is not an extensive benchmark, there are a ton of factors that go into it, is looking in general at the speed the machine can run at, what it can do performance-wise. Like I said, these are the lab drives, and we're getting about, was it 480 on there? Granted, these are smaller, general tests. And I've always liked the saying: there are lies, damned lies, statistics, and then there are benchmarks.
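If you want to reproduce the kind of disk test I'm running, it's roughly this. The `pts/fio` profile is one reasonable choice for disk work, not necessarily the exact profile from my run:

```shell
# Rough recipe for the disk benchmark shown in the demo.
# On Debian/Ubuntu guests; the test profile is an assumption.
sudo apt install phoronix-test-suite
phoronix-test-suite benchmark pts/fio
```

It will prompt you interactively for test options and how many runs to average, which is where the "we'll do three" comes from.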
Synthetic benchmarks are hard to reference directly. You have to be really thorough, not just, oh, I ran this file test and it moved a file across this fast, so that's how fast it is, right? There are so many more factors. Are you running databases? Are you running VMs on here? What are you running on there? Your load case will change things, and there's planning involved. That's where the engineers come in: when you buy a really enterprise-level thing like TrueNAS, you work with the engineering team to understand the workloads going into it and design the system around them. This is just a basic ZFS RAID-Z2 setup for my lab drives, and it works perfectly fine for storing data. So there's the test running. We're seeing 527 on the writes, and the reads are at 403. Weird that the writes are a little faster, but you can see pretty good performance on there. Now, because I can move these back and forth to different hosts, that's actually one of the cool features: I can migrate this VM, running, over to another machine. But there's one little hiccup here; let's go over to the hosts. Here's this system, the XCP-ng backup. The problem is this backup box happens to be AMD, and this one over here is Intel. That being said, this is actually running on a PowerEdge R710, and it works perfectly fine. I know it's an older box; someone's going, "But put it in the cloud, buy a brand new one," blah, blah, blah, whatever. With this box being Intel, you can't migrate live if the architecture types are too different. You can maybe have a different processor, and I don't know how far out there you can get, but if the feature sets of the processors are the same, then you can move running VMs back and forth.
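A quick way to compare what the hosts and the guests actually see, processor-wise. Host and guest names here are placeholders:

```shell
# On the XCP-ng host: what the hypervisor knows about its CPU,
# including vendor and feature flags that gate live migration.
xe host-cpu-info

# Or compare across hosts in one shot:
xe host-list params=name-label,cpu_info

# Inside a guest: exactly what the VM sees, which is why a running
# VM can't just be handed a different processor mid-flight.
grep "model name" /proc/cpuinfo
```

If the vendor or feature masks differ between two hosts, that's your sign the VM has to be shut down before moving it.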
You can't move running VMs back and forth if there's an architecture type difference, because the VM itself gets to see into the processor. So if we, oops, we'll just do this cat: the VM can see that this is an Intel Xeon X5670 at 2.93 gigahertz. It sees the processor, and that information can't change well in a running VM. I think there's some work, or probably some tools, that do allow certain things to happen, but for the scope of a default install here: no, you can't move running machines between hosts with completely different processor architecture types, so I can't live-migrate this one. But what I can do is take a machine that is running on Intel, shut it down, move it, boot it up, and it'll just rediscover the processor. So let's go here and show you that. We're going to stop this VM. This is my handy-dandy Debian 9 base; like the name says, I use it as the base for building or upgrading any of the VMs. We're going to go here to fast clone and click it, and it actually adds the word "clone" to the name. This is all in real time, I didn't have to cut this, because it clones just like that. Cloned. Now I fire up the clone, and you'll see how fast this is. I'm going to call it our YouTube demo. Whoops, hold on, I clicked on the screen and it got locked in the bottom part, so it wouldn't let me type. Debian base YouTube demo. Oh, by the way, it already booted while I was goofing around. It boots up in a couple of seconds. Granted, the clone is only that fast because the VM itself has just a 16 gig drive and one gig of RAM, so it boots up immediately. It's fast, and everything works. And because it's a clone, actually, let's go ahead. Connection closed. It's a clone, and it's probably going to get the same IP address. We'll go here in a session, log in, and look: yep, got the same IP address over here.
And if you look at the last commands run: look, the same commands, because it's an absolute clone of it. But that's the nice thing about it. I can take something and do a clone. Then, if I need to, look over here: I can create a snapshot, and it creates the snapshot that quick. So now I have a snapshot of it, and if I need to roll back, I can revert this, and it'll restart the VM back at the snapshot. I'm going to go ahead and delete it because I don't need it right now. So this makes it really easy to handle moving VMs around. Now, by default this always creates them on the FreeNAS lab drives. So if we go back over here, we're going to stop the VM, because it'll move way faster. It will move live VMs, but they move faster when they're not live, and I'm not patient right now. Let's migrate the storage to another pool. Like I said, when you go to the storage, and I'm going to open up a new window, so I'm going to close that: here's our production one, where all of the production is sitting. It shows you how much is used and how much is provisioned. Like I said, this is not thin provisioned, so it says 256. Out of the ten terabytes available, I've only assigned six, so I can never over-provision. But look at the actual usage on here: because of the compression ratios, you can see it's compressing at 1.49. So you're getting this compression efficiency to keep down fragmentation, keep up speed, and it works really, really well. Even with that much allocated, like I said, I'm going to get an engineer who can explain this better. I've talked to them about it, and this is how they have it set up; I believe I've got it all set up right. But I think it would be a fun conversation to talk about storage planning with one of their engineers, and I'm going to see if they'll reach out to us so we can get one of them on here.
We've worked with them on a few things, and I think that'd be kind of fun, because getting into how storage works is really cool, but it's also really complicated, and I want to make sure it's right. Before we get too far off topic, let's migrate this over to my production drives, which are newer, faster hard drives. We go here, to production, and migrate all VDIs. If you had a bunch of virtual disks attached to this, you could move them all; we're just going to migrate the one. We hit OK, and this will move over really fast. If you look over here at tasks, it actually sits at 0% for a second, probably a preparation step, and then you'll watch it kind of scoot right across. But the machine is not under duress while we're doing this; everything else works perfectly fine. We can play with other stuff, we can still clone, all the other features work, and it'll stack the tasks over here. See, now it's at 9, 10 percent; then you're going to watch a couple of hops where it jumps. Drink some coffee while this goes. What's weird is that if you move a bigger VM, it doesn't scale exactly: when I move some of the larger VMs, they still seem to take about the same amount of time, maybe a little longer. We added more hard drives recently and swapped them over, because a ZFS array can't just be expanded by adding drives to it, if you're not familiar with that. So the array was rebuilt and we put these new drives in. By the way, if you want to know what drives I'm using: all the drives in our stack of FreeNAS systems are these HGST Deskstar NAS drives. I've been thrilled with them; knock on wood, no failures. We've used these a lot. They came recommended, so to speak, if you watch my video on Backblaze and their stats: these are the drives that have had the least problems over time, and they have a ton of them in there.
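Back on the migration for a second: that same "migrate all VDIs" move can be done per disk from the CLI, which is handy if you want to script it. The UUIDs are placeholders you'd pull from `xe vdi-list` and `xe sr-list`:

```shell
# Move a single virtual disk to another storage repository
# while the VM stays running (storage live migration):
xe vdi-pool-migrate uuid=<vdi-uuid> sr-uuid=<production-sr-uuid>

# For a halted VM you can instead copy the disk to the target SR
# and reattach the copy:
xe vdi-copy uuid=<vdi-uuid> sr-uuid=<production-sr-uuid>
```

Either way the task shows up in the same task list you see in Xen Orchestra.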
The Backblaze stats: I have a video that breaks down what those stats mean and why they're important. It's hard to get aggregate drive testing. I mean, I could probably tell you, from what comes across the counter at retail, that a particular drive is bad because we see a high failure rate, but what you don't have is the other side of that stat, and that's why you can't say it properly: you don't have the stat for how many were in the market. You can say, I see fewer of these failing, but if very few were sold, then statistically, even if all of them failed, you wouldn't see that many. So you've got to make sure you understand how the stats are done. That's what they break down over at Backblaze: we have 8,000 of these hard drives running for six months, and this is how many failed. Now you have a score and a statistic, because you have both factors; you know how many there were. Otherwise someone could have one drive that never fails and claim a 100% non-failure rate because they had one installed and it never failed. That's not a good statistic. They break all of that down, and we have, knock on wood, had the same experience: we haven't had any of these go bad. The only one of these Deskstars I've ever returned was one we pulled right out of the box, and this was a while ago; it was a strange thing, and they replaced it for us. We did test it, and it was bad. The whole corner of the hard drive was cracked, brand new out of the box, and we thought they were going to blame us for dropping it because we had cut the seal. Someone dropped the hard drive and then put it in a box.
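That "8,000 drives for six months" framing boils down to an annualized failure rate: failures divided by drive-years of exposure. A quick sketch of the math with made-up example numbers; none of these figures come from Backblaze's actual tables, they just illustrate why you need both sides of the statistic:

```shell
# AFR = failures / (drive-days / 365), expressed as a percentage.
drives=8000    # hypothetical fleet size
days=183       # roughly six months of exposure per drive
failures=60    # hypothetical failure count
awk -v n="$drives" -v d="$days" -v f="$failures" \
  'BEGIN { afr = f / (n * d / 365) * 100; printf "AFR: %.2f%%\n", afr }'
# → AFR: 1.50%
```

With one drive and zero failures the same formula gives 0%, which is exactly the meaningless "100% non-failure rate" problem described above: the exposure term is too small to say anything.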
That's the only time that's ever happened in my life, and I tell you, we have gone through stacks of hard drives building RAID arrays and fixing RAID arrays with these drives, and never in my 20-year career have I seen a hard drive come out of the box with the corner smashed like it was dropped on cement, and the drive was bad. It was the only time we've had to return one of these Deskstar NAS drives; they work really well. We have the two-terabyte versions in there, the three-terabyte versions, and the last array we built was just with the four-terabyte ones, because that's big enough for the VMs we store. Matter of fact, we're not even using all of it, as you've seen when you look at the numbers here, and this fully runs everything, plus we have a lab pool that's not full either. So people ask, why didn't you go bigger? For what? I don't store that much data; most of our VMs are actually quite small. Anyway, enough babbling about that: this has completed. We're going to go over here and take a look at, whoops, let's go to YouTube. There's our YouTube demo VM. We're going to start it up, and now it's on the production disks, which are a little bit faster, which I'm curious about, because I think we saw around 450 on the other storage; I'll check back in a video. I don't know if I can pull that up. By the way, this software is really fast. I love it. That's part of the other reason I like using the web interface: it just goes. Okay, the stats go away; I thought when you powered off the machine it would put them back. But this is booted up, probably with the same IP address. Now we'll just check. Yep, look at that. Up arrow a couple of times and we'll run that same test. Nope, I don't feel like saving results. Maybe one day I'll sit down and just play with benchmarks all day. The problem is I get aggravated by how long they take.
I mean, you can script it, but then I've got to organize all the data and the results, and I'm less interested in benchmarks. I like production machines; I like working on and thinking about stuff. But benchmarks are something fun to do to show that it works fast. So actually, here's what's fun: the production drives are a lot faster. While it was booting, we hit 817. So here's the first test; let it run for a second. One of the things you can do when you're looking at the storage pools is pull up the production stats and look at the storage itself: the IO wait time, the IOPS, what it's hitting. And this is where you're going to get two different sets of stats; let me explain why. This view is looking at the real load on the storage, versus the other one, which looks at the storage as the VM sees it. Because there's a lot of caching and different layers going on in between, the numbers differ. Now, let's go another step further and pull up Netdata. This is Netdata running on my FreeNAS. Here's where we just started that test, and there will be some spikes in here from us moving a drive, but nothing that's really loading it up. Realistically, what we see is about 36% CPU usage on my FreeNAS. We'll get the specs on it, since we always want to know what they are. System: it has 16 gigs of RAM; I need to add more, it's on my to-do list. FreeNAS 11.1-U5 with an Intel Core i5 at 3.2 gigahertz. Let me actually get the model number real quick: i5-4570. An older processor, an older system, and it's not under high stress while doing a full benchmark on the production side. The thing is, this system is just not even breaking a sweat here. And you can look at the ZFS file system stats. Here's the read and write caching. Because we have layers of caching, layers of efficiency, it's highly efficient here at caching; the green represents the cache hits.
But it's inefficient when you're doing a bunch of random reads with benchmarks, and that's what you should do: you want to exhaust the cache, because until you start exhausting the cache you don't get any real performance statistics, since not everything is going to be cached. On the other side of that, ZFS overall, and I think it lets me, is it Control? Nope, wrong one. There is a way you can... there we go, we can zoom out. Most of the time this is scrolling out over time. There are actually a lot of cache hits from just the day-to-day running; this spike is where we loaded an update. But in day-to-day running, you get a lot of cache hits. I don't have the whole time frame for how long this covers; okay, this is the last so many hours. This is what allows the system to be really efficient, and that's why I said benchmarks can be a little bit tricky. Let's see if they're done yet. So here's what it sees on this side, the IO wait, and here's what the throughput was as seen from the production drives. Reads at 376, writes at 537. It's really weird that the writes are so high; maybe it's just the way this benchmark runs. Oh, and you can have multiple windows of this open. So we'll go here: stats. Here are the stats for this one. There's the peak, and the same thing: 537 on the writes and 375 on the reads. That's a pretty fast array. Like I said, this is just RAIDZ2, four drives, no SSDs. Someone asked me why I don't have SSDs in all of these; that's why. This performance is adequate. And by the way, I didn't shut anything down; I didn't isolate this. If you really wanted a clean benchmark, I'd need to shut down all my production VMs that are running and doing things on here right now, especially Windows, and run only the one VM while doing the performance test.
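That whole cache-hit picture can be reduced to one number, the ARC hit ratio. On FreeNAS the raw counters live under the `kstat.zfs.misc.arcstats` sysctl tree; the figures below are stand-ins just to show the arithmetic:

```shell
# ARC efficiency = hits / (hits + misses).
# On a real FreeNAS box you'd read the counters with:
#   sysctl -n kstat.zfs.misc.arcstats.hits
#   sysctl -n kstat.zfs.misc.arcstats.misses
hits=900000      # stand-in value
misses=100000    # stand-in value
awk -v h="$hits" -v m="$misses" \
  'BEGIN { printf "ARC hit ratio: %.1f%%\n", h / (h + m) * 100 }'
# → ARC hit ratio: 90.0%
```

A high ratio like this is what day-to-day running looks like; a benchmark doing random reads over a working set bigger than RAM is deliberately driving it down.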
So that being said, I'm not going to do that; it's beyond the scope of this. But I want to give you guys an idea of what I'm running in here and how fast it is to simply clone, migrate, restart, or do something with these VMs. And like I said, it's fast. It's really fast. We'll do the snapshot thing real quick. So let's do something real quick. The benchmark is still running; cancel it. So here are these files, and we'll... new snapshot. Snapshot's created. rm *. Whoops. Yeah: rm -rf *. Hey, why not really break it? rm -rf *. Cannot remove... resource is busy. I have now broken this VM badly. ls can't be found. Come on, can I change directory? No. We have truly broken it. Okay, we broke it; it's completely broken. How fast will it do a restore? Will exit work? Okay, exit works. Can you SSH back in? Nope, connection refused. The services are all broken. Yep, this VM is now, you know, oopsed; rm -rf did its thing. Let's roll it back. So we're going to go over here and show you how fast we can revert a VM to a snapshot. It will, if you want, let you take another snapshot before reverting, so you can fork this; I don't need a copy of that mess. So we're just going to hit OK. We watch it turn yellow just for a second while it spins and does its magic, and it's already restored and booting back up. Four, three, two, one. It boots back up and it's ready; I know once it gets to this screen, it's just starting the services. I was able to get one sip of coffee in here. Oh look, everything's not deleted. So if you're wondering just how well the system performs, it performs that well. And part of the other advantage of having this on ZFS is that I can snapshot the ZFS dataset as well and have another layer of redundancy, in case I ever needed to restore all the VMs at once for some unknown reason. I do keep a snapshot, but only for a couple of days, just on the off chance something so horrible happens that I have to restore from it.
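That extra ZFS-level safety net is just a periodic `zfs snapshot` of the dataset backing the VM storage, plus `zfs rollback` for the day everything goes wrong at once. A sketch that prints the commands rather than running them; `tank/vm-storage` is a placeholder dataset name, not the actual pool layout from the video:

```shell
# Print (don't run) a dated dataset snapshot and its matching rollback.
DATASET=tank/vm-storage
DATE=20180701    # in practice: $(date +%Y%m%d)
echo "zfs snapshot ${DATASET}@nightly-${DATE}"
echo "zfs rollback ${DATASET}@nightly-${DATE}  # destructive: reverts EVERY VM disk on the dataset"
```

Keeping these only a couple of days, as described above, matters: ZFS snapshots pin every deleted block, so old ones quietly eat pool space on a busy VM store.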
It's never happened, but the thought is there. That's kind of it for the video: the tour of how my lab is set up and what we're using. It's a combination lab and production setup, but it's where a lot of the VMs live that we do videos on. Maybe tomorrow I'll get to the FreeNAS with Active Directory and Windows integration video; a lot of people have asked me for it, so I want to do it. FreeNAS has come so far that it integrates really well. That's the video I was talking about that I'll get to later, but this is the lab and how I set it up, and as you notice, I don't spend a lot of time waiting for things to happen. It's instant; everything's very fast. I can open up multiple windows so I can see multiple things, and I can move VMs around and migrate them wherever I want, like the migration I showed. Also, when I'm done with these, like this YouTube demo one we made for one of the videos, we delete it. Let's go over here; I'm going to hit remove. Are you sure you want to delete all VM disks? Real quick, here's something really neat: if you try to delete multiple things, it makes you enter confirmation text, like "delete three VMs". Oh, and it won't let me copy-paste it either. That's great: they purposely built in some safety so you can't do destructive things too casually. And by the way, if you didn't notice, when you're doing this, if I select the production VMs, I can perform a task on all of them: I can stop all of them, start all of them, reboot all of them, migrate, copy, suspend, force-reboot, or snapshot all of them at once. This is why Xen Orchestra is really impressive for managing things at scale: from a web interface I can go start these or stop these, and if you stop or start that many VMs, it has a confirmation. Are you sure you want to stop all of those, or start all of those?
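The same bulk actions can be scripted from the xe CLI: list VM UUIDs by tag, then act on each. This sketch prints the commands for review instead of executing them; "lab" is just the example tag, and the placeholder UUIDs stand in for what `xe vm-list tags:contains=lab --minimal` would return on a real host:

```shell
# Print one xe command per VM; review the list, then run it for real.
TAG=lab
ACTION=vm-shutdown
for uuid in uuid-1 uuid-2; do   # in practice: $(xe vm-list tags:contains=$TAG --minimal | tr ',' ' ')
  echo "xe $ACTION uuid=$uuid"
done
```

Xen Orchestra's confirmation prompt is doing this same fan-out for you, just with a safety check in front of it.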
This makes it really easy: if you want to do a backup, you can select all of them or just some of them. It's very simple to start, stop, and group, and you can use tags and filters. Filter on Debian, whoops, they got Debian spelled right, and find all of these, or type the word lab. And just so I'm clear on how these got the lab tag: I can tag them myself. There's the Tom label; this one has the tag Tom. It's all filters and expressions right there, and if you want to remove one, there we go, it's gone. So like I said, it's a very powerful tool. Play around with it; it's really, really neat and has all kinds of options. I saved lab and production filters in here, but you can save more. A really slick system. I'm going to cover the backup side, Backup and Backup NG, separately at some point; I'm still learning how it all works, so I want to get better at it first. The settings, servers, and all of that are pretty straightforward, and the jobs give a good overview of it, definitely enough to get you started. It's free to download, and I'll leave links to where to get all of this. The details of the setup I've covered before: if you look at my Citrix video, just replace anywhere I say Citrix with XCP-ng and it pretty much works; the install process is the same. And if you're someone who followed my Citrix videos and you're wondering about switching: your results may vary, so be careful, and back up, back up, back up first. We loaded it on top of Citrix on all three servers, installed right over the top, and everything just worked perfectly fine. That's how we got to this, and it works fine. Thanks for watching. If you liked this video, go ahead and click the thumbs up and leave us some feedback below to let us know what you liked and didn't like, because we love hearing the feedback, or if you just want to say thanks, leave a comment. If you want to be notified of new videos as they come out, go ahead and hit subscribe and the bell icon; that lets YouTube know that you're interested in
notifications, and hopefully they send them, as we've learned with YouTube. Anyway, if you want to contact us for consulting services, head over to lawrencesystems.com and you can reach out to us about any projects we can help you with. We work with a lot of small businesses and IT companies, even some large companies, and you can farm work out to us or just hire us as a consultant to help design your network. Also, if you want to help the channel in other ways, we have a Patreon and affiliate links; you'll find them in the description, and you'll also find recommendations for other affiliate links and things you can sign up for on lawrencesystems.com. Once again, thanks for watching, and I'll see you in the next video.