And we are live. As soon as I started the livestream, something always changes: it started raining, the rain started hitting the window, and I'm like, ah, had to run and close the window. That's not what this is about at all. Problem solved. This is me doing a livestream on Xen Orchestra 5.72. Part of the reason I wanted to do a livestream was to actually demo it live so people can't claim, well, I guess they still could claim that I'm doing a prerecorded demo, but I'm not. If you do it live, you can see how it works live, and I think that's more interesting, or at least it is to me. So let me get this shared. We first want to talk about this, and any questions I answer on the livestream are going to be strictly related to Xen Orchestra 5.72. I plan to keep this on topic and try to keep it somewhat brief. The changes they made come down to a couple, and the one I specifically want to demo, though we'll talk about both, is the faster backup merge. This is just a blog post that they have, so it's easy enough to look up; I'll throw a link in the description below for it. The faster backup merge is really cool. I've been testing it all morning to make sure it worked right, and it did, so that was great. I want to caveat it with this part right here, and I like that they took the time to write this up and make sure people read it: enable with caution. Some filesystems aren't able to support that many files. What they did was refactor this, well, it's not completely a refactoring, I should say, because this is something they already supported with their S3 targets, but now they're opening up support for it on file-based backup targets. It breaks the backup, the delta backup specifically, into a ton of little files instead of big files. What that means is that if you have a lot of backups, you end up with thousands and thousands of files.
That's fine, depending on the target filesystem where you're storing all those backups. These are the little nuances, and one of the reasons I love ZFS, and Btrfs supports this too, so I'll give a shout-out to them because they do mention that here: if the backup target is either ZFS or Btrfs you're fine, but ext4 only supports about four billion files (inodes) max. A lot of the consulting we do, and the people using this in the enterprise space, definitely have enough backups and enough delta revisions of their systems that this could be a problem. It's worth noting, before you end up with a problem at that scale, that you should make sure you have the proper storage target. It may not affect people in the homelab, but it's worth mentioning if you have a lot of backups; there are probably going to be some homelab people with quite a few different backups of things. They're great for the experiments and iterations you're doing. Now, there have been some updates to the XO proxy, which is cool, but this next one I think is just super cool, especially for me: the storage maintenance mode. Need to shut down your storage for maintenance? Now you have a button for that. If you're like me and you have demos you'd like to do with different storage pools, or you have a lot of them attached at once for different reasons, and especially for maintenance reasons, being able to move VMs back and forth, or just, you know, if the VMs can be stopped while you do some type of update to your storage server, like reloading TrueNAS CORE to TrueNAS SCALE, you don't want to go through and shut down all the VMs related to that storage by hand, because it can be a little tricky to figure out which ones when you have lots of VMs. You can just press the storage maintenance button, and it finds all the VMs attached to that particular storage and shuts them down for you. Simple as that. Then you update your storage and start them all back up.
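Since the ext4 inode ceiling comes up here, a quick, hedged way to see how much file-count headroom a backup target has is `df -i` (the mount point `/` below is just a stand-in for your actual backup mount):

```shell
# Check inode capacity and usage on the filesystem backing the backup target.
# "/" is a placeholder -- point it at the mount where the backups land.
df -i /
```

On an ext4 target the Inodes column tops out around 2^32 per filesystem, so it's worth watching IUse% as block-based delta backups pile up; on ZFS or Btrfs this is a non-issue.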
They are, and this is not something I'll really cover in depth, but they're really pushing forward on the REST API and token generation, making it easier to build things that way so you can work with the API directly if you're creating automation. They've got documentation on that, and there's an update to the way non-admins can view tasks. One day I'll do a video on the ACLs for this. And they have another blog post it links to, because yes, XO Lite, which will be super exciting for me to talk about for people who think it's hard to get started with Xen Orchestra and XCP-ng because of, you know, having to run a VM on there: XO Lite will be an interface you can run directly with XCP-ng without having to load a VM. It works a little bit differently; it's a cool implementation, and I'll be covering it once it becomes more available. And Xen Orchestra 6 is going to get a big facelift that goes with this as well; they're doing a really updated UI. So that covers some of the things, but now let's get into the demo itself, because I know that's where things are definitely going to be more exciting, right? Let's see. I'll answer a couple of questions real quick, though, and I'll try to answer more of them at the end. Have you ever seen issues with the health check feature? I used to tag-check only one VM and it never kicks off the check. I haven't had any issues; I did a demo on it and it works great. So I haven't had any problems with the health check. Do check it to make sure, though: you have to make sure that VM boots, so go through your own restore process and check that it boots. If we've got time, we'll play with that too, because I have a hard stop in about 20 minutes, so about 20, 25 minutes. Let's go over to the backups, because we already ran them and I want to show what it looks like. Well, maybe we should start with the remotes first to describe the setup of this.
When you're doing the backups, we're going to go over here to Settings, Remotes. Here are the different backup targets that we have. With the backup targets, here's mine; I chose to use a Synology at random instead of a TrueNAS. I just figured why not spread the love around, because I don't always use Synology, but it works perfectly fine with Synology. This is all done with SMB. You can see it's the same IP address; let me zoom in a little more to make it big and make sure everyone can see this. This IP is the same; the only thing different is the folders they're going to. The first one is going to what I'm calling normal backups. If we click edit on the normal backups and go down to the bottom here, we can see this bottom option, "Store backup as multiple data blocks instead of a whole VHD file," is unchecked on the normal backups. Go back up here and we're going to click on this one; this is the one called small blocks. We'll edit that, and with the small blocks, you see it is checked. That's the difference between these two storage targets. Then I also created two backup jobs. So you go to the backups; I'll zoom out a little bit so it's readable. I did a "Delta backup normal demo" and a "Delta backup small block demo." Really simple in the way I did these: they back up the same VM each time, so everything is as same as I can make it. It's going to the same storage target, and it's the same VM being backed up. I used just a really simple script with fio to create some random noise and keep generating deltas. Absolutely as same as I can make everything. And these are the results of that. We're going to go over here and look up the results of the backup. Now, because I'm doing random generation, there's a slight difference in the amount of data: 20.9 gigs versus 19.67. I can't tell if that's a difference in the way it generated or if it's just that with the small block transfers there's less transfer going on.
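The exact fio script isn't shown on stream; as a sketch, a job file along these lines would dirty random blocks inside the test VM so each delta run has fresh changes to pick up (the job name, file path, and sizes here are all assumptions, not the speaker's actual script):

```shell
# Write a fio job that randomly rewrites 4k blocks across a 1 GiB test file;
# running it between backup runs generates churn for the next delta to capture.
cat > churn.fio <<'EOF'
[delta-churn]
rw=randwrite
bs=4k
size=1g
filename=/tmp/churn.dat
ioengine=psync
EOF
echo "wrote churn.fio; run it inside the VM with: fio churn.fio"
```

Random 4k writes scatter changed blocks across the disk, which is roughly the worst case for a delta merge and makes the timing comparison between the two modes meaningful.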
Nonetheless, in doing these backups back and forth, you can see the difference, especially right here, and we'll zoom in a little more on these last two: six minutes and 10 seconds on the normal one, down to two minutes and 45 seconds. So close to half the time for that backup. That's really impressive; I was just like, it works that well? You're doing this and it's a substantial savings. Oh, someone mentioned this, so I can show people really quick: I am using the unsupported version, and I bring that up because I did this completely with the self-compiled version, because I know a lot of people are homelabbers and they want to know, can I use this myself, Tom? Absolutely. Everything I'm doing here is not the licensed version; this is not the paid support version. Our business clients we always encourage to buy the full version and get automated service delivery and the support package and everything else. I think it's great, and we have a lot of business clients that buy it, but I know I have a lot of homelab people that go, I want to try these cool things out. And yes, this is all just built from source. That's why, if you click here, it says no support. So this is all being done with everything that you have access to. That's one of the clarifications I wanted to add; I should have said it right at the beginning. But nonetheless, go back over here, and here are those backups that I did. You can see the two backup jobs attached to this. It's really that simple for doing the backups. I'm just impressed with how much faster that small block demo is. Almost double the speed; I'll take that as a win. Now, as I said, you do have to think about the storage targets, but that's pretty easy. Something else of note, and let's go ahead and look at it inside the Synology because I think that might be interesting as well: let's log into the Synology.
Do I have it open anywhere else? Nope, all right. Go to File Station, and this is what that looks like in there. There's the normal one, VM backups. I think it's this one. Nope, it's the other one; figured I clicked on the wrong one. There we go. There are the VDIs, and there are your two huge files right there: 24 gigs worth of data and 20 gigs worth of data. If we go over to the small block one in the Synology: VDIs, data, blocks, and they're all broken down into folders, and each one of these folders has all these little files in there, just incrementally numbered. What this does when you're doing deltas: because you're taking differentials and all the blocks are in pieces, instead of having to truncate a file, they can just throw out the blocks they need and put them where they need to be. So as a block kind of moves, because there are delta changes they're tracking, this makes it a lot easier to do that, which is what gives you that speed increase. The trade-off is having a lot more tiny little files, but for Synology that's not a problem at all, and I already know for TrueNAS this isn't a problem at all. So using these as a standard SMB backup target: not a problem. My only other thought, which maybe I'll test in the future, is that I used SMB for this. If you were to use NFS, it might go faster, but I don't know, because it kind of comes down to some nuances with SMB. SMB can choke a little on tiny file writes; when there are too many small files, you can see SMB struggle a bit more than you would with something like, for example, iSCSI. But either way, it is really cool that you can do this and break it up into small blocks; it's just really cool the way that works. Now, go back over here to Xen Orchestra, and we'll go over to the fact that I have, where is it at? Probably this one right here. So this is running on TrueNAS LL Pool J.
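To get a feel for the file-count difference, you can mock up a nested layout like the one shown in File Station and count it the way you'd audit a real target (the directory names here are illustrative, not XO's exact on-disk schema):

```shell
# Build a toy "blocks" tree (3 folders x 5 numbered files) and count the files,
# the same way you'd size up a real small-block target with find | wc -l.
for d in 0 1 2; do
  mkdir -p "demo-blocks/data/blocks/$d"
  for i in 1 2 3 4 5; do : > "demo-blocks/data/blocks/$d/$i.blk"; done
done
find demo-blocks -type f | wc -l   # 15 files for this toy tree
```

On a real target with many delta chains the same `find | wc -l` is how you'd see the "thousands and thousands of files" the blog post warns about, which is exactly what runs into ext4's inode limit at scale.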
So there's only one VM running on here, just this one right here. And then we want to do maintenance on my TrueNAS LL Pool J; this is the same system I was using for the previous demos. So we can enable maintenance mode, and it finds the VMs on there and says: in order to put this SR in maintenance mode, the following VMs will be shut down, are you sure you want to continue? Hit OK, and it will shut down that VM and then put this in maintenance mode, so we can reboot and update our TrueNAS instance and then take it out of maintenance mode and fire up all the VMs again. I mean, I could just migrate them as well; that's another option. But I like this for, you know, the simplicity of it. Just throw it in maintenance mode, schedule your maintenance, and the disks are still there; the VMs are just off, they're shut down. And when we want to take it out of maintenance mode, just go over here and disable maintenance mode. It reconnects it to the pool and all the resources are available again. And that's it, it's that simple. It doesn't auto, I don't think it does at least, does it? Oh, it did, it auto-started the VMs too. So it puts them back in the running state; I thought it did, I wanted to make sure. Now I can confirm it definitely does that. This is the fun thing about live demos: we get to watch it being done live. So, let's see here, does it support OAuth 2? Let's look. If we go into Settings and we look at Plugins, we have auth-google, auth-saml, auth-ldap, and I don't see OAuth 2 in there. So I don't see that one as an option. Google, GitHub, LDAP, and SAML are supported for auth; hope that answers that question. What other questions do we have about this? Hey, hello, started using pfSense and XCP-ng because of me, thank you very much. You talked about XOSAN in past videos; it does not seem to be working in XOA CE and paid at this time, anybody experiencing the same?
No, I haven't really spent any time with XOSAN; I don't have anybody using it right now. Well, I might, I can't say, we haven't consulted on it, I'll say that. We sometimes will consult on a narrow scope of a project, but we have not done anything with XOSAN. I'm waiting for v2 to come out before I dig into it. The way storage works is so simple in here that I don't always need the extras, you know; everyone wants the replicated features, which come with some challenges when you start trying to replicate storage everywhere like that. But you can also build your own SAN and tie this in with Ceph, because Ceph is one of the supported backends on this. So I guess it kind of depends on the goals you have there. Just joined: is there an inherent downside to doing small blocks, being locked to XOA unless converting it? Well, your backups, I mean, technically, and this is a fair point: technically you could extract the backups from Xen Orchestra when they're in VHD format, because you could just grab the VHDs and convert them. That's true. With small blocks you wouldn't be able to easily rebuild them by hand, but seeing as you back things up with Xen Orchestra, you would be restoring them with Xen Orchestra. So I don't think it's a big deal, and I wouldn't really call it lock-in, because if you're using the fully open source version of Xen Orchestra like I am, you're not locked into anything. You can just go ahead and back it up and restore it. Matter of fact, we'll go ahead and stop and delete this VM. They both seem to restore about the same. So let's go ahead and remove this, and then we'll do the small block restore. We go back over here to Backups, Restore, backup list, demo, and let's restore this somewhere. So: lab Synology normal, lab Synology small block. I don't know if it restores any faster, so I'm curious. We'll copy it to the local storage, lab pool local.
I'm trying to think of the fastest place to put this. We'll stick it on Trinity, so lab pool, Trinity. So this is restore one to the lab demo off of the small block storage. Hit OK. Actually, it looks like I already did one here. Successful. How long did it take? Eight minutes and 51 seconds. So I probably should do another one, but I've got to wait until that one's done now. We've got to see if it finishes before the end of Tom's livestream; I think it will. As soon as that's done, yeah, we'll try that. But I don't think it's really much of a lock-in, per se. I see a lot of people mentioning, yeah, I'm assuming you have run into the experience where you have tons of small block writes that cause issues. So: with Windows VMs, one gig links and 10 gig links are reported only as 100 meg speed? Results vary using PV drivers from Windows Update. I've never really messed much with that in Windows. But let me look at some of the Windows VMs that are on here. What does it report? Not that one. Where's Kyle's system? There we go, this is the one that's Kyle's. Let's start this; where does the disk live? Okay, it'll start here. Let's see what it reports. I think he's got all the drivers loaded, so we'll look at the reporting in there. Let that one boot up. Your clients asking about HCI? Not really. You know who asks about hyperconverged? It's everyone's favorite buzzword. Homelab people ask me about it on Twitter and in comments and things like that, but I don't get as many business people asking about it. Let's see, what does the networking report? What does it show? Excuse me, because I'm Windows dumb; where do you get to the little status thing? Is it here? There you go. So that's, yep, and it's reporting that. So that's 1,000 megs. Is that right? I know that's what it's reporting, but I don't know if I have iperf installed on this to test it inside of Windows.
I don't think we need it on, so I'll go ahead and shut it back off. Yeah, it doesn't really matter what it reports for link speed. Matter of fact, it's kind of an artificial reporting there; I don't think it really matters what the guest thinks the link speed is, that's not necessarily what it actually is. So, you're grateful I make the XCP-ng and XOA videos? Awesome. Yeah, that's a lot of the fun I have doing it; it's my hypervisor of choice. And it's weird, because, and someone was posting in my forums about this and hopefully someone answers their question, but they were having a bunch of trouble with Proxmox and couldn't get NFS going. And I know that Proxmox has good support for NFS, but their complaint is there are not enough videos on it, and people bug me to make the videos. I'm like, I use XCP-ng. I don't plan to take the time to learn and troubleshoot a whole other hypervisor, especially because we just don't use it. You know, I don't have any problems with the other ones; use whatever makes you happy. But if you can't find support for things, then that kind of becomes a challenge. The other thing, as I've said before, is we have a lot of businesses using this at scale, and that's a big factor when it comes to, you know, your XOA system: all the different things you can do with large VMs and a large quantity of VMs in there. Let's see if our restore job has finished doing its thing. Still going. So I realized I already ran one here, so we'd have to run another one, but I don't know if I have time. Nine minutes to restore from the delta small block demo; how fast will it restore from the other one? When did we start it? We started at 5:32 and it's 5:37, so it's only been five minutes. I'll assume it's going to take nine minutes again. So I'm not sure which one's faster to restore. Part of it, too, is I don't know how fast the Synology is either. So, this is something I never use.
I know a lot of people ask about passthrough. I never really use it, so it hasn't been a big issue. I believe, you know, I think Wendell covered getting passthrough working for his forbidden router in his video on Level1Techs, but I'd have to look; I think they have a write-up in their forums with the details of it. I mean, I tested it once a while ago just to see if it worked, and it did, but I didn't really do a video on it. So the, what do you call it, the documentation is kind of basic on how to get passthrough working. My problem, and the reason I don't have a good passthrough video, is I just don't use it, so I'd have to take the time to learn something I just don't use. And it's not something we use in the business world much at all. I mean, it isn't that passthrough is used zero in the business world, but it's not used a lot, other than certain exceptions for certain cards or whatever. Oh, just to check: there's no USB passthrough in the Xen Orchestra UI, right? That's CLI as well. Yeah, same thing, same answer here. Have you done USB passthrough with Xen Orchestra, and any good resources for this? It's like anything when you want to do any of the passthrough; I believe they have a document on it. So we'll pull it up real quick together and talk about this. Here's how to do it. There's no UI option that I'm aware of. It's basically going through and finding your devices; you tell dom0 it can't have control of these devices, so you set them to not be used by dom0, and then you can assign them through, whether it's, you know, PCI passthrough, NVIDIA vGPU, MxGPU, or USB passthrough. This one's a little bit different: there's no need to alter files manually as the older guides suggest; it's fairly easy using the xe CLI. First use pusb-list to see the physical USB devices; there's a tool, xe pusb-list, that will let you do this. So you can choose things to pass through. I don't really use it.
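From the XCP-ng docs being paraphrased here, the xe flow for USB passthrough looks roughly like the sequence below; it only runs in dom0 on an actual host, and every UUID is a placeholder you'd copy from the list output, so this sketch just jots the sequence down as a cheat sheet:

```shell
# Rough cheat sheet of USB passthrough via the xe CLI (run the commands in
# dom0 on an XCP-ng host; the <...> UUIDs are placeholders from list output).
cat > usb-passthrough-cheatsheet.txt <<'EOF'
xe pusb-list
xe pusb-param-set uuid=<pusb-uuid> passthrough-enabled=true
xe vusb-create usb-group-uuid=<group-uuid> vm-uuid=<vm-uuid>
EOF
cat usb-passthrough-cheatsheet.txt
```

The first command identifies the physical USB device, the second releases it from dom0's control, and the third attaches it to a VM; check the current XCP-ng documentation for the exact parameter names on your version.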
This is something I know at least some people are using; some tools need those little USB dongles to work. I don't really use it myself, and most of the consulting we've done hasn't needed it. But yeah, they do have good documentation on that. Has anyone run XO on a workstation to manage a hypervisor? Yes, I've known a few people who spun it up locally in VirtualBox on their machine to manage their XCP-ng machines. Because you can turn off Xen Orchestra and the VMs will all keep running; you just lose the ability to manage them. But then again, if you were, you know, SSHing into your host itself, you can do some of it from the command line there to manage it. So yeah, it's not too hard to do. Do you believe adoption will dramatically increase after the Broadcom purchase of VMware? Oh, we've already been seeing it. The first bump was when VMware raised all their prices; that was the first step, what, a year, year and a half ago now. They did that big price bump, and that right away kicked off a round of consulting. But yeah, definitely the Broadcom stuff has not made anyone who's using VMware any happier. So: anytime you set up passthrough, you tie the VM to the host, so you can't migrate unless you have multiple devices? Right, and frequently that only makes sense on a dedicated server. This is why we don't see it in business use cases, because, even for us, we have four hosts able to take a VM and we start them up on different hosts. If you pass something through, especially, you know, like a GPU, and you don't have exactly the same GPU on the other hosts in the pool, then you won't have the ability to start the VM over there. Now, there are commercial GPUs that support passthrough, so that is a thing, because if you go here to the host, under Advanced, there is going to be, where does it show, GPUs.
There are certain ones that are officially supported, and there are ways to get passthrough to work on there, but those are expensive; those are not your average cards. Yeah, you cannot migrate with a passthrough device attached. Opinion on ZFS for VM disks: mirrors versus RAID-Z with L2ARC, question based mostly on spinning rust? With L2ARC, if you have spinning rust, you are just not going to get a ton of performance unless you have the L2ARC on something really fast. But I've not had a problem; even ZFS on spinning rust is still reasonably fast. As a matter of fact, the storage I'm referring to here, we still get some pretty good performance out of this system, and it's all spinning rust; it's just a lot of it. That's how there's so much storage available in here: we've got 239 terabytes still free, only 1.3 terabytes used right now. There's 30 on the lab, and how many on the other ones? Let's go back. This one has another 27 disks on it. So yeah, 27 disks on this one, but they're spinning rust. If you have a bunch of them, you still get performance. There are no special devices on here; this is all running straight. So you can get decent performance if you have reasonably fast drives on there. Do you connect your client infrastructure to the XO at your office? Not for us, not for what we do. We have clients that we consult with, but we don't directly manage them. We have a lot of co-managed clients; they have their XOA maybe, and they may have multiple locations, so their XO instance manages their locations. They're doing it like that, but we're not managing their stuff through ours. Have you ever migrated a whole infrastructure, all VMs, and how did it go? Oh yeah. Yeah, we've done a lot of them. Matter of fact, we're actually looking forward to a server upgrade we have to do for a client.
As their new server comes in, we just load XCP-ng on it, join it to their pool, and then we can migrate all the stuff off their old servers. Like that, it makes it really simple to do. Question: is ZFS mirror or RAID-Z more performant? RAID-Z is going to be more performant, so go with something like RAID-Z with three or four drives. A mirror is just a mirror; you're not really getting any performance boost out of a mirror. For co-managed IT, have you read the book by Bob Coppedge? Nope, I've never read Bob Coppedge's book. So, any more XCP-ng questions? I have five more minutes before I have to leave, so for the next five minutes, what XCP-ng questions do we have? Hyper-V to XCP-ng migration? Yeah, a migration from anything: we'll just use backup utilities and restore it. Here, let me see if I can find one of them that might be in here. A lot of them we're doing for disaster recovery; we were even testing some stuff. Lab test restore: even for us, sometimes when we're doing our DR testing, we can take a client environment and just restore it in here. And, by the way, these are bare metal servers that we just restored into here, and they're not even running. Whatever backup utility you want to use; for this particular demo we were actually testing MSP360. But a lot of your backup software will easily do that; if you have full-image backup software, it's pretty easy to do. Clonezilla is a free one, but there are also things like Acronis; any of your cloning software will usually do it as well. Matter of fact, if you have a new XCP-ng server and you have some other hypervisor or virtualization system attached on the same network, that's really easy to do, because you start the receiving part on the XCP-ng side and you start the sending part on the outgoing hypervisor; they're on the same network, and they just transfer all the data from point A to point B.
So I've gone from any of these: from ESXi to XCP-ng, Hyper-V to XCP-ng, yeah. A lot of people spend a lot of time trying to figure out how to convert the file types, like converting the format of the file to import it in. There are ways to do that, but that's sometimes the harder way; just using the cloning software is pretty simple. When you make a new VM in a pool, it always defaults to the master host; are you doing something to pick which host in the pool to use? Well, yeah: if I do New VM, I choose where I want it to be. I have two pools here, lab pool and pool of Xen, and I have a video about how to join these; these two are in a pool together, and these two are in a pool together. So my pool of Xen is, you know, these two servers, versus the lab pool, which is these ones here. They're pooled together, but if I were to eject one out of the pool, it becomes its own master of a pool of one. Everything starts out as a pool of one; you merge them together, and that's how you put them together in a resource pool. What's the highest frequency backup, have you done a one-minute frequency? No, I wouldn't recommend that. I don't recommend minute-level frequency; that's just a bit much. Is this open source? Yes, everything I'm doing in this demo and all these setups, definitely all open source, completely available for you to do. Let's pause for a second and look at this restore process. How long did it take? Eight minutes to restore that. I don't know, we don't have another eight minutes to run that again; that restore took a little while. Let's see, what else do we have here? But when you're doing the backups, you can set them to the minute if you wanted to; I don't really recommend it though, it would be a lot of burden on the systems. This goes back to that topic of hyperconvergence.
When you try to synchronize storage between, like, two separate storage targets, the interconnect between those storages, the fact that you have to write to both of them all the time, means you have to have the infrastructure that's capable of supporting it. You're only as fast as the weakest link between them. And it's the same thing here. I mean, we have a continuous replication option where we can target two different storages, but now you're consuming bandwidth to tie those together every time a write occurs. This can be a bit challenging. So, "we just do a backup of the machine with Acronis and just restore"? Yeah, it's really as simple as that. Clonezilla will work as well; that's an easy way to do it. I guess migration from Proxmox or VMware is just a piece of cake? Yeah, it's really not a big deal. XCP-ng would not allow me to join unlike hardware, just saying, a pool with HP Gen7? It should let you join; and the reason I say that, well, I'd have to understand better, or have you post in the forums with the error messages, because you may notice right here: this is an R630 that's running an Intel processor, and this is a Ryzen processor. I cannot live migrate between these because they are incompatible for doing so. There's no way to do a live migration between an AMD and an Intel; if it boots on AMD, I can't live migrate it to the Intel, and vice versa. So that does stop working when you have dissimilar hardware, but it should let you put them in the pool. One thing it will do, though, when you put things in a pool: if you have different generations of processors, it will reduce down to the lowest common denominator of the pool. So if you have three hosts with this feature set and you add one that doesn't have those features, that feature set becomes basically locked out, and it has to downgrade the pool to the lowest common denominator.
So, yeah: "do a system backup every one or two hours, and a SQL or application backup every 15 to 30 minutes"? Yeah, and here's the thing. For us, we're only backing up the VMs, you know, every few hours; we have them kind of gapped out, because that way I can do a full VM restore. For database transactions, because of the way they work, you set things up so your critical data, just the database, which is actually not that many writes, is handled separately; that's a better strategy. Have your whole VM so you can restore it, but where's your critical data? For us, for example, it's all loaded into a database. Well, that database just synchronizes the differential changes out of the database; whether you're setting up a cluster of SQL servers that take the writes and then send all the change commits as they happen to a secondary SQL server, that's a better backup in the sense that, why would you back up an entire operating system every couple of minutes? You know, I get the critical nature of things, but take the data, and only the differential data changes, and back them up, and that'll be a little bit faster. So that definitely helps a lot. Have you seen a speed difference between NFS versions? I don't know, I've not done any direct testing. My understanding from what I've read in forums is there's not really any big difference, but I don't know; I have not tested that. I know what someone said on Reddit, and what someone said on Reddit may not be 100% true, obviously. Let me get rid of this VM here. Oh, let's fire this up and we'll actually run that backup; I can show you, I guess we can run one in action and see the differentials that are on there. Good, it started on the Ryzen, do I have it? Yeah, okay. These things boot; I love our Ryzen systems, they boot really fast. I'm building some more lab stuff that's going to be XCP-ng labs, and we're going to build them all on Ryzens. Are the backups compressed?
Depends on the backups. Delta backups, I don't think, are compressed, but you can set compression on the ZFS side, so you can handle it that way. Okay, there's not much on here, but we'll kick off one of these. Here's that delta backup; five hours ago is when I was doing these tests. We'll go ahead and run the backup, the small-block backup here, and let that one run. It started, takes an extra snapshot. Go over to the backups and watch the process run as it updates.

"Can you designate the NIC to connect to your storage devices?" You do it by building a storage subnet, and when you build the storage subnet, everything on it is storage. So it's not so much dedicating a NIC; you assign the IP range of your storage to that subnet, and then everything communicates on it. And, for example, don't put a gateway on that subnet, because storage shouldn't be routed; that causes inefficiency. Put all the storage devices on the one subnet and it'll use that for all the storage traffic. So essentially, yes.

"Have you faced a real XCP-ng crash, and how do you manage it?" This is actually a good question, and here's how we manage it. When you go to the backups and the restores, this is all automated. Well, I only really automate the one pool; on the lab pool I don't care, though I should probably set up backups on the lab pool too. They're not running, obviously. You can do the metadata backups inside of here, and these are all the different backups; this is the automated one right here that backs up XO. It allows you to instantly restore any of the metadata. So the hosts don't matter. People ask how to back up the hosts, and I'm like, it doesn't matter; only the data matters. For example, let me look at where one of my lab systems would be. Probably under VMs: the Debian 11 lab server.
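The "no gateway on the storage subnet" point is really just ordinary routing: traffic to a storage target leaves on the interface that shares its subnet, and with no gateway configured it can never detour through a router. A rough sketch with Python's `ipaddress` module; all addresses here are made up:

```python
import ipaddress

def pick_storage_interface(local_ips, target_ip):
    """Return the local IP that shares a subnet with the storage target.

    With no gateway on the storage subnet, this on-link interface is
    the only path traffic can take to reach the target.
    """
    target = ipaddress.ip_address(target_ip)
    for ip, prefix in local_ips:
        net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        if target in net:
            return ip
    return None

# Management NIC on 192.168.1.0/24, storage NIC on 10.0.20.0/24.
nics = [("192.168.1.10", 24), ("10.0.20.10", 24)]
print(pick_storage_interface(nics, "10.0.20.50"))  # -> 10.0.20.10
```

So "designating a NIC" falls out for free: every host and storage box with an address in that range talks over that NIC, and nothing else does.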
Yeah, this one should live here, and I can start it. This is my Debian lab server; we'll fire it up real quick.

"Any advantage over Hyper-V?" Yes. The first one is it's not built on Windows, so it's not Hyper-V. And it's way more flexible, with integrated backups. So definitely, yeah.

"Are delta backups merged into a new full backup, and are fulls carried forward from deltas?" No, they don't need to be that way; you can restore any version of the delta. Actually, while that's booting up, let's go back over to the backups overview. Backup successful. If I wanted to restore any of these, let's go over here: it makes a full version, so I can restore any one of these right here that I want. Matter of fact, here's some of my production stuff. If I wanted to restore a delta, I have full backups and I have partial backups, and this is where I can restore them. I can just go here, hit restore, and choose the version, whether it's one of the fulls or one of the deltas. So yeah, it's extremely simple to do, whichever one I like. These are some of the older backups; let me find something with a bunch of revisions in it, a lot of deltas, some of them older. Here, Windows page two: there are three deltas of this one. We haven't updated it in a little while, March 2022, but any one of them will restore the full version. So when you go to restore, even though it's a delta, it restores the entirety of the machine. It's kind of the same either way.

My wife has joined to let me know it's time to leave. "Why do you prefer this over Unraid?" Well, that's easy: it just has way more features. I don't really have a top-three list. But thanks, everyone, for joining. Oh, one last little thing before my wife drags me away; I'll show this and then we're gonna leave. So this right here is currently running on the R630.
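Why restoring "a delta" still gives you the whole machine can be sketched as a chain: start from the full backup and replay deltas up to whichever point you picked. This is a toy block map, not the actual VHD chain format XO uses:

```python
def restore(full_backup, deltas, upto):
    """Rebuild a complete image from the full backup plus deltas 0..upto.

    Each delta holds only changed blocks, but applying the chain on top
    of the full means any restore point yields an entire disk image.
    """
    image = dict(full_backup)
    for delta in deltas[: upto + 1]:
        image.update(delta)
    return image

full = {"blk0": "A", "blk1": "B", "blk2": "C"}
deltas = [{"blk1": "B2"}, {"blk2": "C2", "blk3": "D"}]
print(restore(full, deltas, 0))
print(restore(full, deltas, 1))
```

Either call returns a full image; picking an older delta just stops the replay earlier in the chain.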
And this shows how the hosts don't matter, because the storage is shared storage. If we wanted this VM on a different host, or if the R630 died, how do you get it somewhere else? You just move it over to the other host. So we can actually take it, migrate it to the 720 host, hit OK, and watch everything go wrong because it's a live demo. No, it shouldn't. There it is migrating: it was running on this one, and now it's on the other server. You'll watch it switch here; it's yellow because it's in switch mode. There you go, it's moved to the other server. Because you can always restart these on another host, the hosts don't really matter. If one of them were to break and you have more than one host in a pool, you just start it elsewhere; and if I didn't have another host in the pool, I could just go to one of the backups and restore it. So it really wouldn't be a big deal if one of the hosts died. And if everything exploded, so to speak, we have the full backups of all the metadata that was on the hosts, so we can rebuild those as well.

"What's the most cores and RAM you have live migrated, and how long did it take?" I could assign more and more RAM, and it wouldn't take much longer to do. If we bump this up to some higher number, it's just a matter of how fast the connection is; it's all about the performance of the system.

"You can have your husband back." Yeah, you can tell; my wife is in here saying it's time to go. Nonetheless, this is my live demo of it. Leave some comments down below if you want me to live demo other stuff. I've thought about doing some more live streams because they're kind of fun. I like showing people things in real time; there's no video magic hiding how any of this works, or if anything goes wrong. But you know, I like teaching people all this stuff.
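On the "how long does a live migration take" question, a back-of-envelope estimate is just RAM over link speed, since memory has to cross the wire at least once. The dirty-page multiplier here is an assumed fudge factor for re-sent pages, not a measured number:

```python
def est_migration_seconds(ram_gib, link_gbps, dirty_factor=1.2):
    """Rough lower bound for live-migration time.

    All of RAM crosses the link at least once; pages dirtied while the
    copy runs get re-sent, modeled here by a simple multiplier.
    """
    bytes_to_send = ram_gib * 2**30 * dirty_factor
    link_bytes_per_sec = link_gbps * 1e9 / 8
    return bytes_to_send / link_bytes_per_sec

# e.g. 16 GiB of RAM over a 10 Gb link, ignoring dirty pages:
print(round(est_migration_seconds(16, 10, dirty_factor=1.0), 1))  # ~13.7 s
```

Doubling the RAM roughly doubles the time, and a busier guest (more dirtied pages) stretches it further, which matches the "it's all about the connection and the system" answer above.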
So it's actually been a lot of fun, and thanks for watching. Check out the blog post linked down below and start playing with XCP-ng. I've got plenty of tutorials on it to get you started. Thanks.