Tom here from Lawrence Systems, and we're going to talk about Xen Orchestra and backups. But I want to start by laying out the land and some of the nomenclature for how they label things in here. This is something that may have created a little bit of confusion in some of the other videos, but it's a good place to start if you're not familiar at all with how XCP-ng works, how Xen Orchestra works, or why there are two separate names. They're both supported by the same development company and they're both open source. They both have paid versions and paid support, and when I say paid versions, they're really just paid support packages. Everything I'm going to show you today is 100% stuff you can compile yourself from the open source code, and that's the version I use. Anyone who wants to use this in their home lab or start building this out has access to everything I've shown here today, and I'll leave links, because I have instructions on how to build these VMs. So the hosts and pools are over here. An XCP-ng host is a single device, but every device, whether it stands alone or not, belongs to a pool; it's just a lonely pool by itself if there's only one. Then we have a three-host pool here, and then we have a larger pool down here. Pooling the resources is great when you want to, you know, automatically shuffle VMs between machines and everything else; they can all be in a resource pool. Xen Orchestra, though, is separate, generally running on a virtual machine within one of these. It does not have to run on all of them, or if you would like to build a dedicated standalone machine that just runs Xen Orchestra, you can do that as well. Xen Orchestra runs on Linux and is the tool that allows orchestration and management of all the XCP-ng servers, hosts, and pools: designating a pool master, joining pools together, all that functionality through a web interface.
And that's what we're going to be using. We're going to focus today specifically on the backups. Now, there will be people who mention that you can run Xen Orchestra on something as simple as a Raspberry Pi. And while that is true, when you're doing backups, a big limiting factor for how fast you can back things up is how powerful the system Xen Orchestra is running on is. I run it, as I said, inside one of my hosts as a virtual machine, and we're actually going to do this whole demo on a single host in a single pool with Xen Orchestra running within it. But it works one-to-many, and what I mean by that is Xen Orchestra can simultaneously connect to many different pools. And even when you have one Xen Orchestra connected to this pool here, if we wanted another Xen Orchestra simultaneously connected to the same pool, it will connect just fine. It's a many-to-many relationship. I just wanted to bring that up in case you're ever wondering whether you can have more than one Xen Orchestra connected: yes, because Xen Orchestra doesn't save any of the configuration data for the pool. It reads it, and any changes you submit are pushed to that particular pool. Whether you change a network setting, change any functionality of a VM, or migrate a VM somewhere else, Xen Orchestra is always reading back from the pool after it commits a change. That's why two Xen Orchestras can connect to the same pool, have no idea about each other, and both work perfectly fine. Now, on to the backups. Xen Orchestra lives right here; it's actually a VM inside this particular machine for demo purposes. The next part is the remotes. So here are our hosts and pools; that is how it talks to these devices. With remotes, whether Xen Orchestra is a hardware-based install or a virtual machine, either way you are limited by the speed at which you can talk to that remote storage. I do have this set up inside of here.
And actually it's got a 25 gig connection, so it's relatively fast, and our NAS storage device is actually going to be a Synology, just because I wanted to do something different. I usually do these with TrueNAS, and yes, it works with TrueNAS; it works with Synology. Really, we're using SMB, so it'll work with probably any SMB server you have; the choice is up to you. It also works with NFS; more on the remotes in a moment. Now, when it comes to remotes, you can have a single remote connected to a single NAS. You can also have more than one NAS, and if you do, you can have two remotes and maybe different backups going to different places. All the backups that are created, once again, are not stored at all in Xen Orchestra. They are all saved with their own metadata, and a JSON file describes what they are. So if for some reason you've done a whole lot of backups and disaster strikes and Xen Orchestra itself is lost, no worries, because you can grab the files it put on there, and we'll look at what the files on these NAS devices look like. That allows you to restore from there, or you can even pick them up with any other Xen Orchestra instance. And by the way, something else we do here, this is completely possible: you can have more than one Xen Orchestra connected to the same backup remote. This allows a lot of flexibility, and it's actually how we move things between our production environment and our lab environment and shuffle things back and forth really easily: we just connect them both to one common remote that we refer to as our lab remote, and then we have a separate remote for our production environment. Now, one thing that's coming but not available just yet, probably by the end of this year, and I'll do a dedicated video once it becomes available: they're also testing S3 as a storage target for remotes.
And that's pretty cool, because then you can easily do your offsites to an S3-compatible storage target. It doesn't have to be Amazon S3, just something S3-compatible. So that's something coming in the future, and I'm pretty excited about it. Now that we've talked about what the remotes and the hosts and pools are, let's actually go set something up. I went to the About page right here. You'll notice it says no support, which gives us this same screen. We're running xo-server 5.83 and xo-web 5.89, the latest versions as of November 2021, and the reason it says no support is that I compiled this all from source. As I said at the beginning, everything I do here is something you can download and compile; there's an instruction video on how to do that down below. Right now there's only one remote set up, not multiple. So we're going to add a new remote, and we have the options of local, NFS, and SMB. Now, local means actually saving it on the machine you built: if you built a standalone Xen Orchestra machine that's not a VM managed by these systems, you could put a bunch of hard drives in it and save backups locally if you have the storage. If Xen Orchestra is in a VM, that seems like a horrible idea, because now you're storing the backups of a VM inside the VM itself, and that's maybe not the best thing. So yes, it says local, but like I said, that means storing it on whatever machine or virtual machine runs Xen Orchestra. Not a real popular option, but it gives you some flexibility if you have some unusual or unique use case for storing it locally. NFS, well, the tried-and-true NFS: quite popular, been around a long time, and you can do an NFS remote. SMB: when I did some of my older videos this didn't work as well, but it works really, really well today. So we're going to create an SMB share, and we have one created right here; I just named it backups.
So we're going to go ahead and put the credentials and everything in and attach an SMB share. This is another one that I have attached, and we'll cover it later in the video, but for now we'll just leave it like this. We'll give it a name: Synology backup. There's no proxy in here. Address and share. Now, this is where you have a few different options for subfolders, and there are a couple of different strategies I want to talk about here. Maybe you want everything in one folder; that's fine. We're going to make a test folder so we have a place to test it. And actually, before we do that, we have to create the folder first. Well, let's do it without, so you can see the error message, and then we'll fix it. Username is LTS, throw a password in, save configuration. And we get an exclamation point because the connection failed. If we look at the test logs, we see it failed with error code 32, no such file or directory: it will not create the directory for you. So to fix that, right here there's a button to create the folder, called test. All right, this little button tests the remote; hey, look, it passed, and now we have test. It's as simple as that to create it. We'll do one more: create folder, test two. All right, now we have test and test two. So let's go ahead and do the same thing again; there's our test two. And if we wanted to, we could do another one without any subfolders, but then you'll see the subfolders it created underneath. You don't have to create these, but they can be handy if you have different retention policies and strategies for the files you want to keep. So here we have Synology backup and Synology test two. Let's go ahead and create a backup from there. We know both of these are enabled and have no errors. Matter of fact, let's go ahead and look at what files did get put in there: in each folder we get this little file that says keeper.
This is the test file it wrote to make sure it had permissions; that's one of the things it does when you hit the button right here that says test your remote, and your remote appears to be working properly. Now let's go do a backup. The first backup type we're going to cover is backing up the metadata, which covers pool metadata and the XO config. As I said, Xen Orchestra does not hold any of the metadata itself. The metadata on an XCP-ng host is the pool configuration, the network configuration, which VMs live on which storage, and all the other configuration parameters you have. Obviously this is a big concern when there's a single host in the pool, because if it dies, well, you will have to reconfigure all the networks and everything else. Even if you did manage to save the storage but lost the XCP-ng install, yeah, you're going to have to recreate all that data, and it's going to be messy. This is an easy, easy way to back that up: you just say backup metadata. So we'll name it meta and XO backup. The XO config itself is the users you've set up inside Xen Orchestra, the functions we've configured like these backup jobs, and everything else. All you have to do, if you ever lose it, is rebuild Xen Orchestra from source and restore this XO config file. Now, you can download the config file separately, but it's nice when it's all right here inside the backup. And it's connected to only one pool in this case. So we're going to say, hey, go ahead and do this, and give it a schedule. We're going to keep two of them: a pool retention and XO retention of two means two copies before it purges, and then you pick when you want this to run. I don't know, we could run it every hour, for example, if you wanted to go crazy; probably not necessary, but it's however you want it. The scheduling is pretty easy to do: pick which days of the week, or switch to, like right here, maybe you only want to back up on Saturday. There are a lot of different options.
I just have it run once a day; there aren't that many things changing, and you can always run it manually if you're ever making changes. So we'll go ahead and OK that. Select the remotes: do we want it to go to the Synology backup, test two, or the TrueNAS lab remote? Now, you may have noticed that last one is in there, but I don't have it enabled. You can select non-enabled, non-functioning remotes, but when the backup runs you're going to get an error message that the remote is disabled, and it'll stop the backup from running. So we're just going to go ahead and say Synology backup. Report on: never, always, or failure. This is pretty obvious, but you may want a report sent to you all the time, because if for some reason the job wasn't running at all, it wouldn't fail, and you wouldn't know. So I do like always for backup reports; you get a nice history of them. That is controlled within the plugin settings for email on here; I'm not going to get too far into that, it's easy enough to figure out how to put a mail server in the plugin settings for this. We'll say failure, because it doesn't matter here as I don't have email set up on this, and we'll go ahead and create. Now, by default the scheduling is disabled. You can click enable to turn on the schedule, or run whatever schedule you like; we're just going to go ahead and run it manually right here. So here's the meta and XO backup, and it ran that fast. It doesn't take long to run a metadata backup because they're very small files. So we go back over here; that's going to be under test. And here they are: the metadata JSON file, pretty straightforward; here's the XO config backup; and here's the metadata around it. Now, the metadata is, like I said, very important, because Xen Orchestra does not save anything itself; it doesn't need to know anything.
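The "keep two copies" retention is worth making concrete. Here's a tiny illustrative sketch of how a keep-N policy behaves, purely my own code, not Xen Orchestra's; the job names and dates are made up:

```python
from datetime import datetime

def apply_retention(backups, keep):
    """Keep the `keep` newest backups (by timestamp); the rest are
    candidates for purging."""
    ordered = sorted(backups, key=lambda b: b["timestamp"], reverse=True)
    return ordered[:keep], ordered[keep:]

jobs = [
    {"name": "meta-2021-11-01", "timestamp": datetime(2021, 11, 1)},
    {"name": "meta-2021-11-02", "timestamp": datetime(2021, 11, 2)},
    {"name": "meta-2021-11-03", "timestamp": datetime(2021, 11, 3)},
]
kept, purged = apply_retention(jobs, keep=2)
print([b["name"] for b in kept])    # the two newest survive
print([b["name"] for b in purged])  # the oldest gets purged
```

With a retention of two and an hourly schedule, you'd always have roughly the last two hours of metadata on the remote.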
If we fired up a new Xen Orchestra and pointed it at this remote, it would go, hey, look at all these metadata files, and be able to instantly find, read, and understand all the configurations in there. So from a disaster recovery standpoint, you're good to go. And also from a management standpoint, these are all just files; there's nothing special about backing them up. So this particular Synology NAS you can replicate to another Synology NAS; you could back all this data up offsite. However you want to handle the data, they're just files, and as long as those files exist, you can restore them to a folder, point a brand new Xen Orchestra instance at that folder, and it'll read through it and go, there's all the data you saved for the backups. Now let's go back over here to VM backups and create a new backup again. This time we'll back up a virtual machine. We'll give it a name; we'll call this our YouTube test. I'm not going to get too in depth, but rolling snapshot, the first option, is just what it sounds like: it creates snapshots of the VM on the schedule you set here, but they all live on the host, so it's technically not a backup. Backup is a normal full backup; that's why this remote selection came alive. So now we can say, all right, this is just a normal full backup. Now, disaster recovery versus backup: I'll leave a link to the documentation page on this. Essentially, when you do a backup, it backs the VM up to a file. When you do disaster recovery, it asks the question, as you notice when we open this: what is the storage you would like to send this to? And that can be on a completely different host that's not even in the same pool. That means you can get this VM replicated all the way over somewhere else, or even keep a duplicate of it on this machine, which would probably not be much of a backup, but maybe a duplicate on a different storage pool. This allows you to have a replica at the ready.
If that VM gets destroyed, you can always just start up the other one; it creates the replica over there and keeps it in step as backups run. The same thing goes for this: if we were to do a delta backup or continuous replication, delta and continuous replication are both incremental. The first delta backup is going to be big because it's the first one of the chain, and then every change after that is incremental. Same thing with continuous replication: we can select another pool, another XCP-ng host, or just another storage target, and have these incrementals stored over there as changes are made in the VM. So any of these work. I'm really partial to the delta backups, they're really nice, but sometimes you want a full backup, so let's run a full backup real quick. Full backup comes with a couple of extra options down here. Also, if you note, there is a delete first option; this is for when you have storage problems. We have this Ubuntu VM we'll be using for the demo. We'll say keep two copies for the retention (we're not going to worry about the schedule because we're not going to use it right now; actually, we'll just disable it). So we're going to keep two copies. If you don't use delete first, then for a very brief moment there will be three copies on the remote, and once the latest copy is verified, the oldest copy is purged from the system. That to me is the better way to do it. Delete first means that before the new backup is copied, the oldest one is deleted; if you don't have enough free storage to fit the next backup, that's when you'd want it, but otherwise a storage constraint is the only reason. Now, down here under advanced settings, same things again: report failure, recipient emails, concurrency. We're only backing up one VM, but you can select multiple VMs here.
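The difference between delete-first and the default is purely the order of operations and the peak storage needed. Here's a small illustrative sketch, my own simplification rather than XO's actual code, showing that both end with the same two copies but the default briefly holds three:

```python
def run_backup(existing, new_backup, keep, delete_first):
    """Simulate one backup run under a keep-N retention policy.
    Returns the final backup list and the peak number of copies
    that existed on the remote at any moment."""
    backups = list(existing)
    peak = len(backups)
    if delete_first:
        # Purge the oldest up front to free space before copying
        while len(backups) >= keep:
            backups.pop(0)
        backups.append(new_backup)
    else:
        # Default: write and verify the new copy first...
        backups.append(new_backup)
        peak = len(backups)          # briefly keep+1 copies on disk
        # ...then purge the oldest
        while len(backups) > keep:
            backups.pop(0)
    peak = max(peak, len(backups))
    return backups, peak

print(run_backup(["b1", "b2"], "b3", keep=2, delete_first=False))  # peak 3
print(run_backup(["b1", "b2"], "b3", keep=2, delete_first=True))   # peak 2
```

So delete-first trades a window where you have one fewer good copy for a lower peak storage requirement.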
And if you select multiple VMs — actually, I selected none; now there we go, one — you want to decide how many to back up concurrently. If you have 10 VMs, it seems logical to say I want to back up all of them simultaneously, but what kind of load will that put on your system? If your system is really fast, no problem. If you are worried about load, set a concurrency of one, two, three, however many you think you can back up at once; you can also leave it blank and let it figure things out. I recommend setting a concurrency that matches your system: unless you really need them all backed up simultaneously, set a low concurrency to keep the workload down, unless the backups are more critical. Timeout: if the backup is for some reason stuck and running too long, set a timeout in hours. That once again comes down to the speed of the machine and the size of the VM; a bigger VM takes longer to back up. But maybe you want everything to just cancel after an hour or two, because if it hasn't completed in two hours, you should probably look into what's wrong. It gives you that option if you need it. Compression: right here, Zstandard, XCP-ng only. This does work with Citrix XenServer, but only with gzip. Zstandard is really fast, and compressing as opposed to just copying the VM over makes things really nice. Snapshot mode: normal, with memory, or offline. Now, this comes down to how you want to back things up. Normal means just grab a snapshot: it creates a point-in-time snapshot of the VM, runs the backup, and life is good. But what if you have a bunch of things running inside that VM with a lot of data in flight? Well, that kind of creates a problem. You may want to do an offline snapshot instead: that shuts the VM down, takes the snapshot, and brings it right back up.
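The concurrency setting is just a cap on how many exports run at once. Here's a tiny illustrative sketch of the idea, with a sleep standing in for the real export work; this is my own demonstration, not Xen Orchestra's scheduler:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def back_up_vm(name):
    """Stand-in for one VM export/transfer."""
    time.sleep(0.05)
    return f"{name}: done"

vms = [f"vm-{i}" for i in range(10)]

# max_workers is the "concurrency" knob: at most 3 exports in flight,
# the rest queue up, limiting the load on the host and storage
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(back_up_vm, vms))

print(len(results))  # all 10 still get backed up, just 3 at a time
```

Ten VMs with a concurrency of three means three exports in flight and the next one starting as soon as a slot frees up.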
The offline backup option is the same concept when you want to use it. They give you a couple of different options here for whether you want a normal snapshot or to shut the VM down for it, because, well, database transactions. And just for quick clarification: offline backup, this checkbox that says exporting VMs without snapshotting them, means instead of writing a snapshot at all, the VM stays offline until the backup is complete, which is probably not the most ideal situation. So we'll just do a normal one to get this started: backup, normal snapshot, Zstandard, create. All right, there's this backup here; we're going to click and run it. All right, and the backup took two minutes. This is what the backup log essentially looks like. You can download the log file or copy it to a clipboard, and if you were backing up multiple VMs, it'll show which ones failed, which were skipped, which just started, which were interrupted, et cetera. It gives you all the details: it did a verification to make sure the file was there, when it started, when it ended for the snapshot, where the target was, et cetera. So this is pretty easy to do, and we can run it again if we want. But what does it look like when we restore it? Right here. So this is our Ubuntu server. We can go to restore, select it here, and all right, we can restore it right here if we want. Let's actually go ahead and do a restore. We're going to restore this one right here and generate a new MAC address, because I actually want to create a whole other copy of this. We'll go ahead and kick off the restore process. Now, while the restore is running, you can actually see it as a task right here. And like all progress counters, there is, well, a delay.
First it says it's going to take eight days, and now it's down to six minutes, and it's actually going to get faster and faster. I know how long these take on this particular machine: yeah, it's about two minutes, and it says three. It'll be done relatively quickly for this restore right here. But my mistake: I actually chose the wrong storage destination, so I canceled it and ran the same restore going to a different storage, which goes a little faster. That's why this little "failed" is down right here. This one goes substantially faster restoring to this storage: that first local storage is actually really slow; this one is actually really fast. Now, you can see I let the first one run three minutes before I realized I was sending it to the wrong storage; this one says a few seconds, and if you mouse over it, it took 43 seconds to do the restore. Like I said, this storage is, well, quite a bit faster. So now we have this other VM. We'll go to the VMs again, and we have this one right here: it's tagged "restored from backup," and we generated a new MAC address, so I can actually start this one up now. The long string of numbers in its name is the time and date of the restore. So here's another Ubuntu server, and it's booting up right now. Let it go ahead and do its thing; starting up for the first time takes only slightly longer than a normal start. So there's our Ubuntu server that's running, and here's the other one that's running. All right, and it's booted, almost... there we go. Now we have two different servers on here. Actually, of note, they're both tagged as lab, so go ahead and make sure you clean that tag up if you don't want it. Let's talk about another way we can do things that are backed up. So let's go ahead and hit stop. I want to duplicate this one one more time: let's clone it and leave both running. That's going to be something we'll use towards the end here.
We'll talk about how smart backups work later, because now I want to jump over to backing up VMs with delta. So if we do a delta backup: select remote, go ahead and hit the Synology backup again; report failure; advanced settings. Notice that the Zstandard option is missing, but the concurrency is there in case we have more than one VM, and we'll just do our same original Ubuntu one. And we'll say this time we're going to keep, let's say five... let's go ahead and go all out, 10 copies right here, and we'll put "keep 10" in for the backup retention name. So that's just a name: keep 10. You can also put in something like "daily" if you want to run it daily; however you want to do it, we're not actually going to run the schedule. Force full backup means you want to force a full backup the next time it runs, but we're actually going to handle that down here with full backup interval. When you're creating deltas, there is a first, initial large backup, and then every incremental change between what was in the first backup and the current state of the VM is really quick to back up, but restoring requires a series of merges back through the chain. Now, it has its integrity checking, but on the off chance there was a corruption, it would cascade through the chain. So maybe every 20 runs or more, you want to just go ahead and run a full backup again; that's what this option is there for. The other settings, snapshot offline or normal, are still there. And let's go ahead and create, because I want to show you what the VM looks like. Each of these VMs has no snapshots: when you do a full backup, it only takes a snapshot for the duration of the backup and then deletes it, so there's nothing left over. But with an incremental backup, the snapshot is how it understands the difference between backups. So this was our full backup: the YouTube full backup test; save.
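The full backup interval is just a counter: every Nth run becomes a fresh full, which caps how long the delta chain can grow and limits how far a corruption could cascade. A minimal illustrative sketch of that decision (my own simplification, not XO's code):

```python
def backup_kind(run_number, full_interval):
    """Every `full_interval` runs, force a fresh full backup to cap
    the delta chain length; otherwise an incremental delta suffices.
    Run 0 is the initial backup, which is always a full."""
    return "full" if run_number % full_interval == 0 else "delta"

# With an interval of 20: runs 0 and 20 are full, everything between is delta
print([backup_kind(n, 20) for n in (0, 1, 19, 20, 21)])
```

A shorter interval means more storage and transfer, but shorter merge chains on restore; a longer one is cheaper day to day but lets the chain grow.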
And we're going to go back up here to the delta job and name it YouTube Delta backup. All right, now we've got them both named. I don't really need the schedule enabled, and the full backup interval is 20. So we're going to go ahead and back this up: kick off the delta and jump over to the task that's running. You can see it's going to take about a minute to get that exported; we'll leave this pulled up until it's done. All right, the backup task is complete. Now let's go ahead and go here: I want to log into this server, because, well, there's not much going on with it right now; it's sitting here not doing much. So if we were to run another backup right away, there's really not much that has changed. We need to log into this particular server to make a change. So we go back over here to network, grab this VM's IP address, and go ahead and run fio on it. If you're not familiar with fio, I've got a whole video on how to do benchmarking with it. We're going to create a file, and right here is the number you want to see; this is the only job enabled. It's going to create three gigs of random data on this particular drive, which means when we do the delta backup, there will be three gigs of changes. So we just go ahead and run this real quick. All right, now we can run that delta backup, and we should see a difference of about three gigs in here. So: YouTube full backup, YouTube Delta backup. You can run it from here, because each backup job attached to this VM shows here, or we can go to the backups overview, and right here is the YouTube Delta backup; it doesn't matter where you run it, it does the same thing. All right, it's kicked off; let's see what the transfer is on it. The size was 3.17 gigs, and it took only a small amount of time to back up: it says a few seconds, and the overall process took 27 seconds.
And we could do things like reboot the server, reboot this VM, however we want. But if we actually jump right back in here and hit the backup again — this is the delta, so let's run it one more time without doing the three gig file creation and see what happens. It took five seconds to run the backup, and it only found and transferred 50 megs of difference: while the VM was sitting here with just its basic processes running, the changes amounted to 50 megabytes. That's it, a pretty small amount of data. Now we have quite a few backups in here. The last thing I want to cover, and the reason I created all of these, is smart backup, because this will allow us to back up quite a few different things simultaneously. So we go over here to backups, new backup, VMs, but we're going to change it to smart mode. VM status is all; select the pools they all belong to. With resident on and not resident on you can choose, you know, where the VMs live, but we want to actually make it really easy: we're going to choose the tag lab. Because if you choose lab, we can say see matching VMs, and it actually pops open a new window here. These are all the ones that match the tag of lab. Actually, we want to take one out, so we'll go ahead and close that; we don't need that one on there. Go back over here: here are our three matching VMs. You can see it says three right here that match, because we tagged those VMs lab. You can tag these however you want: put a name in there, tag them all critical devices, and have a backup strategy just for those; you can choose what type of backup goes on them. Maybe you want a delta backup of all of these, and that's fine; we'll actually do a delta backup of all these particular ones. So: delta, all the lab VMs. There we go. Select the remotes; we're going to say Synology backup there, and all the rules from before still apply: offline snapshot or normal, no problems there. Scheduling still applies.
We're going to keep five, so retain five of all of these, and concurrency: I think this system is pretty fast, so we can do two at a time. This will back up two, and as soon as one is completed, it grabs the third; that seems reasonable for this server. I could probably actually back up all of them at once; this is that Supermicro server I did a review on, so it will definitely handle more than that. We'll go ahead and kick this off, and I'll show you what it looks like backing up multiple VMs. Now, the cool thing about this particular backup is that if we add another machine and tag it lab, we won't have to adjust the backup. It automatically grabs everything with that tag; it says those are the ones you want backed up, because they have that tag. This is actually really nice when you want to mark something like production VMs and have them backed up: you can have a single backup job that runs on whatever process and schedule you want, or multiple versions of it with multiple targets, multiple remotes, et cetera. But also, whenever someone adds a new VM to your pool, it can just be tagged with production, and the job automatically grabs it, saying maybe this is one of the ones that needs to be on this list. This is where smart backups are actually really helpful: they give you a lot of flexibility to keep backing things up without having to remember to modify the backup jobs, especially when you're talking about, and we've got some clients like this, hundreds of VMs. That's where this gets really helpful for them, because they want them backed up, but they don't want to create a backup job for each one of the hundred VMs. They have a backup system that rolls through and grabs all of them at a certain level of concurrency; you can just smart tag them, and they get backed up over time.
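Smart mode is essentially a tag filter evaluated at run time, which is why newly tagged VMs get picked up automatically. Here's an illustrative sketch of that matching logic; the inventory, names, and function are my own stand-ins, not Xen Orchestra's implementation:

```python
def smart_match(vms, required_tags, excluded_tags=()):
    """Select VMs whose tags include every required tag and none of
    the excluded ones. Because this runs at backup time, a VM tagged
    later is picked up automatically, no job edit needed."""
    return [
        vm["name"] for vm in vms
        if set(required_tags) <= set(vm["tags"])
        and not set(excluded_tags) & set(vm["tags"])
    ]

inventory = [
    {"name": "ubuntu-original", "tags": ["lab"]},
    {"name": "ubuntu-restored", "tags": ["lab", "restored from backup"]},
    {"name": "prod-db",         "tags": ["production"]},
]
print(smart_match(inventory, required_tags=["lab"]))
```

Tag a fourth VM with lab tomorrow and the same filter returns four names without the job changing at all.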
Whatever schedule that is for those particular systems; in this case we set it up to be incremental. Now, as it's running, you can actually go through: here are all the backups, here are the started backups, and one of them was skipped. This is an important thing I want to talk about, because you will run into it depending on how fast a machine you have, and I like that they added this help right in the backup itself: backup jobs regularly delete snapshots, and when snapshots are deleted, either manually or via backup, it triggers the need for the host to coalesce the VDI chain and merge the remaining VDIs and base copies back into the chain. This means, generally, you cannot take too many snapshots on said VM until XCP-ng has finished running the coalesce. This is a really important behavior, and it's something you'll run into; it's there to protect all the data. All of this happens with a series of snapshots, and it can't coalesce because there's one left over from when we ran the backup job just before. So if you ran a backup job every minute, then depending on the speed of your storage, you will run into a problem where it may not have time to coalesce all the temporarily created snapshots back together. So it protects the chain and protects the integrity of the data store in the pool. You want to at least space these out by a few minutes, and if you have a really slow or heavily loaded storage system, you want to space them out further. This comes back to that concurrency setting, and of course the overall load of the VMs running on your system. And this is what the final report looks like: here are all three VMs in the backup; here's the one that was skipped, because that's the one we ran a delta on just before we did this; and there are the other ones that were successful, all staggered right here. Now, all these files I've been creating...
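The skip is effectively a guard: if too many snapshots are still waiting to be coalesced back into the base copy, the host refuses to stack another one on top. Here's a minimal illustrative sketch of that kind of check; the depth limit of 30 is an assumption of mine for the example, not XCP-ng's exact number:

```python
def safe_to_snapshot(chain_depth, max_depth=30):
    """Refuse a new snapshot while too many VDIs in the chain are
    still waiting to be coalesced -- protecting chain integrity.
    max_depth here is illustrative, not XCP-ng's real limit."""
    return chain_depth < max_depth

# Spacing backup jobs out gives the coalesce time to finish between runs
print(safe_to_snapshot(chain_depth=5))   # fine, plenty of headroom
print(safe_to_snapshot(chain_depth=30))  # skipped until coalesce catches up
```

Slow or heavily loaded storage coalesces more slowly, so the chain stays deep longer, which is exactly why you space jobs out further on such systems.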
let's talk about where they all live. Let's go inside the "backup" share, into "test", and look under xo-vm-backups. This is where people sometimes get a little challenged: looking at these backup files, we don't immediately know which VMs they belong to. Well, it's not hard to figure out; that detail is in this JSON file, along with the virtual disks that are included. As a matter of fact, that one's probably a full backup, because I only see one file; yes, that one's a full. Let's go back to the top one. There we go: here's a series of them, and there's a checksum for the deltas. There are all the different deltas: our 7.6 gig, the 3.17 from when we made the change, and then a 50. Obviously, turning these UUIDs back into something readable can be a little challenging if you're looking at it from the back end. This is one of the reasons I suggest that for a certain group of VMs, maybe a smart group you have, you create a backup job with its own subfolder in that share, or maybe just a separate share. That lets you keep them separate, because maybe you want those, and only those, backed up off site. Just by looking at the file names it's hard to figure out which is which, but Xen Orchestra does know. These are just little details for how you want to think about your backups. The metadata itself is retained, as I said, in each of these JSON files, and that's what Xen Orchestra reads to display this information.

Now, one more thing: let's go back over here to restore. Here are all the different backups. It groups them together for you; for this particular server, here is one full backup and three deltas. That makes it pretty easy to see, when you want to do a restore, which one you'd like: delta, delta, delta, full. These ones right here only have deltas. Then you choose where you'd like to restore them to; it doesn't matter which one, and you can even do a group together.
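Mapping those UUID-named files back to VM names is exactly what the JSON metadata enables. A rough sketch of doing that by hand from the back end; the field names here ("vm", "name_label") are my assumptions about the layout for illustration, not a documented schema:

```python
import json
import pathlib

def list_backup_vms(backup_root):
    """Walk an xo-vm-backups tree and map each JSON metadata file
    to the human-readable name of the VM it describes.
    Field names ("vm", "name_label") are assumed, not a documented schema."""
    results = {}
    for meta in pathlib.Path(backup_root).rglob("*.json"):
        data = json.loads(meta.read_text())
        label = data.get("vm", {}).get("name_label", "unknown")
        results[meta.name] = label
    return results
```

Xen Orchestra does this lookup for you in the restore view, which is why the UI shows friendly names while the share shows only UUIDs.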
And if you wanted to delete these, there's a delete button right here; I can delete all these backups. You can even group-restore them: you can just say grab the latest of all of these, and because I grouped them, it's simply going to pull them by latest. Give it a destination, and it will restore three VMs simultaneously. So yes, you can do group restores. If you've made some catastrophic "oops, I deleted the whole group called production" mistake and you'd like to restore the whole group called production, you can grab them in here. They are also searchable, and searchable matters, because when you go back over here to remotes and add one more remote to this pool, then just by enabling and adding that remote and going back to backup restore, there are now two pages of things in here: pfSense backups, Windows 10, Kali Linux, some domain stuff we were testing, a Windows server. If I wanted to restore that, I could go here and pick which version; I have an August 20 backup of that particular Windows server.

And for one last demo, to show you how the disaster-recovery side of things works, we're going to go ahead and just break this. This instance of Xen Orchestra I'm actually going to delete, because I'm done with that particular server. So we'll go here and switch to another one.
Now, this is a completely different Xen Orchestra lab I have that's not on that Supermicro system I was using. Let's go ahead and go in here, look at the SMB share, and connect it back to that same Synology, because once again, this instance has no awareness of that particular system. Those are all the backups I did just for this video, and all of these were under that "test" folder here on the Synology. So go ahead and add that back: the address, the username (LTS), put the password in, save configuration, enable it. I have no errors, and I can test the remote. This one's not connected quite as fast, so the numbers will be a little lower over here, but we can then go to backup restore, and there are those different ones: the other test server, and so on. As a matter of fact, it's easy enough for me to go here, grab one of these, pick where I want to restore it to right here, hit OK, and it will restore the VM. So basically, your disaster recovery is: I loaded, or just grabbed, another instance, which is a completely separate system, pointed it at that remote, and away we go; we have all the files.

Now, we didn't have anything at the root of the other remote, so one last little test: if we go here and edit it again, take out "test" so we don't use a subfolder, and save the configuration (it gives an error if you do it while it's not enabled, but you can save things right here), then if we go back up to backups and restore, it will still see them, because it does scan subfolders. I just wanted to point that out; that's how it can see these when you're in here.

So hopefully this clarifies, or gives you a better understanding of, how Xen Orchestra, XCP-ng, and the backup systems work. Leave your questions and comments down below, or head over to the forums, where we can have a more in-depth discussion. There's also a lot more coming; we've been working with them, and on the live stream that came off last Thursday, Olivier Lambert, head of the team over at Vates, was on the live show, and
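The subfolder-scanning behavior just described, where Xen Orchestra finds backups even when the remote points at a parent folder, can be sketched like this (a conceptual illustration, not XO's implementation; the only grounded detail is the `xo-vm-backups` directory name seen on the share earlier):

```python
import pathlib

def find_backup_dirs(remote_root):
    """Scan a remote recursively for backup folders, so backups are
    found whether the remote points at the folder itself or at a
    parent of it. The "xo-vm-backups" name matches what we saw on
    the share; the scanning logic itself is a sketch."""
    root = pathlib.Path(remote_root)
    return sorted(str(p) for p in root.rglob("xo-vm-backups") if p.is_dir())
```

This is why removing the "test" subfolder from the remote definition still surfaced the same restore entries: the scan walks down into subfolders until it finds the backup directories.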
there are some more things coming for the backup system, including the potential for doing a mock restore and test of each backup. The team is really innovative; they're really on top of new features and things like that, and that makes it great. I really enjoy this backup system. I've had systems fail, because systems fail, but the backup system has always worked for me. It gives me the confidence I need to point it back at a directory with a bunch of files in it and say, "Yep, I just need that VM, put it over there." It also makes building these labs so easy, because we just have a bunch of VMs scattered around on these remotes; I grab them, drop them on a new server, and can be up and running in a few minutes. Like I said, it's pretty simple. It's a lot to wrap your head around at first, but hopefully that beginning explainer helps once you get the nomenclature down. They also have great documentation that goes in depth about how the virtualization handles all the different snapshot types, and especially if you want to dive deeper into VDI coalescing, there's a lot of good write-ups and material on there.

All right, links down below to the backup section of their documentation, my tutorials on how to build Xen Orchestra from source, and my whole tutorial series at Lawrence Systems on Xen Orchestra and XCP-ng. Thanks, and thank you for making it all the way to the end of this video. If you've enjoyed the content, please give us a thumbs-up. If you would like to see more content from this channel, hit the subscribe button and the bell icon. If you'd like to hire us for a short project, head over to lawrencesystems.com and click the Hire Us button right at the top. To help this channel out in other ways, there's a Join button here for YouTube and a Patreon page where your support is greatly appreciated. For deals, discounts, and offers, check out our affiliate links in the description of all of our videos, including a link to our shirt store,
where we have a wide variety of shirts for sale, with designs coming out, well, randomly, so check back frequently. And finally, our forums at forums.lawrencesystems.com are where you can have a more in-depth discussion about this video and other tech topics covered on this channel. Thanks again for watching, and we look forward to hearing from you.