All right, we are live, and I have an exciting announcement that Jay doesn't know. Olivier, I'm saying Oliver, but I think it might be pronounced slightly differently because he's from France, is the head of the XCP-ng project, and he happens to be in the live stream. Not as a guest on here, but he happens to be in the chat, which I thought was pretty awesome. Wow, that's really cool. Yeah, so I thought that was cool. Many shout-outs to the team, and Olivier heads that team that puts on the whole XCP-ng. The company is Vates. So yes, they're awesome. And that's what we're diving into today, XCP-ng, so it was unexpected but great to see him in the chat. This is one of those times: reach out to me offline, Olivier, and we'll get together and do an interview, because I'm obviously a huge fan of the project, which is one of the reasons we picked it as today's show topic. I'm a fan too, actually, so that would be great. Yeah, Jay's been diving into it as well. Even though we know Jay's using Proxmox, he's been really looking at the whole XCP-ng platform, as a lot of other people have. But before we get to that exciting news and all the details for XCP-ng, we do have to thank a sponsor of the channel, and that is Linode. If you download this podcast, it is literally brought to you by Linode; they are a great sponsor of the show here. And how long have you been using Linode, Jay? I lost count. How long ago was that Penguicon? Oh, I feel like it might have been 2014 or 2015. I think it was longer ago than we thought. Maybe it was longer than I thought. Yeah. Ever since they gave everyone a card with free credit to start out, I've loved it, and I'm using it for the YouTube channel, Learn Linux TV. Everything when it comes to the web presence for the channel is on Linode. This podcast is on Linode.
They have some great services: Kubernetes, a whole marketplace with one-click apps. It's a great service, and I love it. So if you are interested in getting signed up with Linode, we have an offer code down below. Many of the projects we've talked about on this HomeLab show you can try and test out on Linode. All right, now let's talk about running your own hypervisor in your own lab. As a matter of fact, in my forums this was a long, ongoing thread: which hypervisor do I choose? There are so many options. Now, I do tell people: if you have a specific goal in a career path that means you have to learn a specific proprietary hypervisor, then you don't get a choice in the matter, and maybe you don't choose XCP-ng. But for the most part, when you do have the choice, going with one of the open source ones is huge, and XCP-ng is actually very widely used in the enterprise market. That's the myth about open source: that it's only for hobbyists like us on the HomeLab show. There's so much more to it. Now, XCP-ng is actually based on XenServer; that's the core of it, and it has a history that, well, I don't know if it was the first, but it was certainly among the earliest hypervisors in the open source world. XenServer is the core, and XCP-ng is basically a distribution around that core. So it's like getting a tool, with XenServer as the core piece, and XCP-ng building all the pieces you need to actually make a fully working, manageable distribution around it, so you have a nice turnkey open source hypervisor appliance. Now, a little bit of history: for those of you who have followed my channel for a while, I actually learned about it because I was using Citrix at the time. And Olivier just confirmed something for us: the Xen hypervisor is indeed the first open source hypervisor, from 2003, and XenServer itself came in 2007.
So it's actually the Xen hypervisor specifically. I like that we have Olivier here; he is an absolute wealth of knowledge on this, and he'll drop things in the comments that we'll read as they come in. We do live fact checking. Yeah, live fact checking, and also enhanced facts. So the history of it is kind of cool. Now, Citrix was originally the group of people, the business, you could say, that was the steward of that and was building a distribution. Unfortunately, they did a less than great job and created some controversies that aren't really relevant to go into on the show; you can easily Google and find all this stuff. Basically, they didn't do a great job of it. Now, over at the Vates slash XCP-ng team, they made a product, and we'll be talking about this product as well, called Xen Orchestra, that orchestrates and manages the entire Xen system with a really slick web UI. So what happened is, they saw an option, basically going: well, if Citrix is not doing a good job of it, and we make these tools that work really well, that people like, that manage XenServer, why don't we just make an underlying system that works perfectly with them, fully open source? So essentially they started off with a Kickstarter, and that's how we got XCP-ng. There are probably a million details I left out in between, but in the interest of brevity, that's where we're at now. Now, this is also an important aspect: there are two separate pieces here. You have XCP-ng, the distribution you load, and Xen Orchestra, the tool that orchestrates and manages all of it. Those being two separate things sometimes creates some confusion when you're first getting started. And I think, compared to some of the other hypervisors, there's a little bit more you need to know up front.
One of the things is, when you're doing it like this, you don't have that holistic, I just load one thing and get it all working. Now, a little foreshadowing here: they are working on some upcoming features to basically offer a Xen Orchestra Lite as an option that installs with it. That's not where we're at today; that's a product that's coming. So this will change a little at some point in the future when that gets released, and I will be excitedly doing videos on it. But for right now, there are essentially two separate components: Xen Orchestra is one thing, and then the Xen server. Ideally, you take Xen Orchestra and run it as a virtual machine within the Xen server, the XCP-ng system. Now, one of the things about doing it like that is people go, well, now I have an extra VM that just runs the management tool. It may seem like a lot of overhead, but what it offers you is a great deal of expandability. Once you understand that they are two separate things, and by the way, I think it might have been you, Jay, who was messing with this, it's not absolutely required that the Xen Orchestra software run inside of XCP-ng, because it just has to talk to it; it doesn't have to operate as a VM there. You can even spin it up as a separate VM elsewhere, wherever you want that VM to live. It then has to communicate with the Xen server, but as long as it has communication with it, it doesn't have to live inside of it. It's easier to set up inside of it, because if you have a hypervisor, why virtualize it somewhere else? But at least as a thought exercise: yes, you can separate the two. Yep, and that's exactly what I did. So when I went with Proxmox, two of the reasons I went with Proxmox were: one, the UI was built in, and two, they had built-in containers. But I love both; I honestly, absolutely adore both solutions.
But since I went with Proxmox, I've kind of walked back my original opinion about it being better for the UI to be built in, because with Xen Orchestra, I can have it on my laptop. And that's what I did when I was playing around with it: I just loaded it on my laptop, and that worked out great, because now there are no CPU cycles wasted on XCP-ng for the UI or management layer; it's on my laptop. So there was no VM running in XCP-ng at all. You could actually argue that gives you more flexibility, because you can load the management engine, or whatever you want to call it, basically Xen Orchestra, on whatever device you want that's within reach of your hypervisor. Yeah. And as someone pointed out in the chat, just for reference, because a lot of people are obviously very familiar with the VMware platform: vCenter is, in a way, very similar to Xen Orchestra. It's not a direct comparison; I would say Xen Orchestra probably has more features, and we'll get into that later. But the other thing that's important, from a fundamental standpoint of the way the XCP-ng system works, is that you do not have to have Xen Orchestra running to keep the hypervisor running. They are two separate layers; that's how it can run as a VM. And when you have it set up like this, it gives you a lot of flexibility, because at the base of it all is a series of API calls. Xen Orchestra simply makes API calls to the Xen server. The reason that matters is, as you scale up and want to build really massive systems, you can take a single instance of Xen Orchestra, or go to the command line and issue API calls to make things happen. You can move VMs from here to there, you can change settings, you can do all of that. It lends itself to very easy scripting.
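As a rough sketch of that scripting angle: Xen Orchestra ships a command-line client, xo-cli, that drives the same API the web UI uses. This is a minimal sketch from memory of the tool, not a definitive recipe; the URL, credentials, and UUIDs below are placeholders, and subcommand parameters may vary by Xen Orchestra version.

```shell
# Install the Xen Orchestra CLI (needs Node.js/npm on your workstation)
npm install -g xo-cli

# Register against your Xen Orchestra instance (placeholder URL and credentials)
xo-cli register https://xo.example.lan admin@example.lan mypassword

# List the VMs Xen Orchestra knows about
xo-cli list-objects type=VM

# Start a VM by UUID (placeholder UUID) - the same calls a cron job could make
xo-cli vm.start id=4d9c1f2e-0000-0000-0000-000000000000
```

Anything you can click in the web UI maps to an API method like `vm.start` above, which is what makes the cron-job-style automation discussed here practical.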
And whether you do it inside of Xen Orchestra, or you have some type of cron job you've set up separately that performs a task for you, you're making API calls, and Xen Orchestra is reading from or writing through those API calls depending on what you're doing. The source of truth is always the Xen server itself, and everything else reads from it. And when you get into these large-scale systems, this is what Xen was really designed for, and this is what makes XCP-ng really popular. I highly recommend spending a few minutes over at the XCP-ng blog and looking at some of the partnership announcements. This is used at scale in data centers. Some of the consulting work we've done with it also shows it works at scale; we've seen some pretty big installs. I mean, our install is pretty small compared to a lot of the stuff we've consulted on. And I just want to bring that up because, you know, is it ready for prime time? It can feel like a new project if you only look at when XCP-ng was released. But as Olivier pointed out, we have some of the earliest inventions in the hypervisor world here, the open source one being the Xen hypervisor. It's been around a long time, it's supported natively, right in the Linux kernel, and it's a very tried and true, very well supported system. XCP-ng is just the latest packaging of it, to make it easier to manage. So that's a little bit of the history, and a little bit of the why of XenServer. These are really important fundamentals when you pick systems that are going to work inside your data center, for example, not just at home. But that's actually the advantage, because this is in use in the market. And it's, by the way, a drop-in replacement for people who were running Citrix. You can move all your Citrix stuff right over into XCP-ng, and Xen Orchestra will simultaneously manage Citrix systems and XCP-ng systems.
So for those of you who already know about it, or were using Citrix: yes, there's a complete migration path, so you can get off of there and move over. Nonetheless, I think the history of it matters, because it's how we got here, how extensive a project it is, and why it's worth picking if you're going to invest some time, because any time you pick a project like this that's going to run all your hypervisors, there's a time investment, and you don't want it to be a project that's kind of dying. It's anything but dying; as a matter of fact, I really think they've breathed a lot of new life into it. All right, did I cover that well, Jay? I think you did. And I would actually go a step further, because even though I'm a Proxmox user, I'm going to call it like I see it and be unbiased here: I really feel like XCP-ng has more momentum lately than Proxmox has. I just feel like they're very passionate about what they're doing; they're putting a lot of effort into it. That's not to say the Proxmox developers aren't passionate, but there are some really exciting things coming in XCP-ng; they're making constant improvements. It kind of makes me wish I was running it, because Proxmox is more status quo: it works, we get some improvements here and there, but XCP-ng is really exciting these days. Yeah, it's been fun watching. I'm always excited whenever there's a new update for either the XCP-ng hypervisor or the Xen Orchestra system, because there have been a lot of great things in there. So, okay, cool. Here's another announcement, an inside scoop right from Olivier: for all of you talking about automation, there will really soon be an Ansible module for the Xen Orchestra API that will complete the existing Packer- and Terraform-compatible tools. And we've talked about Ansible on here. We haven't talked much about Terraform; I think we mentioned it in some of our automation episodes, if you want to go back to those.
But essentially, it's infrastructure all defined by a series of calls, functions, and scripting. So yes, those are some more features you can dig into, and you can Terraform your way into a completely built stack. So I think Olivier needs to be very careful when he says Ansible, because he might make me switch. Yeah, because once you can start scripting everything through Ansible, life gets even easier when you can have Ansible deploy your XCP-ng systems. Jay is a huge Ansible fan, for anyone who's maybe new to this episode and didn't listen to previous ones or know anything about Jay's channel. It borders on an obsession at this point, I think. But I mean, with Proxmox you're able to do the same thing as well; depending on how it's implemented in XCP-ng, though, honestly, I could go the other direction. I can't wait to check that out when it hits general availability. Yep. All right, getting started with it. We kind of broke down that there are two separate things, XCP-ng versus Xen Orchestra. But of course, then we have: what do I load it on? And it's not too hard to load. It is currently, and don't panic, folks, yes, based on CentOS; it's in the Red Hat world. That can change later, because we know CentOS is in a different state; reference Jay's video on that, and we can leave a link for those who want to know the current status of CentOS. But don't worry, it is still completely supported, because XCP-ng uses their own repositories; they're completely aware of the situation. We'll just leave it there. Now, because it is based on Linux, it is quite compatible with hardware, though there are hardware compatibility lists you can look up. It runs very well on a multitude of servers; I've seen people post a wide variety, from self-built systems on up.
As a matter of fact, one of the things when we did a cluster video, because yes, it has fun features like HA and clustering and all the things you'd expect out of an enterprise-level hypervisor: I built one on what we called the lackluster Dell cluster. I just took a bunch of Dell desktop computers and built a cluster with shared HA storage and everything. You can run this on most anything Linux will run on, so you don't have to spend too much time on that aspect. And when you have, let's say, a single drive, because you're building this in your home lab, it will partition the drive so you can use some of it for what they refer to as dom0, which is the XCP-ng install itself, and some for storage. Ideally, you're going to want a system with a series of drives; RAID is supported, as in, you can use ZFS, you can use RAID, you can set it up how you want. It's Linux. Now, it's not going to give you a UI for setting up something like ZFS, but ZFS is available; there's a switch to turn it on, I should say, and you can create a ZFS pool and use that for storage. This is an important aspect: you kind of have to decide how you're going to do the storage, but you can manage it that way. An alternative option, and the one we're using in our Dell servers here: if you get an enterprise server, such as a Dell server with its own hardware RAID, you can use that as well; it's very well supported on that type of server. So whether you're buying used enterprise gear off of eBay on a budget, or new if you have the budget for that, it's very broadly compatible. It's also broadly compatible with network interface cards; I've seen everything from Chelsio to Intel. Intel seems to be the most favored in there; that's what we're running right now, Intel.
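For the ZFS "switch to turn it on" mentioned above, here is a hedged sketch of what that looks like on an XCP-ng host, going from memory of the XCP-ng documentation: the package comes from XCP-ng's own repos, and you then register the pool as a storage repository. The device names, mount point, and UUID are placeholders; check the current docs before running this on real hardware.

```shell
# Install and load ZFS from the XCP-ng repositories (on the host, as root)
yum install -y zfs
modprobe zfs

# Create a pool on spare disks (placeholder device names, destroys their data)
zpool create -o ashift=12 -m /mnt/zfs tank /dev/sdb /dev/sdc

# Register the pool as a local ZFS storage repository (placeholder host UUID)
xe sr-create host-uuid=<HOST_UUID> type=zfs content-type=user \
  name-label="Local ZFS" device-config:location=/mnt/zfs
```

Once the SR exists, it shows up in Xen Orchestra like any other storage target, so the lack of a ZFS UI only matters for this one-time setup.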
I've had some nuanced problems, which I've covered before in videos, when you use some of the other ones; they work, but sometimes there's just a little quirkiness with non-Intel 10 gig cards. Nonetheless, good support on all of that. Now, from there, one of the other things that's really important, and this comes back to it being an enterprise hypervisor, is shared storage options, as in external storage targets. You can load it on a series of machines; for example, when I did the lackluster Dell cluster video on clustering, I took three Dell desktop computers, but none of them had much storage. They all had minimal amounts of storage, and you don't need a lot of storage on each of your XCP-ng systems, because you can have shared storage, with iSCSI or NFS being the more popular options; they support a few others as well. They have experimental support for Ceph and a few other storage options, but generally speaking, most people are going to go with either NFS or iSCSI. NFS is probably my preference, because I know it's just easier to manage for a multitude of reasons, like when you're using snapshots. I've used both TrueNAS systems and Synology systems as storage targets; those are two popular options, and it works really well. This gives you the flexibility of external storage: let's say you have more than one system, you set up one system as the master and each subsequent system as a secondary after that. This allows you to easily move VMs between any of the systems, because the storage lives where the storage is; we just have to move the live running machines between each host in the cluster, as long as there's shared storage between them. It sounds like a lot to grasp, but this is a very well supported feature.
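Attaching that kind of shared NFS storage can be sketched with a single `xe` command on the host (Xen Orchestra can do the same from the UI). The NAS address and export path are placeholders for whatever your TrueNAS or Synology box exports:

```shell
# Create a shared NFS storage repository visible to every host in the pool
# (placeholder server IP and export path)
xe sr-create content-type=user type=nfs shared=true \
  name-label="NAS VM storage" \
  device-config:server=192.168.1.50 \
  device-config:serverpath=/mnt/tank/xcp-vms
```

Because `shared=true` makes the SR pool-wide, any VM whose disks live on it can be live-migrated between hosts without copying the disk data, which is the mechanic behind the cluster behavior described above.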
And you can start small; as a homelab show, we generally lean towards doing this on one machine, but it's easy to migrate bigger, and you can also have a blend: a group of machines with both local storage and remote storage shared across the cluster. Yep. And I think that goes along with our mentality: use what you have. I mean, I'm sure we'd all love to spend some money and buy some great servers, but who has the money to buy a bunch of servers, as much as I'm sure we want to. And I think it just passed by in the chat room, but there was someone with a NUC, a laptop, and an R720 in a cluster. So whatever you have, as long as it supports virtualization extensions, which some CPUs don't, so that's going to rule out some things, but generally speaking, like Tom said, you should be able to run it on just about anything. Yeah. Now, once you have the system loaded, it's kind of barren, so to speak, back to that management layer not being on the server. But I can tell you they've done a great job of auto-deploying a virtual machine of Xen Orchestra. There are a couple of different ways to get Xen Orchestra. You can get it fully supported and buy support options from the team over at Vates; that's a great option, and they do have enterprise support. If you want to compile everything yourself, you can. Or there's even a third option: they offer a basic version of Xen Orchestra, not the full feature set, but a basic one, for free, already compiled, no knowledge necessary, through what they refer to as a quick deploy system. What this allows you to do is quickly get started. You load XCP-ng, you go to the host's IP address, literally just over HTTP, there's nothing really to do, and it brings up a web page with a quick deploy option. The quick deploy option lets you quickly get Xen Orchestra deployed. And you need to deploy it somewhere.
Ideally, like I said, my preference is to run it on XCP-ng, but as you pointed out earlier, you can run it separately. Once you have it deployed, that gives you the management interface you're looking for to actually start deploying, setting up, and migrating systems. Now, you only need one Xen Orchestra for however many hosts; I don't know what the upper limit is, but it's probably quite huge. You can have a few hundred Xen servers with one instance of Xen Orchestra managing them. So in the example I gave, where you have several systems you may want to cluster together, you don't need Xen Orchestra running as a VM on each of them. It runs on only a single one of those systems, but then easily controls many systems. They even go a step further: as an IT company that manages things, we can take our Xen Orchestra, located here in my office, connect it via VPN to a client's office, and actually manage their servers from our instance here over the VPN. That's how it sometimes works in data centers, too: you have one install running, but it manages across multiple locations. One of the ways it handles this when you're making backups, for example, is with a series of tools that can run remotely, referred to as XO proxies, for tasks like backup, which we'll cover a little later. So it really does, from an architectural and design standpoint, allow for scaling out. It's definitely a really slick system. Now, if you build it from source, you're going to have to get the VM on there yourself. Okay, I see Olivier has actually dropped an important piece of information here on scale limits: they have customers with about 1,500 hosts under one Xen Orchestra, so it will at least scale to 1,500 servers. Unless your home lab is extensively big, I think 1,500 servers is a pretty good limit. What do you think, Jay?
I think if anybody listening has 1,500 or whatever running in their home lab, they need to write in. They need to write in with pictures if you have that many, and I definitely need to see this, because that's something. Also, what are you running that requires that many servers? I'm not even going to judge; I just want to see it. Yeah, that is pretty cool. So once you have all of that, one of the questions that comes up is ISO storage. And this is where there's a little bit of a hang-up, and I'll comment that I wish there were an easier way to do this. The reason you need good ISO management is, you're like, all right, I have an ISO and I want to load it; generally, you know, we're going to download Ubuntu and I want to load Ubuntu on there, or I want to download Debian or Rocky Linux, whichever one I choose. Now, there is a hacky way to do it, and I'll bring it up for homelab people even though it's not the official way: yes, there's a limited amount of storage within the Xen server itself where you can drop some ISOs into a folder, but they haven't made that particularly easy. The ideal and best way to set up an ISO repository is to have a storage server, and I mentioned Synology or TrueNAS; if you have one of those available, or I guess you can use Windows as a storage server as well, you can point it at a directory full of ISOs, and it will connect to it over an SMB share. So there is a way to do it, and if you spend a little time googling, there are also ways to get ISOs into the system itself if you're starting from scratch. So I'll at least mention that. It is one of the tasks I would say is a little bit challenging, because it's not like you upload the ISO to it; it doesn't have an upload ISO manager. So just beware of that particular aspect.
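For reference, here is a hedged sketch of both ISO approaches just described, done from the host CLI. The directory, share path, and credentials are placeholders, and the exact `device-config` keys are from memory of the XenServer/XCP-ng docs, so verify them against the current documentation:

```shell
# Hacky homelab option: a local directory on the host as an ISO library
mkdir -p /media/isos
xe sr-create name-label="Local ISOs" type=iso content-type=iso \
  device-config:location=/media/isos device-config:legacy_mode=true
# ...then copy ISOs into /media/isos and rescan the SR from Xen Orchestra

# Preferred option: an SMB share on a NAS as the ISO repository
# (placeholder server, share, and credentials)
xe sr-create name-label="NAS ISOs" type=iso content-type=iso shared=true \
  device-config:location=//192.168.1.50/isos \
  device-config:type=cifs \
  device-config:username=isouser \
  device-config:cifspassword=secret
```

Either way, once the ISO SR exists, every ISO in it appears in the install-media dropdown when creating a VM.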
There's plenty of documentation on how to do it, but that's definitely a little hang-up some people run into. It tripped me up at first, honestly. I got it working; I can't remember exactly what I had to do, but I got it working. I would like an easier method, though. Proxmox just announced, with their newest release, that you can just put in the URL of an ISO image and it will pull it in for you. But I'd guess PXE booting would most likely work with XCP-ng as well. Yeah, and PXE booting is supported; they have support for all those different methods. But being able to just drop in a download link, yes, that would definitely be an enhancement. And the people who need to hear that are actually in here, so I've repeated it; consider that a feature request. Yeah, there we go: our unofficial feature request, mentioned on a podcast. All right, now let's talk about Xen Orchestra and all it does. As I said, with the separation between the hypervisor and the orchestration tool, the orchestration tool goes well beyond just managing VMs or displaying a VM's status. It makes Xen, in my opinion, stand out from the other options, because people say, well, how are you backing up your Xen server, with, insert the name of their favorite backup product. And I'm like, Xen Orchestra has a backup tool in it. And they're like, what do you mean? It will do snapshots, it will do rolling backups, it will do continuous replication. There are a lot of different strategies, and I've actually got two videos just talking about the disaster recovery planning you can do around Xen. This is something we implement for our clients, because you can replicate to all the other systems Xen Orchestra is able to talk to. And I say all the other systems, not necessarily just one cluster. This is actually a really unique feature that I like, where you can load one Xen Orchestra.
And we have two different areas of our office: we have our production, which runs XCP-ng, and then we have a multitude of lab things we're doing all the time. I don't think mixing production and lab is a great idea; you don't want to oops something, and you also just want to keep those things separate. But we can take one instance of Xen Orchestra, my main Xen Orchestra, and connect it to a lab. One of the things that allows you to do is this: the lab doesn't have to know anything about the production systems, but if Xen Orchestra can talk to both, you're able to move things back and forth between them. Even from a disaster recovery standpoint, you can have a completely separate cluster, but maybe you want certain things linked or sent over there; no problem, you can just grab them and send them over. It's a really slick methodology they have for doing this. Now, that integrated backup comes in a couple of different flavors. We can back things up to another Xen server, or within the cluster of Xen servers we have. We can also create a series of files from it, as in, you can connect it to either an NFS or SMB share, and it will create its own directory structure. But it's not proprietary: it's just a series of JSON files and XVA files. It will even do a series of revisions, including when you're doing an incremental backup, so it has all the different versions, and it uses the JSON files to track where all those versions are so they can be merged back into a master version. This allows you to do really frequent backups all the way down to the file level; take those files wherever you land them, back them up offsite, and make them part of your complete disaster recovery plan. And this is all integrated right into Xen Orchestra, which of course also has its own scheduler system to drive all of this.
So you can actually build all of this in there, put it on a schedule, set up notifications, and let it run. This is actually how our backup scheme works. We have things that back up to other servers here, we have things that back up as files, and I'm not using some third-party utility; as a matter of fact, any notifications, and any failures or problems that may occur from a backup run, are all self-contained in the Xen Orchestra system. And it works reasonably fast, depending on the speed of your hardware, because it's snapshot-based. It even has an option I really like, which is to shut the server down, take the snapshot for the backup, and fire the server back up. I say it like that because of what it allows you to do if you're dealing with a database server and you go, okay, I don't really want to deal with scripting. And you should script if you need to, but if you have the option of shutting it down, it's that much easier, because the snapshot is taken from a known, all-files-closed state. You can obviously script your database servers to stop at the time of the snapshot so there's no data in flight; there are a lot of little nuances there. But if you have the ability to shut it down, it's just another checkbox in Xen Orchestra: it brings the server down, and the moment it hits zero, as in off, it snapshots it and tells it to fire right back up. In the background, it finishes the backup from that powered-off snapshot while the server is back up and running. So you end up with the absolute minimal amount of server downtime; on a fast system, I mean, Linux boots up really fast, so it becomes kind of a no-brainer for us to pick a time when things aren't in use. For things with a lot of databases, we shut them down: go through the shutdown sequence, bring it to zero, do the snapshot, bring it back up. It was down for all of about 30 seconds.
No big deal, and I have a perfectly offline, no-files-open, no-hangups snapshot. So that's a little bit about the backup system. Now, other features that go a lot further with Xen Orchestra include the ability to create virtual LANs, and they can span a series of systems or live entirely within one. This gives you an interesting use case, especially for my friends who now have these in place for their security research. Security research people need to create very locked-down private network environments. Xen Orchestra, coordinating with the Xen server, allows you to create a series of backend networks. These backend networks then allow you to say, all right, we're going to put in, let's say, a pfSense firewall and attach it to one of the network interfaces. Then we're going to create a series of virtual interfaces that have no physical uplink; there's no physical way to reach them, because they only exist virtually. And then we build a series of VMs behind the pfSense. This is a great way to create things like malware research labs: there's no actual connection, because the connection is virtual and sits behind a pfSense that's also virtual, so you can very tightly control any ingress or egress of data for monitoring purposes, and the different machines you create can all talk to each other at speed. Another use case we've seen, in data centers, is really interesting, and Jay has actually run into this, because we have a mutual, friend's not quite the right word, as you said yesterday, but a mutual client that had decided exposing databases was a good idea. The idea is, when you have your own colo and you build an XCP-ng system, maybe you use a physical firewall or a virtual one, that's up to you, but you can then create all the different VMs that run your stack.
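Before continuing with that colo example: the locked-down backend network described above can be sketched from the host CLI in a few commands. A network created without any physical interface attached exists only inside the pool, which is exactly the no-physical-uplink property the malware-lab setup relies on. The UUIDs are placeholders, and `mac=random` is the documented way to let XAPI pick an address:

```shell
# Create an internal network with no physical uplink (exists only virtually)
xe network-create name-label="malware-lab-private"

# Look up the new network's UUID
xe network-list name-label="malware-lab-private" --minimal

# Give a VM (e.g. the virtual pfSense, by placeholder UUID) an interface on it
xe vif-create vm-uuid=<VM_UUID> network-uuid=<NET_UUID> device=1 mac=random
xe vif-plug uuid=<VIF_UUID>
```

Repeat the `vif-create` for each lab VM, and all of them share a private segment whose only way out is whatever the firewall VM allows.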
Maybe you have a separate proxy stack and a separate database server stack. Hold on, I can't tell what's ringing. Sorry about that — I didn't realize my browser rings. What this allows you to do, though, is set up an XCP-ng host, take all the virtual servers behind it, and have them all talking to each other at high speed. They're actually talking through a virtual network fabric that you set up. This lets you build some really cool but also locked-down infrastructure without exposing your database server. So instead of trying to assign all your database servers public IPs alongside your other servers, you can build all these separate virtual machines — which for a lot of reasons probably need to control their own ingress and egress through the firewall — and they can all talk to each other on a private network that no one can tap into unless they are also inside that XCP-ng machine. From an architectural standpoint, it's a pretty sensible system when you start building things like that. It also gives you really fast communication because they're all on the same host — well, even if they're on different servers, you're only limited by the link you have between them. But it really is pretty slick in terms of being able to build out that type of infrastructure. The next thing I wanted to mention was scheduling. Sorry, I got a little distracted reading some of the comments, and someone asked about the scheduler. XCP-ng and Xen Orchestra also have the ability to do scheduling. What I mean by scheduling is that you can actually pick when you want VMs to run on which servers and migrate them between hosts. This is all, once again, controlled by Xen Orchestra, which orchestrates being able to do this.
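The migration step itself is a single `xe` command on the host; Xen Orchestra's scheduler essentially decides when to issue it. As a hedged sketch — the exact selector syntax varies slightly across XenServer/XCP-ng versions, and the VM and host names below are placeholders:

```python
def live_migrate_command(vm: str, dest_host: str) -> str:
    """Build a live-migration command: with shared storage, the VM keeps
    running while its memory is moved to the destination pool member."""
    return f"xe vm-migrate vm={vm} host={dest_host} --live"

print(live_migrate_command("web01", "xcp-host-2"))
```

In Xen Orchestra you'd express the same thing as a scheduled job rather than a raw command, but it helps to know what the job ultimately does.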
So it gives you that kind of flexibility. Jay and I were talking about a project where he wants to power off servers. If you go all-in with Xen Orchestra, you could set up multiple servers, have VMs migrate on a schedule to whichever servers are powered on, do the work you want to do, have them scheduled to migrate back over to another server, and even power things down. Being able to put all of that into automation, from an IT standpoint, makes life so much easier. And of course, as you can probably tell, this is one of the things I'm excited about with Xen: the levels of automation you can get in there. What do you think, Jay? Yeah, I think that's great. Automating things like which server a VM is running on takes you to a whole new level. When it comes to a home lab — unless you're running a 24/7 operation in your home lab, which you're probably not; I assume you sleep at some point — you can just have the server and everything shut down overnight and then use wake-on-LAN to kick everything back up when you wake up, or whatever time you normally get up, to save on your power bill. And that's something I don't think a lot of people really think about. And not just home labs — imagine the companies out there that are an eight-to-five operation. I'm not counting their external-facing website, but just shutting down the things you're not using — servers, VMs, whatever — saves power. I think that's a good idea, and if automation helps you achieve it, that's a great thing. Yeah. The one thing I will mention, though, is a home lab question that comes up a lot, because home labbers are trying to consolidate everything as much as they can into one box: it is a little bit trickier to do hardware passthrough.
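The wake-on-LAN half of that power schedule is simple enough to script yourself: a "magic packet" is just six 0xFF bytes followed by the target MAC address repeated sixteen times, sent as a UDP broadcast. A minimal sketch (the MAC address below is a placeholder, and the NIC must have WoL enabled in its firmware):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a wake-on-LAN magic packet: 6 x 0xFF, then the MAC x 16."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet on the local segment."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# A cron entry at, say, 07:00 could call send_wol("aa:bb:cc:dd:ee:ff")
```

Pair that with a scheduled shutdown in the evening and the overnight power-saving setup described above is complete.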
There are other systems that maybe make it easier, maybe just put a web UI on it. If you want to do hardware passthrough — whether you're passing through a hard drive, a controller card, or a video card because you want to do some gaming — it is a little bit tricky to set up. It's pretty much command-line driven: finding the PCI addresses and how they're assigned, and then allowing that hardware to be passed through to a specific VM. Now, if you don't need full passthrough — and there are cards that support this, both networking and video cards — you can look for cards that support virtualizing the card itself, with SR-IOV. There is support for that in XCP-ng, but it also means you specifically need a card that does it. And the video cards that do it are expensive — substantially more than a standard NVIDIA graphics card — and probably even more so given the whole 2021 video card situation. So depending on when you're listening to this, you're either going, oh, I remember those days, or maybe those days are worse in the future. I'm not sure yet. We just know video cards are sold out. Last time I looked, they were at roughly 200% of MSRP, sometimes more. I paid $500 for a 1660 Ti — I don't remember what the MSRP is for that — for a video project, and I was lucky to even see one for sale. It was just a happenstance kind of thing. I'm like, oh my God, a video card on the shelf, for sale. Wow. That's amazing. Yeah. But overall — a couple more details about XCP-ng — what about importing things into it? I've done a separate video on this topic, because obviously I may have gotten some people excited in this video or podcast about XCP-ng, but the pain of switching is always there.
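To make the passthrough point above concrete: the usual XCP-ng approach is to hide the device from dom0 so the control domain releases it, reboot, and then assign the device to one VM. This sketch only builds the command strings as an illustration — the PCI address and VM UUID are placeholders, and you should verify the exact steps against the XCP-ng documentation for your version before running anything.

```python
def passthrough_commands(pci_addr: str, vm_uuid: str) -> list[str]:
    """Sketch of the typical XCP-ng PCI passthrough steps: hide the device
    from dom0 (requires a host reboot), then hand it to a single VM."""
    return [
        f'/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=({pci_addr})"',
        "reboot",  # dom0 must release the device before a guest can claim it
        f"xe vm-param-set other-config:pci=0/{pci_addr} uuid={vm_uuid}",
    ]

for cmd in passthrough_commands("0000:04:00.0", "example-vm-uuid"):
    print(cmd)
```

You'd find the PCI address itself with `lspci` on the host first.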
One of the things I think people don't take the time to look at — and this is why I did a video specifically on CloneZilla — is that you don't necessarily need compatibility between whatever export comes out of one hypervisor and the import formats XCP-ng accepts. One of the things I've always preached when it comes to importing: use cloning tools. This works great for Windows with whatever your favorite Windows cloning tool is. For Linux, I always prefer CloneZilla — I've been a huge fan of it, and I've imported quite a few machines right over with it. You just fire up CloneZilla as a boot disk on your outgoing hypervisor, fire up CloneZilla on a VM in XCP-ng, and literally just say: clone there. There we go. And it makes a huge difference. It's not that hard to do, and it can remove some of the pain of getting things switched over. That's probably the last piece I have for that. What questions do you have on XCP-ng? I think I've covered pretty much the gamut, other than that I have a lot of videos that break things down with actual visual tutorials that can't really be covered in this podcast. So — I used CloneZilla for both Linux and Windows machines quite heavily. I don't know how many people know this, but I started in kind of a Windows help desk environment in my early days. You were careful to mention Linux when it comes to CloneZilla — were there any problems you ran into with CloneZilla and Windows? Sometimes Windows seems less happy with it. I remember trying it, and I don't remember exactly what the problems were. A lot of times, when we move clients, we use the tool Windows actually has that does P2V — I think I have a separate video on that. The Windows P2V tool seemed to work better than the other tools. The other thing is that we back up using the backup tools we've traditionally had.
When we've brought Windows in — once again doing a P2V — we had some clients whose old systems we had to virtualize for legacy reasons, and we just used the backup tools we had and restored onto XenServer, which actually worked really, really well. Of note — and I'll call out VMware specifically here, though it may apply to other hypervisors too — if you're bringing something in from VMware, before you clone it, please remove the VMware Tools. I have seen them cause some real hangups, specifically in Windows. That was the one challenge we had: before we brought something out of VMware, we actually uninstalled the VMware Tools, backed it up, and then reinstalled them, because we had to keep the machine temporarily on the VMware server while we finished the migration. Sometimes you run into little things like that, but for the most part, a stock machine is fine. The other thing of note: if you're not familiar with Linux and what happens when the network interfaces no longer match the old names, you will have to go — depending on your flavor of Linux, usually to /etc/network — and modify the names accordingly, because the interfaces may not be named the same thing inside XCP-ng as they were on your physical box or your previous hypervisor. It's just a matter of switching them, because I think the most frequent question I've seen is: hey, I migrated from another machine and now my network interface doesn't work, what happened? And it's just the fact that it has a new name. The config is looking for eth0 or something else now, and that naming scheme is something you'll have to fix up. Yep, that is totally true.
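For the Debian-style /etc/network/interfaces case mentioned above, the fix is just a whole-word rename of the NIC in the config file. A small sketch — the interface names here are examples, and other distros keep this configuration elsewhere (netplan, NetworkManager, and so on):

```python
import re

def rename_interface(config_text: str, old: str, new: str) -> str:
    """Replace whole-word occurrences of a NIC name (e.g. ens192 -> eth0)
    in an ifupdown-style config such as /etc/network/interfaces."""
    return re.sub(rf"\b{re.escape(old)}\b", new, config_text)

sample = "auto ens192\niface ens192 inet dhcp\n"
print(rename_interface(sample, "ens192", "eth0"))
# prints:
# auto eth0
# iface eth0 inet dhcp
```

In practice you'd check the new name with `ip link` inside the migrated VM first, then edit the file and restart networking.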
Going back to the Windows thing: I've only used CloneZilla to move Windows machines, and I also ran into some problems, but they're not hard to get around. So in case anybody needs to know — I'm not going to go over a highly detailed list of steps, but to summarize, what's worked for me is this. I take a CloneZilla backup before I change anything, so that if I screw it all up, I can go back. Then, after that first CloneZilla image, I go into the Windows install and sysprep it. That should remove any machine-specific things. I also go into the list of installed programs and remove everything that's specific to the old hypervisor. You already mentioned the guest tools — so if you have the tools installed for whatever your previous hypervisor was, remove those as well. Anything driver-specific, remove that too. Sysprep it, shut it down, image it, and restore the image on the target, and there really shouldn't be any reason it won't work. The only other issue I ran into — I can't remember what Windows calls it, it's almost like a UUID of some kind, the machine SID, I believe — you have to reset that, because something in the domain controller thought every Windows server was the same even though it's not. I don't know if that's been fixed; that was probably four or five years ago. Whatever it is, I'm sure Windows admins will know: make sure to delete that too, or change it, or zero it out, whatever you have to do. There really shouldn't be any reason CloneZilla won't work as long as you at least sysprep the machine. If you don't sysprep it, you're probably asking for a blue screen of death. Yeah. So that's a high-level — or low-level, whatever — summary of the steps required. For anyone who wants to move to XCP-ng, CloneZilla is probably the best way to do it.
And as an aside, I've also used CloneZilla for file recovery, because if you go into advanced mode — let's say you can't fully read from your hard drive — there's actually a checkbox where, if it encounters a bad sector, it'll keep going. And depending on where that bad sector is and what data was there, it might actually trigger a check disk in Windows on the next startup, which could, if luck is on your side, actually fix everything and allow you to restore and repair an entire Windows install in the process. So there's a lot you can do with CloneZilla that may not be immediately obvious. Yeah. And when it comes to support — because I see some questions flying through the chat — this is an important thing: the forums are very, very active. I try to be somewhat active in the XCP-ng forums, but I will tell you, members of the XCP-ng team have even done some good write-ups; they solved some type of BSD problem and did a nice write-up on it. The engagement you get in the forums, from both the community of people using Xen and the developers themselves, is great. It's one of the things you've got to look at in the overall scope of any open source project: can I get some support, and is there good documentation? They've done a good job on documentation — there's a lot there. And if you feel something's missing, please contribute back. I've done my part quite a bit by offering different videos and tutorials on it. There's definitely a lot there now, so you can figure out a lot of the things I've been talking about. For example, if you wanted to build from source, boy, they give you step-by-step instructions on how to build the Xen Orchestra tools from source. They give you a lot of detail.
And I mentioned this in the previous episode: sometimes doing it all by hand is a great learning experience in itself. You don't want to learn just how to run a script and deploy — that's great from an "I want to get started" perspective — but if you want to learn how all this stuff works on the back end, not only can you look at the source code, they've also written great instructions for doing it yourself. And back to their forums: lots of engagement is the best way I can describe it. If you have an edge case or an idea, they're very open to reasonable, scalable ideas. That's how this product has become so good, so fast — that community engagement back and forth. I'll give a big shout-out in general to the whole Vates team and all the developers on XCP-ng; they've been good about that, which is wonderful. That's a checkbox you really want to see checked when you're looking at an open source project: is there a well-engaged community? And I'm going to say an absolute yes to that. Yep. And I can see the passion in the community as well — a lot of the comments in the live stream chat reflect that too. So yeah, a lot of good things to say. Yeah. So that wraps us up for this episode on XCP-ng. I'll be leaving links, of course — I have a playlist, and I'll leave links to the project and the documentation, so you can read through all of this. The goal here was really just to give you a big-picture overview, maybe introduce a few more people to this platform, and add one more data point to your decision of which hypervisor to choose. I would definitely take a long, hard look at this one. I'm a huge fan — as an advocate I'd just say yes — but weigh your circumstances and everything else. I don't try to dictate what people should use, but I will give my opinion that I think this is a great project.
All right, I'm Lawrence here, and I'll talk to you guys next time. We'll record again next Wednesday. Yep, me and Jay. All right. Thank you for joining us, and thanks to Linode for sponsoring. Appreciate it. Thanks.